Apr 12 18:28:28.021898 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 12 18:28:28.021917 kernel: Linux version 5.15.154-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Apr 12 17:21:24 -00 2024
Apr 12 18:28:28.021926 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Apr 12 18:28:28.021935 kernel: printk: bootconsole [pl11] enabled
Apr 12 18:28:28.021941 kernel: efi: EFI v2.70 by EDK II
Apr 12 18:28:28.021946 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef2e698 RNG=0x3fd89998 MEMRESERVE=0x37b33f98
Apr 12 18:28:28.021953 kernel: random: crng init done
Apr 12 18:28:28.021959 kernel: ACPI: Early table checksum verification disabled
Apr 12 18:28:28.021964 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Apr 12 18:28:28.021970 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 12 18:28:28.021978 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 12 18:28:28.021985 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Apr 12 18:28:28.021991 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 12 18:28:28.021996 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 12 18:28:28.022003 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 12 18:28:28.022009 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 12 18:28:28.022015 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 12 18:28:28.022023 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 12 18:28:28.022031 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Apr 12 18:28:28.022037 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 12 18:28:28.022043 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Apr 12 18:28:28.022049 kernel: NUMA: Failed to initialise from firmware
Apr 12 18:28:28.022055 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Apr 12 18:28:28.022061 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff]
Apr 12 18:28:28.022067 kernel: Zone ranges:
Apr 12 18:28:28.022072 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Apr 12 18:28:28.022080 kernel: DMA32 empty
Apr 12 18:28:28.022088 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Apr 12 18:28:28.022094 kernel: Movable zone start for each node
Apr 12 18:28:28.022100 kernel: Early memory node ranges
Apr 12 18:28:28.022105 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Apr 12 18:28:28.022111 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Apr 12 18:28:28.022117 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Apr 12 18:28:28.022125 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Apr 12 18:28:28.022131 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Apr 12 18:28:28.022137 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Apr 12 18:28:28.022143 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Apr 12 18:28:28.022149 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Apr 12 18:28:28.022155 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Apr 12 18:28:28.022162 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Apr 12 18:28:28.022171 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Apr 12 18:28:28.022177 kernel: psci: probing for conduit method from ACPI.
Apr 12 18:28:28.022186 kernel: psci: PSCIv1.1 detected in firmware.
Apr 12 18:28:28.022194 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 12 18:28:28.022202 kernel: psci: MIGRATE_INFO_TYPE not supported.
Apr 12 18:28:28.022208 kernel: psci: SMC Calling Convention v1.4
Apr 12 18:28:28.022215 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Apr 12 18:28:28.022221 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Apr 12 18:28:28.022228 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Apr 12 18:28:28.022234 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Apr 12 18:28:28.022241 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 12 18:28:28.022247 kernel: Detected PIPT I-cache on CPU0
Apr 12 18:28:28.022256 kernel: CPU features: detected: GIC system register CPU interface
Apr 12 18:28:28.022262 kernel: CPU features: detected: Hardware dirty bit management
Apr 12 18:28:28.022269 kernel: CPU features: detected: Spectre-BHB
Apr 12 18:28:28.022275 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 12 18:28:28.022283 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 12 18:28:28.022289 kernel: CPU features: detected: ARM erratum 1418040
Apr 12 18:28:28.022295 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Apr 12 18:28:28.022304 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Apr 12 18:28:28.022310 kernel: Policy zone: Normal
Apr 12 18:28:28.022318 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c0b96868344262519ffdb2dae3782c942008a0fecdbc0bc85d2e170bd2e8b8a8
Apr 12 18:28:28.022325 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 12 18:28:28.022331 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 12 18:28:28.022338 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 12 18:28:28.022347 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 12 18:28:28.022354 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Apr 12 18:28:28.022361 kernel: Memory: 3990264K/4194160K available (9792K kernel code, 2092K rwdata, 7568K rodata, 36352K init, 777K bss, 203896K reserved, 0K cma-reserved)
Apr 12 18:28:28.022367 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 12 18:28:28.022373 kernel: trace event string verifier disabled
Apr 12 18:28:28.022380 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 12 18:28:28.022388 kernel: rcu: RCU event tracing is enabled.
Apr 12 18:28:28.022395 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 12 18:28:28.022401 kernel: Trampoline variant of Tasks RCU enabled.
Apr 12 18:28:28.022408 kernel: Tracing variant of Tasks RCU enabled.
Apr 12 18:28:28.022414 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 12 18:28:28.022421 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 12 18:28:28.022431 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 12 18:28:28.022437 kernel: GICv3: 960 SPIs implemented
Apr 12 18:28:28.022443 kernel: GICv3: 0 Extended SPIs implemented
Apr 12 18:28:28.022471 kernel: GICv3: Distributor has no Range Selector support
Apr 12 18:28:28.022479 kernel: Root IRQ handler: gic_handle_irq
Apr 12 18:28:28.022488 kernel: GICv3: 16 PPIs implemented
Apr 12 18:28:28.022495 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Apr 12 18:28:28.022501 kernel: ITS: No ITS available, not enabling LPIs
Apr 12 18:28:28.022508 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 12 18:28:28.022514 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 12 18:28:28.022521 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 12 18:28:28.022527 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 12 18:28:28.022542 kernel: Console: colour dummy device 80x25
Apr 12 18:28:28.022549 kernel: printk: console [tty1] enabled
Apr 12 18:28:28.022555 kernel: ACPI: Core revision 20210730
Apr 12 18:28:28.022562 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 12 18:28:28.022569 kernel: pid_max: default: 32768 minimum: 301
Apr 12 18:28:28.022575 kernel: LSM: Security Framework initializing
Apr 12 18:28:28.022581 kernel: SELinux: Initializing.
Apr 12 18:28:28.022591 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 12 18:28:28.022597 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 12 18:28:28.022605 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Apr 12 18:28:28.022612 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Apr 12 18:28:28.022618 kernel: rcu: Hierarchical SRCU implementation.
Apr 12 18:28:28.022625 kernel: Remapping and enabling EFI services.
Apr 12 18:28:28.022646 kernel: smp: Bringing up secondary CPUs ...
Apr 12 18:28:28.022652 kernel: Detected PIPT I-cache on CPU1
Apr 12 18:28:28.022659 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Apr 12 18:28:28.022666 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 12 18:28:28.022672 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 12 18:28:28.022680 kernel: smp: Brought up 1 node, 2 CPUs
Apr 12 18:28:28.022687 kernel: SMP: Total of 2 processors activated.
Apr 12 18:28:28.022693 kernel: CPU features: detected: 32-bit EL0 Support
Apr 12 18:28:28.022704 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Apr 12 18:28:28.022711 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 12 18:28:28.022722 kernel: CPU features: detected: CRC32 instructions
Apr 12 18:28:28.022729 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 12 18:28:28.022735 kernel: CPU features: detected: LSE atomic instructions
Apr 12 18:28:28.022742 kernel: CPU features: detected: Privileged Access Never
Apr 12 18:28:28.022754 kernel: CPU: All CPU(s) started at EL1
Apr 12 18:28:28.022761 kernel: alternatives: patching kernel code
Apr 12 18:28:28.022775 kernel: devtmpfs: initialized
Apr 12 18:28:28.022784 kernel: KASLR enabled
Apr 12 18:28:28.022791 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 12 18:28:28.022798 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 12 18:28:28.022804 kernel: pinctrl core: initialized pinctrl subsystem
Apr 12 18:28:28.022811 kernel: SMBIOS 3.1.0 present.
Apr 12 18:28:28.022822 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023
Apr 12 18:28:28.022830 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 12 18:28:28.022838 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 12 18:28:28.022845 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 12 18:28:28.022855 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 12 18:28:28.022863 kernel: audit: initializing netlink subsys (disabled)
Apr 12 18:28:28.022869 kernel: audit: type=2000 audit(0.087:1): state=initialized audit_enabled=0 res=1
Apr 12 18:28:28.022876 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 12 18:28:28.022883 kernel: cpuidle: using governor menu
Apr 12 18:28:28.022895 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 12 18:28:28.022902 kernel: ASID allocator initialised with 32768 entries
Apr 12 18:28:28.022908 kernel: ACPI: bus type PCI registered
Apr 12 18:28:28.022918 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 12 18:28:28.022926 kernel: Serial: AMBA PL011 UART driver
Apr 12 18:28:28.022932 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Apr 12 18:28:28.022943 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Apr 12 18:28:28.022950 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Apr 12 18:28:28.022957 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Apr 12 18:28:28.022965 kernel: cryptd: max_cpu_qlen set to 1000
Apr 12 18:28:28.022980 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 12 18:28:28.022987 kernel: ACPI: Added _OSI(Module Device)
Apr 12 18:28:28.022993 kernel: ACPI: Added _OSI(Processor Device)
Apr 12 18:28:28.023000 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 12 18:28:28.023007 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 12 18:28:28.023014 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Apr 12 18:28:28.023021 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Apr 12 18:28:28.023027 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Apr 12 18:28:28.023036 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 12 18:28:28.023043 kernel: ACPI: Interpreter enabled
Apr 12 18:28:28.023050 kernel: ACPI: Using GIC for interrupt routing
Apr 12 18:28:28.023057 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Apr 12 18:28:28.023063 kernel: printk: console [ttyAMA0] enabled
Apr 12 18:28:28.023070 kernel: printk: bootconsole [pl11] disabled
Apr 12 18:28:28.023077 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Apr 12 18:28:28.023084 kernel: iommu: Default domain type: Translated
Apr 12 18:28:28.023090 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 12 18:28:28.023098 kernel: vgaarb: loaded
Apr 12 18:28:28.023105 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 12 18:28:28.023112 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 12 18:28:28.023119 kernel: PTP clock support registered
Apr 12 18:28:28.023126 kernel: Registered efivars operations
Apr 12 18:28:28.023133 kernel: No ACPI PMU IRQ for CPU0
Apr 12 18:28:28.023139 kernel: No ACPI PMU IRQ for CPU1
Apr 12 18:28:28.023146 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 12 18:28:28.023153 kernel: VFS: Disk quotas dquot_6.6.0
Apr 12 18:28:28.023161 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 12 18:28:28.023168 kernel: pnp: PnP ACPI init
Apr 12 18:28:28.023174 kernel: pnp: PnP ACPI: found 0 devices
Apr 12 18:28:28.023181 kernel: NET: Registered PF_INET protocol family
Apr 12 18:28:28.023188 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 12 18:28:28.023195 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 12 18:28:28.023202 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 12 18:28:28.023209 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 12 18:28:28.023216 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Apr 12 18:28:28.023224 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 12 18:28:28.023231 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 12 18:28:28.023238 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 12 18:28:28.023245 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 12 18:28:28.023252 kernel: PCI: CLS 0 bytes, default 64
Apr 12 18:28:28.023258 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Apr 12 18:28:28.023265 kernel: kvm [1]: HYP mode not available
Apr 12 18:28:28.023272 kernel: Initialise system trusted keyrings
Apr 12 18:28:28.023279 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 12 18:28:28.023287 kernel: Key type asymmetric registered
Apr 12 18:28:28.023293 kernel: Asymmetric key parser 'x509' registered
Apr 12 18:28:28.023300 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Apr 12 18:28:28.023307 kernel: io scheduler mq-deadline registered
Apr 12 18:28:28.023314 kernel: io scheduler kyber registered
Apr 12 18:28:28.023320 kernel: io scheduler bfq registered
Apr 12 18:28:28.023327 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 12 18:28:28.023334 kernel: thunder_xcv, ver 1.0
Apr 12 18:28:28.023340 kernel: thunder_bgx, ver 1.0
Apr 12 18:28:28.023348 kernel: nicpf, ver 1.0
Apr 12 18:28:28.023355 kernel: nicvf, ver 1.0
Apr 12 18:28:28.023492 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 12 18:28:28.023560 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-04-12T18:28:27 UTC (1712946507)
Apr 12 18:28:28.023570 kernel: efifb: probing for efifb
Apr 12 18:28:28.023577 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 12 18:28:28.023584 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 12 18:28:28.023591 kernel: efifb: scrolling: redraw
Apr 12 18:28:28.023601 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 12 18:28:28.023608 kernel: Console: switching to colour frame buffer device 128x48
Apr 12 18:28:28.023615 kernel: fb0: EFI VGA frame buffer device
Apr 12 18:28:28.023622 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Apr 12 18:28:28.023629 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 12 18:28:28.023636 kernel: NET: Registered PF_INET6 protocol family
Apr 12 18:28:28.023642 kernel: Segment Routing with IPv6
Apr 12 18:28:28.023649 kernel: In-situ OAM (IOAM) with IPv6
Apr 12 18:28:28.023656 kernel: NET: Registered PF_PACKET protocol family
Apr 12 18:28:28.023664 kernel: Key type dns_resolver registered
Apr 12 18:28:28.023670 kernel: registered taskstats version 1
Apr 12 18:28:28.023677 kernel: Loading compiled-in X.509 certificates
Apr 12 18:28:28.023684 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.154-flatcar: 8c258d82bbd8df4a9da2c0ea4108142f04be6b34'
Apr 12 18:28:28.023691 kernel: Key type .fscrypt registered
Apr 12 18:28:28.023698 kernel: Key type fscrypt-provisioning registered
Apr 12 18:28:28.023705 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 12 18:28:28.023712 kernel: ima: Allocated hash algorithm: sha1
Apr 12 18:28:28.023718 kernel: ima: No architecture policies found
Apr 12 18:28:28.023726 kernel: Freeing unused kernel memory: 36352K
Apr 12 18:28:28.023733 kernel: Run /init as init process
Apr 12 18:28:28.023740 kernel: with arguments:
Apr 12 18:28:28.023746 kernel: /init
Apr 12 18:28:28.023753 kernel: with environment:
Apr 12 18:28:28.023759 kernel: HOME=/
Apr 12 18:28:28.023766 kernel: TERM=linux
Apr 12 18:28:28.023773 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 12 18:28:28.023782 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Apr 12 18:28:28.023792 systemd[1]: Detected virtualization microsoft.
Apr 12 18:28:28.023800 systemd[1]: Detected architecture arm64.
Apr 12 18:28:28.023807 systemd[1]: Running in initrd.
Apr 12 18:28:28.023814 systemd[1]: No hostname configured, using default hostname.
Apr 12 18:28:28.023821 systemd[1]: Hostname set to .
Apr 12 18:28:28.023829 systemd[1]: Initializing machine ID from random generator.
Apr 12 18:28:28.023836 systemd[1]: Queued start job for default target initrd.target.
Apr 12 18:28:28.023845 systemd[1]: Started systemd-ask-password-console.path.
Apr 12 18:28:28.023852 systemd[1]: Reached target cryptsetup.target.
Apr 12 18:28:28.023859 systemd[1]: Reached target paths.target.
Apr 12 18:28:28.023867 systemd[1]: Reached target slices.target.
Apr 12 18:28:28.023874 systemd[1]: Reached target swap.target.
Apr 12 18:28:28.023881 systemd[1]: Reached target timers.target.
Apr 12 18:28:28.023889 systemd[1]: Listening on iscsid.socket.
Apr 12 18:28:28.023896 systemd[1]: Listening on iscsiuio.socket.
Apr 12 18:28:28.023905 systemd[1]: Listening on systemd-journald-audit.socket.
Apr 12 18:28:28.023912 systemd[1]: Listening on systemd-journald-dev-log.socket.
Apr 12 18:28:28.023920 systemd[1]: Listening on systemd-journald.socket.
Apr 12 18:28:28.023927 systemd[1]: Listening on systemd-networkd.socket.
Apr 12 18:28:28.023934 systemd[1]: Listening on systemd-udevd-control.socket.
Apr 12 18:28:28.023942 systemd[1]: Listening on systemd-udevd-kernel.socket.
Apr 12 18:28:28.023949 systemd[1]: Reached target sockets.target.
Apr 12 18:28:28.023956 systemd[1]: Starting kmod-static-nodes.service...
Apr 12 18:28:28.023964 systemd[1]: Finished network-cleanup.service.
Apr 12 18:28:28.023972 systemd[1]: Starting systemd-fsck-usr.service...
Apr 12 18:28:28.023980 systemd[1]: Starting systemd-journald.service...
Apr 12 18:28:28.023987 systemd[1]: Starting systemd-modules-load.service...
Apr 12 18:28:28.023994 systemd[1]: Starting systemd-resolved.service...
Apr 12 18:28:28.024001 systemd[1]: Starting systemd-vconsole-setup.service...
Apr 12 18:28:28.024013 systemd-journald[236]: Journal started
Apr 12 18:28:28.024052 systemd-journald[236]: Runtime Journal (/run/log/journal/f66b621bdd5740679a722b48df05c54a) is 8.0M, max 78.6M, 70.6M free.
Apr 12 18:28:28.004493 systemd-modules-load[237]: Inserted module 'overlay'
Apr 12 18:28:28.043753 systemd[1]: Started systemd-journald.service.
Apr 12 18:28:28.044719 systemd-resolved[238]: Positive Trust Anchors:
Apr 12 18:28:28.044736 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 12 18:28:28.094848 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 12 18:28:28.094871 kernel: audit: type=1130 audit(1712946508.070:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.044764 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Apr 12 18:28:28.168769 kernel: audit: type=1130 audit(1712946508.101:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.168796 kernel: Bridge firewalling registered
Apr 12 18:28:28.168806 kernel: SCSI subsystem initialized
Apr 12 18:28:28.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.046967 systemd-resolved[238]: Defaulting to hostname 'linux'.
Apr 12 18:28:28.094880 systemd[1]: Started systemd-resolved.service.
Apr 12 18:28:28.102172 systemd[1]: Finished kmod-static-nodes.service.
Apr 12 18:28:28.155523 systemd-modules-load[237]: Inserted module 'br_netfilter'
Apr 12 18:28:28.246983 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 12 18:28:28.247008 kernel: device-mapper: uevent: version 1.0.3
Apr 12 18:28:28.247045 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Apr 12 18:28:28.247071 kernel: audit: type=1130 audit(1712946508.204:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.247081 kernel: audit: type=1130 audit(1712946508.225:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.205127 systemd-modules-load[237]: Inserted module 'dm_multipath'
Apr 12 18:28:28.277908 kernel: audit: type=1130 audit(1712946508.253:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.205196 systemd[1]: Finished systemd-fsck-usr.service.
Apr 12 18:28:28.226347 systemd[1]: Finished systemd-modules-load.service.
Apr 12 18:28:28.326633 kernel: audit: type=1130 audit(1712946508.283:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.253808 systemd[1]: Finished systemd-vconsole-setup.service.
Apr 12 18:28:28.283887 systemd[1]: Reached target nss-lookup.target.
Apr 12 18:28:28.311332 systemd[1]: Starting dracut-cmdline-ask.service...
Apr 12 18:28:28.322926 systemd[1]: Starting systemd-sysctl.service...
Apr 12 18:28:28.332533 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Apr 12 18:28:28.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.354816 systemd[1]: Finished dracut-cmdline-ask.service.
Apr 12 18:28:28.384031 systemd[1]: Finished systemd-sysctl.service.
Apr 12 18:28:28.393033 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Apr 12 18:28:28.440240 kernel: audit: type=1130 audit(1712946508.362:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.440270 kernel: audit: type=1130 audit(1712946508.392:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.440279 kernel: audit: type=1130 audit(1712946508.419:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.424225 systemd[1]: Starting dracut-cmdline.service...
Apr 12 18:28:28.451330 dracut-cmdline[258]: dracut-dracut-053
Apr 12 18:28:28.456666 dracut-cmdline[258]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c0b96868344262519ffdb2dae3782c942008a0fecdbc0bc85d2e170bd2e8b8a8
Apr 12 18:28:28.521486 kernel: Loading iSCSI transport class v2.0-870.
Apr 12 18:28:28.537493 kernel: iscsi: registered transport (tcp)
Apr 12 18:28:28.557813 kernel: iscsi: registered transport (qla4xxx)
Apr 12 18:28:28.557879 kernel: QLogic iSCSI HBA Driver
Apr 12 18:28:28.594668 systemd[1]: Finished dracut-cmdline.service.
Apr 12 18:28:28.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:28.599958 systemd[1]: Starting dracut-pre-udev.service...
Apr 12 18:28:28.657476 kernel: raid6: neonx8 gen() 13809 MB/s
Apr 12 18:28:28.673471 kernel: raid6: neonx8 xor() 10829 MB/s
Apr 12 18:28:28.693469 kernel: raid6: neonx4 gen() 13565 MB/s
Apr 12 18:28:28.714465 kernel: raid6: neonx4 xor() 11293 MB/s
Apr 12 18:28:28.734462 kernel: raid6: neonx2 gen() 12965 MB/s
Apr 12 18:28:28.754488 kernel: raid6: neonx2 xor() 10348 MB/s
Apr 12 18:28:28.775474 kernel: raid6: neonx1 gen() 10536 MB/s
Apr 12 18:28:28.795463 kernel: raid6: neonx1 xor() 8790 MB/s
Apr 12 18:28:28.815495 kernel: raid6: int64x8 gen() 6275 MB/s
Apr 12 18:28:28.836478 kernel: raid6: int64x8 xor() 3540 MB/s
Apr 12 18:28:28.856473 kernel: raid6: int64x4 gen() 7212 MB/s
Apr 12 18:28:28.876473 kernel: raid6: int64x4 xor() 3856 MB/s
Apr 12 18:28:28.897464 kernel: raid6: int64x2 gen() 6150 MB/s
Apr 12 18:28:28.917462 kernel: raid6: int64x2 xor() 3317 MB/s
Apr 12 18:28:28.937475 kernel: raid6: int64x1 gen() 5047 MB/s
Apr 12 18:28:28.962240 kernel: raid6: int64x1 xor() 2645 MB/s
Apr 12 18:28:28.962256 kernel: raid6: using algorithm neonx8 gen() 13809 MB/s
Apr 12 18:28:28.962265 kernel: raid6: .... xor() 10829 MB/s, rmw enabled
Apr 12 18:28:28.967501 kernel: raid6: using neon recovery algorithm
Apr 12 18:28:28.984465 kernel: xor: measuring software checksum speed
Apr 12 18:28:28.992543 kernel: 8regs : 17308 MB/sec
Apr 12 18:28:28.992555 kernel: 32regs : 20749 MB/sec
Apr 12 18:28:28.996625 kernel: arm64_neon : 27873 MB/sec
Apr 12 18:28:28.996645 kernel: xor: using function: arm64_neon (27873 MB/sec)
Apr 12 18:28:29.056469 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Apr 12 18:28:29.066163 systemd[1]: Finished dracut-pre-udev.service.
Apr 12 18:28:29.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:29.074000 audit: BPF prog-id=7 op=LOAD
Apr 12 18:28:29.074000 audit: BPF prog-id=8 op=LOAD
Apr 12 18:28:29.075207 systemd[1]: Starting systemd-udevd.service...
Apr 12 18:28:29.092310 systemd-udevd[435]: Using default interface naming scheme 'v252'.
Apr 12 18:28:29.097320 systemd[1]: Started systemd-udevd.service.
Apr 12 18:28:29.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:29.110152 systemd[1]: Starting dracut-pre-trigger.service...
Apr 12 18:28:29.123549 dracut-pre-trigger[456]: rd.md=0: removing MD RAID activation
Apr 12 18:28:29.152800 systemd[1]: Finished dracut-pre-trigger.service.
Apr 12 18:28:29.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:29.158283 systemd[1]: Starting systemd-udev-trigger.service...
Apr 12 18:28:29.194325 systemd[1]: Finished systemd-udev-trigger.service.
Apr 12 18:28:29.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:29.250473 kernel: hv_vmbus: Vmbus version:5.3 Apr 12 18:28:29.272475 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 12 18:28:29.272528 kernel: hv_vmbus: registering driver hid_hyperv Apr 12 18:28:29.272538 kernel: hv_vmbus: registering driver hv_storvsc Apr 12 18:28:29.287662 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Apr 12 18:28:29.287716 kernel: hv_vmbus: registering driver hv_netvsc Apr 12 18:28:29.287734 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Apr 12 18:28:29.298457 kernel: scsi host1: storvsc_host_t Apr 12 18:28:29.298528 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 12 18:28:29.308962 kernel: scsi host0: storvsc_host_t Apr 12 18:28:29.315864 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 12 18:28:29.323470 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Apr 12 18:28:29.340782 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 12 18:28:29.341001 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 12 18:28:29.342470 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 12 18:28:29.356296 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 12 18:28:29.356574 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 12 18:28:29.360599 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 12 18:28:29.368365 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 12 18:28:29.368601 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 12 18:28:29.377475 kernel: sda: sda1 sda2 sda3 
sda4 sda6 sda7 sda9 Apr 12 18:28:29.383472 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 12 18:28:29.394482 kernel: hv_netvsc 002248bb-0422-0022-48bb-0422002248bb eth0: VF slot 1 added Apr 12 18:28:29.402486 kernel: hv_vmbus: registering driver hv_pci Apr 12 18:28:29.411477 kernel: hv_pci 9cca04dc-85f6-455a-9c6a-ff9e13f7e17e: PCI VMBus probing: Using version 0x10004 Apr 12 18:28:29.426306 kernel: hv_pci 9cca04dc-85f6-455a-9c6a-ff9e13f7e17e: PCI host bridge to bus 85f6:00 Apr 12 18:28:29.426474 kernel: pci_bus 85f6:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Apr 12 18:28:29.426601 kernel: pci_bus 85f6:00: No busn resource found for root bus, will use [bus 00-ff] Apr 12 18:28:29.440541 kernel: pci 85f6:00:02.0: [15b3:1018] type 00 class 0x020000 Apr 12 18:28:29.453352 kernel: pci 85f6:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 12 18:28:29.474567 kernel: pci 85f6:00:02.0: enabling Extended Tags Apr 12 18:28:29.492565 kernel: pci 85f6:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 85f6:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Apr 12 18:28:29.504163 kernel: pci_bus 85f6:00: busn_res: [bus 00-ff] end is updated to 00 Apr 12 18:28:29.504347 kernel: pci 85f6:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 12 18:28:29.544472 kernel: mlx5_core 85f6:00:02.0: firmware version: 16.30.1284 Apr 12 18:28:29.700470 kernel: mlx5_core 85f6:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Apr 12 18:28:29.755546 kernel: hv_netvsc 002248bb-0422-0022-48bb-0422002248bb eth0: VF registering: eth1 Apr 12 18:28:29.760464 kernel: mlx5_core 85f6:00:02.0 eth1: joined to eth0 Apr 12 18:28:29.771473 kernel: mlx5_core 85f6:00:02.0 enP34294s1: renamed from eth1 Apr 12 18:28:29.805363 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
Apr 12 18:28:29.847484 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (502) Apr 12 18:28:29.862567 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 18:28:30.100192 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Apr 12 18:28:30.117148 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Apr 12 18:28:30.122736 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Apr 12 18:28:30.134287 systemd[1]: Starting disk-uuid.service... Apr 12 18:28:30.162483 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 18:28:30.169469 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 18:28:31.177042 disk-uuid[563]: The operation has completed successfully. Apr 12 18:28:31.182796 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 18:28:31.242171 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 12 18:28:31.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:31.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:31.242288 systemd[1]: Finished disk-uuid.service. Apr 12 18:28:31.257335 systemd[1]: Starting verity-setup.service... Apr 12 18:28:31.303540 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 12 18:28:31.681764 systemd[1]: Found device dev-mapper-usr.device. Apr 12 18:28:31.687228 systemd[1]: Mounting sysusr-usr.mount... Apr 12 18:28:31.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:31.694611 systemd[1]: Finished verity-setup.service. 
Apr 12 18:28:31.747468 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Apr 12 18:28:31.747852 systemd[1]: Mounted sysusr-usr.mount. Apr 12 18:28:31.751593 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Apr 12 18:28:31.752339 systemd[1]: Starting ignition-setup.service... Apr 12 18:28:31.767374 systemd[1]: Starting parse-ip-for-networkd.service... Apr 12 18:28:31.793019 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 12 18:28:31.793093 kernel: BTRFS info (device sda6): using free space tree Apr 12 18:28:31.793120 kernel: BTRFS info (device sda6): has skinny extents Apr 12 18:28:31.850265 systemd[1]: Finished parse-ip-for-networkd.service. Apr 12 18:28:31.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:31.858000 audit: BPF prog-id=9 op=LOAD Apr 12 18:28:31.859283 systemd[1]: Starting systemd-networkd.service... Apr 12 18:28:31.884345 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 12 18:28:31.891843 systemd-networkd[804]: lo: Link UP Apr 12 18:28:31.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:31.891854 systemd-networkd[804]: lo: Gained carrier Apr 12 18:28:31.892664 systemd-networkd[804]: Enumeration completed Apr 12 18:28:31.895165 systemd[1]: Started systemd-networkd.service. Apr 12 18:28:31.895802 systemd-networkd[804]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 18:28:31.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:28:31.899721 systemd[1]: Reached target network.target. Apr 12 18:28:31.911372 systemd[1]: Starting iscsiuio.service... Apr 12 18:28:31.915670 systemd[1]: Started iscsiuio.service. Apr 12 18:28:31.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:31.948629 iscsid[813]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:28:31.948629 iscsid[813]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Apr 12 18:28:31.948629 iscsid[813]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Apr 12 18:28:31.948629 iscsid[813]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Apr 12 18:28:31.948629 iscsid[813]: If using hardware iscsi like qla4xxx this message can be ignored. Apr 12 18:28:31.948629 iscsid[813]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:28:31.948629 iscsid[813]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Apr 12 18:28:32.077040 kernel: kauditd_printk_skb: 15 callbacks suppressed Apr 12 18:28:32.077069 kernel: audit: type=1130 audit(1712946511.983:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:31.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:28:31.923901 systemd[1]: Starting iscsid.service... Apr 12 18:28:31.940109 systemd[1]: Started iscsid.service. Apr 12 18:28:32.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:31.956251 systemd[1]: Starting dracut-initqueue.service... Apr 12 18:28:32.109280 kernel: audit: type=1130 audit(1712946512.084:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:31.979026 systemd[1]: Finished dracut-initqueue.service. Apr 12 18:28:31.989057 systemd[1]: Reached target remote-fs-pre.target. Apr 12 18:28:32.016394 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 18:28:32.036998 systemd[1]: Reached target remote-fs.target. Apr 12 18:28:32.056791 systemd[1]: Starting dracut-pre-mount.service... Apr 12 18:28:32.078193 systemd[1]: Finished dracut-pre-mount.service. Apr 12 18:28:32.150937 systemd[1]: Finished ignition-setup.service. Apr 12 18:28:32.179633 kernel: mlx5_core 85f6:00:02.0 enP34294s1: Link up Apr 12 18:28:32.179805 kernel: audit: type=1130 audit(1712946512.158:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:32.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:32.180043 systemd[1]: Starting ignition-fetch-offline.service... 
Apr 12 18:28:32.197475 kernel: hv_netvsc 002248bb-0422-0022-48bb-0422002248bb eth0: Data path switched to VF: enP34294s1 Apr 12 18:28:32.197643 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Apr 12 18:28:32.203786 systemd-networkd[804]: enP34294s1: Link UP Apr 12 18:28:32.203866 systemd-networkd[804]: eth0: Link UP Apr 12 18:28:32.204003 systemd-networkd[804]: eth0: Gained carrier Apr 12 18:28:32.211509 systemd-networkd[804]: enP34294s1: Gained carrier Apr 12 18:28:32.225525 systemd-networkd[804]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16 Apr 12 18:28:34.214700 systemd-networkd[804]: eth0: Gained IPv6LL Apr 12 18:28:35.296301 ignition[829]: Ignition 2.14.0 Apr 12 18:28:35.299499 ignition[829]: Stage: fetch-offline Apr 12 18:28:35.299585 ignition[829]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:28:35.299616 ignition[829]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:28:35.413192 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:28:35.413343 ignition[829]: parsed url from cmdline: "" Apr 12 18:28:35.419579 systemd[1]: Finished ignition-fetch-offline.service. Apr 12 18:28:35.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:35.413347 ignition[829]: no config URL provided Apr 12 18:28:35.453709 kernel: audit: type=1130 audit(1712946515.424:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:35.425982 systemd[1]: Starting ignition-fetch.service... 
Apr 12 18:28:35.413352 ignition[829]: reading system config file "/usr/lib/ignition/user.ign" Apr 12 18:28:35.413360 ignition[829]: no config at "/usr/lib/ignition/user.ign" Apr 12 18:28:35.413366 ignition[829]: failed to fetch config: resource requires networking Apr 12 18:28:35.413479 ignition[829]: Ignition finished successfully Apr 12 18:28:35.452966 ignition[835]: Ignition 2.14.0 Apr 12 18:28:35.452973 ignition[835]: Stage: fetch Apr 12 18:28:35.453085 ignition[835]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:28:35.453104 ignition[835]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:28:35.465626 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:28:35.465767 ignition[835]: parsed url from cmdline: "" Apr 12 18:28:35.465771 ignition[835]: no config URL provided Apr 12 18:28:35.465776 ignition[835]: reading system config file "/usr/lib/ignition/user.ign" Apr 12 18:28:35.465785 ignition[835]: no config at "/usr/lib/ignition/user.ign" Apr 12 18:28:35.465815 ignition[835]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 12 18:28:35.494808 ignition[835]: GET result: OK Apr 12 18:28:35.494932 ignition[835]: config has been read from IMDS userdata Apr 12 18:28:35.494991 ignition[835]: parsing config with SHA512: 61bc8778549acb692796cde1af79661c9c06c3c68b27f06865a9d33c904b2073d0dc981ad96788a90b2c5fcaaeddc90abfb12223e1efcfc3162a4de4d8237bad Apr 12 18:28:35.549172 unknown[835]: fetched base config from "system" Apr 12 18:28:35.549189 unknown[835]: fetched base config from "system" Apr 12 18:28:35.549845 ignition[835]: fetch: fetch complete Apr 12 18:28:35.549194 unknown[835]: fetched user config from "azure" Apr 12 18:28:35.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel 
msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:35.549850 ignition[835]: fetch: fetch passed Apr 12 18:28:35.592256 kernel: audit: type=1130 audit(1712946515.564:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:35.557625 systemd[1]: Finished ignition-fetch.service. Apr 12 18:28:35.549892 ignition[835]: Ignition finished successfully Apr 12 18:28:35.585276 systemd[1]: Starting ignition-kargs.service... Apr 12 18:28:35.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:35.596003 ignition[842]: Ignition 2.14.0 Apr 12 18:28:35.626810 kernel: audit: type=1130 audit(1712946515.608:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:35.605165 systemd[1]: Finished ignition-kargs.service. Apr 12 18:28:35.596010 ignition[842]: Stage: kargs Apr 12 18:28:35.596122 ignition[842]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:28:35.596142 ignition[842]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:28:35.598519 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:28:35.601407 ignition[842]: kargs: kargs passed Apr 12 18:28:35.642642 systemd[1]: Starting ignition-disks.service... 
Apr 12 18:28:35.601463 ignition[842]: Ignition finished successfully Apr 12 18:28:35.655302 ignition[848]: Ignition 2.14.0 Apr 12 18:28:35.655308 ignition[848]: Stage: disks Apr 12 18:28:35.655420 ignition[848]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:28:35.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:35.666725 systemd[1]: Finished ignition-disks.service. Apr 12 18:28:35.696560 kernel: audit: type=1130 audit(1712946515.671:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:35.655443 ignition[848]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:28:35.693054 systemd[1]: Reached target initrd-root-device.target. Apr 12 18:28:35.658958 ignition[848]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:28:35.701201 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:28:35.665011 ignition[848]: disks: disks passed Apr 12 18:28:35.709402 systemd[1]: Reached target local-fs.target. Apr 12 18:28:35.665069 ignition[848]: Ignition finished successfully Apr 12 18:28:35.719589 systemd[1]: Reached target sysinit.target. Apr 12 18:28:35.729566 systemd[1]: Reached target basic.target. Apr 12 18:28:35.741110 systemd[1]: Starting systemd-fsck-root.service... Apr 12 18:28:35.814828 systemd-fsck[856]: ROOT: clean, 612/7326000 files, 481074/7359488 blocks Apr 12 18:28:35.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:28:35.828049 systemd[1]: Finished systemd-fsck-root.service. Apr 12 18:28:35.856074 kernel: audit: type=1130 audit(1712946515.832:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:35.838332 systemd[1]: Mounting sysroot.mount... Apr 12 18:28:35.871331 systemd[1]: Mounted sysroot.mount. Apr 12 18:28:35.878114 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Apr 12 18:28:35.875053 systemd[1]: Reached target initrd-root-fs.target. Apr 12 18:28:35.916386 systemd[1]: Mounting sysroot-usr.mount... Apr 12 18:28:35.921162 systemd[1]: Starting flatcar-metadata-hostname.service... Apr 12 18:28:35.932650 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 12 18:28:35.932694 systemd[1]: Reached target ignition-diskful.target. Apr 12 18:28:35.947422 systemd[1]: Mounted sysroot-usr.mount. Apr 12 18:28:36.031183 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 18:28:36.036558 systemd[1]: Starting initrd-setup-root.service... Apr 12 18:28:36.058478 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (867) Apr 12 18:28:36.069848 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 12 18:28:36.069891 kernel: BTRFS info (device sda6): using free space tree Apr 12 18:28:36.069911 initrd-setup-root[872]: cut: /sysroot/etc/passwd: No such file or directory Apr 12 18:28:36.080413 kernel: BTRFS info (device sda6): has skinny extents Apr 12 18:28:36.085723 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Apr 12 18:28:36.101219 initrd-setup-root[898]: cut: /sysroot/etc/group: No such file or directory Apr 12 18:28:36.135475 initrd-setup-root[906]: cut: /sysroot/etc/shadow: No such file or directory Apr 12 18:28:36.144964 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory Apr 12 18:28:37.197657 systemd[1]: Finished initrd-setup-root.service. Apr 12 18:28:37.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:37.221670 systemd[1]: Starting ignition-mount.service... Apr 12 18:28:37.230896 kernel: audit: type=1130 audit(1712946517.201:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:37.230794 systemd[1]: Starting sysroot-boot.service... Apr 12 18:28:37.235887 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Apr 12 18:28:37.236136 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Apr 12 18:28:37.261357 ignition[935]: INFO : Ignition 2.14.0 Apr 12 18:28:37.261357 ignition[935]: INFO : Stage: mount Apr 12 18:28:37.271588 ignition[935]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:28:37.271588 ignition[935]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:28:37.271588 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:28:37.339126 kernel: audit: type=1130 audit(1712946517.283:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:28:37.339150 kernel: audit: type=1130 audit(1712946517.319:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:37.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:37.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:37.339274 ignition[935]: INFO : mount: mount passed Apr 12 18:28:37.339274 ignition[935]: INFO : Ignition finished successfully Apr 12 18:28:37.279032 systemd[1]: Finished ignition-mount.service. Apr 12 18:28:37.314757 systemd[1]: Finished sysroot-boot.service. Apr 12 18:28:37.754816 coreos-metadata[866]: Apr 12 18:28:37.754 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 12 18:28:37.764051 coreos-metadata[866]: Apr 12 18:28:37.764 INFO Fetch successful Apr 12 18:28:37.797914 coreos-metadata[866]: Apr 12 18:28:37.797 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 12 18:28:37.822870 coreos-metadata[866]: Apr 12 18:28:37.822 INFO Fetch successful Apr 12 18:28:37.847711 coreos-metadata[866]: Apr 12 18:28:37.847 INFO wrote hostname ci-3510.3.3-a-58e6b5da18 to /sysroot/etc/hostname Apr 12 18:28:37.879569 kernel: audit: type=1130 audit(1712946517.861:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:28:37.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:37.856659 systemd[1]: Finished flatcar-metadata-hostname.service. Apr 12 18:28:37.862716 systemd[1]: Starting ignition-files.service... Apr 12 18:28:37.886251 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 18:28:37.909667 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (945) Apr 12 18:28:37.920600 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 12 18:28:37.920618 kernel: BTRFS info (device sda6): using free space tree Apr 12 18:28:37.920629 kernel: BTRFS info (device sda6): has skinny extents Apr 12 18:28:37.931098 systemd[1]: Mounted sysroot-usr-share-oem.mount. Apr 12 18:28:37.947842 ignition[964]: INFO : Ignition 2.14.0 Apr 12 18:28:37.947842 ignition[964]: INFO : Stage: files Apr 12 18:28:37.955951 ignition[964]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:28:37.955951 ignition[964]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:28:37.977800 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:28:37.977800 ignition[964]: DEBUG : files: compiled without relabeling support, skipping Apr 12 18:28:37.977800 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 12 18:28:37.977800 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 12 18:28:38.075344 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 12 18:28:38.082405 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user 
"core" Apr 12 18:28:38.091525 unknown[964]: wrote ssh authorized keys file for user: core Apr 12 18:28:38.096658 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 12 18:28:38.096658 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Apr 12 18:28:38.096658 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Apr 12 18:28:38.443212 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 12 18:28:38.591911 ignition[964]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Apr 12 18:28:38.607093 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Apr 12 18:28:38.607093 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 12 18:28:38.607093 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Apr 12 18:28:38.796891 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 12 18:28:39.015734 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 12 18:28:39.026090 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Apr 12 18:28:39.026090 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): 
GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Apr 12 18:28:39.296089 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Apr 12 18:28:39.566149 ignition[964]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Apr 12 18:28:39.581682 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Apr 12 18:28:39.581682 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Apr 12 18:28:39.581682 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubectl: attempt #1 Apr 12 18:28:39.783116 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Apr 12 18:28:40.103200 ignition[964]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 6a5c9c02a29126949f096415bb1761a0c0ad44168e2ab3d0409982701da58f96223bec354828ddf958e945ef1ce63c0ad41e77cbcbcce0756163e71b4fbae432 Apr 12 18:28:40.118411 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Apr 12 18:28:40.118411 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Apr 12 18:28:40.118411 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubeadm: attempt #1 Apr 12 18:28:40.199971 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Apr 12 18:28:40.482099 ignition[964]: DEBUG : files: createFilesystemsFiles: 
createFiles: op(7): file matches expected sum of: 5a08b81f9cc82d3cce21130856ca63b8dafca9149d9775dd25b376eb0f18209aa0e4a47c0a6d7e6fb1316aacd5d59dec770f26c09120c866949d70bc415518b3 Apr 12 18:28:40.497859 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Apr 12 18:28:40.497859 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Apr 12 18:28:40.497859 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubelet: attempt #1 Apr 12 18:28:40.535475 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Apr 12 18:28:41.196309 ignition[964]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 5a898ef543a6482895101ea58e33602e3c0a7682d322aaf08ac3dc8a5a3c8da8f09600d577024549288f8cebb1a86f9c79927796b69a3d8fe989ca8f12b147d6 Apr 12 18:28:41.218583 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Apr 12 18:28:41.218583 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Apr 12 18:28:41.237301 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Apr 12 18:28:41.237301 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 12 18:28:41.237301 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Apr 12 18:28:41.587444 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 12 18:28:41.656311 ignition[964]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 12 18:28:41.666594 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Apr 12 18:28:41.666594 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Apr 12 18:28:41.666594 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 12 18:28:41.666594 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 12 18:28:41.666594 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 12 18:28:41.666594 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 12 18:28:41.666594 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 12 18:28:41.666594 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 12 18:28:41.816113 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 12 18:28:41.825539 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 12 18:28:41.825539 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Apr 12 18:28:41.825539 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Apr 12 18:28:41.871591 kernel: BTRFS 
info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (969) Apr 12 18:28:41.871624 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3222225663" Apr 12 18:28:41.871624 ignition[964]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3222225663": device or resource busy Apr 12 18:28:41.871624 ignition[964]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3222225663", trying btrfs: device or resource busy Apr 12 18:28:41.871624 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3222225663" Apr 12 18:28:41.871624 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3222225663" Apr 12 18:28:41.871624 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem3222225663" Apr 12 18:28:41.871624 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem3222225663" Apr 12 18:28:41.871624 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Apr 12 18:28:41.871624 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Apr 12 18:28:41.871624 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Apr 12 18:28:42.059917 kernel: audit: type=1130 audit(1712946521.919:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.059942 kernel: audit: type=1130 audit(1712946522.003:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.059952 kernel: audit: type=1131 audit(1712946522.032:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:41.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:28:42.060087 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4167452211" Apr 12 18:28:42.060087 ignition[964]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4167452211": device or resource busy Apr 12 18:28:42.060087 ignition[964]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4167452211", trying btrfs: device or resource busy Apr 12 18:28:42.060087 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4167452211" Apr 12 18:28:42.060087 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4167452211" Apr 12 18:28:42.060087 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem4167452211" Apr 12 18:28:42.060087 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem4167452211" Apr 12 18:28:42.060087 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Apr 12 18:28:42.060087 ignition[964]: INFO : files: op(18): [started] processing unit "waagent.service" Apr 12 18:28:42.060087 ignition[964]: INFO : files: op(18): [finished] processing unit "waagent.service" Apr 12 18:28:42.060087 ignition[964]: INFO : files: op(19): [started] processing unit "nvidia.service" Apr 12 18:28:42.060087 ignition[964]: INFO : files: op(19): [finished] processing unit "nvidia.service" Apr 12 18:28:42.060087 ignition[964]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service" Apr 12 18:28:42.060087 ignition[964]: INFO : files: 
op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Apr 12 18:28:42.060087 ignition[964]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Apr 12 18:28:42.060087 ignition[964]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service" Apr 12 18:28:42.060087 ignition[964]: INFO : files: op(1c): [started] processing unit "prepare-critools.service" Apr 12 18:28:41.896059 systemd[1]: Finished ignition-files.service. Apr 12 18:28:42.212324 ignition[964]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Apr 12 18:28:42.212324 ignition[964]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Apr 12 18:28:42.212324 ignition[964]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service" Apr 12 18:28:42.212324 ignition[964]: INFO : files: op(1e): [started] processing unit "prepare-helm.service" Apr 12 18:28:42.212324 ignition[964]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 12 18:28:42.212324 ignition[964]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 12 18:28:42.212324 ignition[964]: INFO : files: op(1e): [finished] processing unit "prepare-helm.service" Apr 12 18:28:42.212324 ignition[964]: INFO : files: op(20): [started] setting preset to enabled for "prepare-critools.service" Apr 12 18:28:42.212324 ignition[964]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-critools.service" Apr 12 18:28:42.212324 ignition[964]: INFO : files: op(21): [started] setting preset to enabled for 
"prepare-helm.service" Apr 12 18:28:42.212324 ignition[964]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-helm.service" Apr 12 18:28:42.212324 ignition[964]: INFO : files: op(22): [started] setting preset to enabled for "waagent.service" Apr 12 18:28:42.212324 ignition[964]: INFO : files: op(22): [finished] setting preset to enabled for "waagent.service" Apr 12 18:28:42.212324 ignition[964]: INFO : files: op(23): [started] setting preset to enabled for "nvidia.service" Apr 12 18:28:42.212324 ignition[964]: INFO : files: op(23): [finished] setting preset to enabled for "nvidia.service" Apr 12 18:28:42.212324 ignition[964]: INFO : files: op(24): [started] setting preset to enabled for "prepare-cni-plugins.service" Apr 12 18:28:42.212324 ignition[964]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-cni-plugins.service" Apr 12 18:28:42.212324 ignition[964]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 12 18:28:42.212324 ignition[964]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 12 18:28:42.212324 ignition[964]: INFO : files: files passed Apr 12 18:28:42.212324 ignition[964]: INFO : Ignition finished successfully Apr 12 18:28:42.466823 kernel: audit: type=1130 audit(1712946522.249:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.466857 kernel: audit: type=1130 audit(1712946522.337:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:28:42.466870 kernel: audit: type=1131 audit(1712946522.358:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.466880 kernel: audit: type=1130 audit(1712946522.440:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:41.922510 systemd[1]: Starting initrd-setup-root-after-ignition.service... Apr 12 18:28:42.472439 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 12 18:28:41.966249 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Apr 12 18:28:41.967175 systemd[1]: Starting ignition-quench.service... Apr 12 18:28:41.989171 systemd[1]: ignition-quench.service: Deactivated successfully. 
Apr 12 18:28:41.989294 systemd[1]: Finished ignition-quench.service. Apr 12 18:28:42.241416 systemd[1]: Finished initrd-setup-root-after-ignition.service. Apr 12 18:28:42.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.274639 systemd[1]: Reached target ignition-complete.target. Apr 12 18:28:42.293209 systemd[1]: Starting initrd-parse-etc.service... Apr 12 18:28:42.329262 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 12 18:28:42.567803 kernel: audit: type=1131 audit(1712946522.525:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.329381 systemd[1]: Finished initrd-parse-etc.service. Apr 12 18:28:42.359613 systemd[1]: Reached target initrd-fs.target. Apr 12 18:28:42.386475 systemd[1]: Reached target initrd.target. Apr 12 18:28:42.397393 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Apr 12 18:28:42.405405 systemd[1]: Starting dracut-pre-pivot.service... Apr 12 18:28:42.428601 systemd[1]: Finished dracut-pre-pivot.service. Apr 12 18:28:42.466275 systemd[1]: Starting initrd-cleanup.service... Apr 12 18:28:42.484744 systemd[1]: Stopped target nss-lookup.target. Apr 12 18:28:42.499760 systemd[1]: Stopped target remote-cryptsetup.target. Apr 12 18:28:42.508581 systemd[1]: Stopped target timers.target. Apr 12 18:28:42.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.517471 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 12 18:28:42.517539 systemd[1]: Stopped dracut-pre-pivot.service. 
Apr 12 18:28:42.547370 systemd[1]: Stopped target initrd.target. Apr 12 18:28:42.702892 kernel: audit: type=1131 audit(1712946522.648:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.702916 kernel: audit: type=1131 audit(1712946522.685:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.555331 systemd[1]: Stopped target basic.target. Apr 12 18:28:42.563046 systemd[1]: Stopped target ignition-complete.target. Apr 12 18:28:42.572477 systemd[1]: Stopped target ignition-diskful.target. Apr 12 18:28:42.741792 kernel: audit: type=1131 audit(1712946522.712:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.581089 systemd[1]: Stopped target initrd-root-device.target. Apr 12 18:28:42.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.590465 systemd[1]: Stopped target remote-fs.target. 
Apr 12 18:28:42.784204 kernel: audit: type=1131 audit(1712946522.720:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.784229 kernel: audit: type=1131 audit(1712946522.746:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.784284 ignition[1002]: INFO : Ignition 2.14.0 Apr 12 18:28:42.784284 ignition[1002]: INFO : Stage: umount Apr 12 18:28:42.784284 ignition[1002]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:28:42.784284 ignition[1002]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:28:42.784284 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:28:42.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.598587 systemd[1]: Stopped target remote-fs-pre.target. 
Apr 12 18:28:42.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.854688 iscsid[813]: iscsid shutting down. Apr 12 18:28:42.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.861862 ignition[1002]: INFO : umount: umount passed Apr 12 18:28:42.861862 ignition[1002]: INFO : Ignition finished successfully Apr 12 18:28:42.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.609470 systemd[1]: Stopped target sysinit.target. Apr 12 18:28:42.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.617231 systemd[1]: Stopped target local-fs.target. Apr 12 18:28:42.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.625272 systemd[1]: Stopped target local-fs-pre.target. 
Apr 12 18:28:42.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.633098 systemd[1]: Stopped target swap.target. Apr 12 18:28:42.640810 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 12 18:28:42.640878 systemd[1]: Stopped dracut-pre-mount.service. Apr 12 18:28:42.668999 systemd[1]: Stopped target cryptsetup.target. Apr 12 18:28:42.677713 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 12 18:28:42.677779 systemd[1]: Stopped dracut-initqueue.service. Apr 12 18:28:42.704582 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 12 18:28:42.704632 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Apr 12 18:28:42.712966 systemd[1]: ignition-files.service: Deactivated successfully. Apr 12 18:28:42.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.713005 systemd[1]: Stopped ignition-files.service. Apr 12 18:28:42.720945 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 12 18:28:42.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.720986 systemd[1]: Stopped flatcar-metadata-hostname.service. Apr 12 18:28:42.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.750953 systemd[1]: Stopping ignition-mount.service... Apr 12 18:28:42.789386 systemd[1]: Stopping iscsid.service... 
Apr 12 18:28:42.797622 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 12 18:28:43.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.797694 systemd[1]: Stopped kmod-static-nodes.service. Apr 12 18:28:42.806431 systemd[1]: Stopping sysroot-boot.service... Apr 12 18:28:42.815406 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 12 18:28:42.815498 systemd[1]: Stopped systemd-udev-trigger.service. Apr 12 18:28:43.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.831782 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 12 18:28:43.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:43.062000 audit: BPF prog-id=6 op=UNLOAD Apr 12 18:28:42.831831 systemd[1]: Stopped dracut-pre-trigger.service. Apr 12 18:28:42.836775 systemd[1]: iscsid.service: Deactivated successfully. Apr 12 18:28:43.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.836887 systemd[1]: Stopped iscsid.service. Apr 12 18:28:43.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.850759 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Apr 12 18:28:43.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.850851 systemd[1]: Finished initrd-cleanup.service. Apr 12 18:28:42.858888 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 12 18:28:42.858970 systemd[1]: Stopped ignition-mount.service. Apr 12 18:28:43.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.866330 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 12 18:28:42.866381 systemd[1]: Stopped ignition-disks.service. Apr 12 18:28:42.874184 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 12 18:28:43.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.874228 systemd[1]: Stopped ignition-kargs.service. Apr 12 18:28:43.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.883319 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 12 18:28:42.883363 systemd[1]: Stopped ignition-fetch.service. Apr 12 18:28:43.183292 kernel: hv_netvsc 002248bb-0422-0022-48bb-0422002248bb eth0: Data path switched from VF: enP34294s1 Apr 12 18:28:43.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.891157 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Apr 12 18:28:42.891202 systemd[1]: Stopped ignition-fetch-offline.service. Apr 12 18:28:43.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.899163 systemd[1]: Stopped target paths.target. Apr 12 18:28:43.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:43.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.907282 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 12 18:28:42.919252 systemd[1]: Stopped systemd-ask-password-console.path. Apr 12 18:28:42.924055 systemd[1]: Stopped target slices.target. Apr 12 18:28:42.931990 systemd[1]: Stopped target sockets.target. Apr 12 18:28:43.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:42.945490 systemd[1]: iscsid.socket: Deactivated successfully. Apr 12 18:28:42.945541 systemd[1]: Closed iscsid.socket. Apr 12 18:28:42.954691 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 12 18:28:42.954736 systemd[1]: Stopped ignition-setup.service. Apr 12 18:28:42.964368 systemd[1]: Stopping iscsiuio.service... Apr 12 18:28:42.974222 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 12 18:28:42.974772 systemd[1]: iscsiuio.service: Deactivated successfully. Apr 12 18:28:42.974878 systemd[1]: Stopped iscsiuio.service. Apr 12 18:28:42.981285 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Apr 12 18:28:42.981370 systemd[1]: Stopped sysroot-boot.service. Apr 12 18:28:42.992199 systemd[1]: Stopped target network.target. Apr 12 18:28:43.001754 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 12 18:28:43.292987 systemd-journald[236]: Received SIGTERM from PID 1 (n/a). Apr 12 18:28:43.001787 systemd[1]: Closed iscsiuio.socket. Apr 12 18:28:43.008909 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 12 18:28:43.008953 systemd[1]: Stopped initrd-setup-root.service. Apr 12 18:28:43.016583 systemd[1]: Stopping systemd-networkd.service... Apr 12 18:28:43.025511 systemd[1]: Stopping systemd-resolved.service... Apr 12 18:28:43.036637 systemd-networkd[804]: eth0: DHCPv6 lease lost Apr 12 18:28:43.292000 audit: BPF prog-id=9 op=UNLOAD Apr 12 18:28:43.042303 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 12 18:28:43.042415 systemd[1]: Stopped systemd-networkd.service. Apr 12 18:28:43.048583 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 12 18:28:43.048684 systemd[1]: Stopped systemd-resolved.service. Apr 12 18:28:43.057408 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 12 18:28:43.057460 systemd[1]: Closed systemd-networkd.socket. Apr 12 18:28:43.067396 systemd[1]: Stopping network-cleanup.service... Apr 12 18:28:43.074664 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 12 18:28:43.074724 systemd[1]: Stopped parse-ip-for-networkd.service. Apr 12 18:28:43.079355 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:28:43.079397 systemd[1]: Stopped systemd-sysctl.service. Apr 12 18:28:43.090606 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 12 18:28:43.090705 systemd[1]: Stopped systemd-modules-load.service. Apr 12 18:28:43.096234 systemd[1]: Stopping systemd-udevd.service... Apr 12 18:28:43.105902 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Apr 12 18:28:43.109462 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 12 18:28:43.109641 systemd[1]: Stopped systemd-udevd.service. Apr 12 18:28:43.118267 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 12 18:28:43.118311 systemd[1]: Closed systemd-udevd-control.socket. Apr 12 18:28:43.125960 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 12 18:28:43.125996 systemd[1]: Closed systemd-udevd-kernel.socket. Apr 12 18:28:43.135038 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 12 18:28:43.135091 systemd[1]: Stopped dracut-pre-udev.service. Apr 12 18:28:43.143853 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 12 18:28:43.143892 systemd[1]: Stopped dracut-cmdline.service. Apr 12 18:28:43.152601 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 12 18:28:43.152643 systemd[1]: Stopped dracut-cmdline-ask.service. Apr 12 18:28:43.177492 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Apr 12 18:28:43.188158 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 12 18:28:43.188225 systemd[1]: Stopped systemd-vconsole-setup.service. Apr 12 18:28:43.198341 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 12 18:28:43.198465 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Apr 12 18:28:43.221083 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 12 18:28:43.221203 systemd[1]: Stopped network-cleanup.service. Apr 12 18:28:43.228302 systemd[1]: Reached target initrd-switch-root.target. Apr 12 18:28:43.237085 systemd[1]: Starting initrd-switch-root.service... Apr 12 18:28:43.252425 systemd[1]: Switching root. Apr 12 18:28:43.294177 systemd-journald[236]: Journal stopped Apr 12 18:28:57.660103 kernel: SELinux: Class mctp_socket not defined in policy. Apr 12 18:28:57.660123 kernel: SELinux: Class anon_inode not defined in policy. 
Apr 12 18:28:57.660134 kernel: SELinux: the above unknown classes and permissions will be allowed
Apr 12 18:28:57.660143 kernel: SELinux: policy capability network_peer_controls=1
Apr 12 18:28:57.660151 kernel: SELinux: policy capability open_perms=1
Apr 12 18:28:57.660159 kernel: SELinux: policy capability extended_socket_class=1
Apr 12 18:28:57.660168 kernel: SELinux: policy capability always_check_network=0
Apr 12 18:28:57.660177 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 12 18:28:57.660185 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 12 18:28:57.660193 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 12 18:28:57.660202 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 12 18:28:57.660211 systemd[1]: Successfully loaded SELinux policy in 311.401ms.
Apr 12 18:28:57.660221 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.349ms.
Apr 12 18:28:57.660232 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Apr 12 18:28:57.660243 systemd[1]: Detected virtualization microsoft.
Apr 12 18:28:57.660252 systemd[1]: Detected architecture arm64.
Apr 12 18:28:57.660261 systemd[1]: Detected first boot.
Apr 12 18:28:57.660270 systemd[1]: Hostname set to .
Apr 12 18:28:57.660279 systemd[1]: Initializing machine ID from random generator.
Apr 12 18:28:57.660288 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Apr 12 18:28:57.660296 kernel: kauditd_printk_skb: 37 callbacks suppressed
Apr 12 18:28:57.660306 kernel: audit: type=1400 audit(1712946528.706:88): avc: denied { associate } for pid=1035 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Apr 12 18:28:57.660317 kernel: audit: type=1300 audit(1712946528.706:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001053bc a1=4000028750 a2=4000026c40 a3=32 items=0 ppid=1018 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:28:57.660327 kernel: audit: type=1327 audit(1712946528.706:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Apr 12 18:28:57.660337 kernel: audit: type=1400 audit(1712946528.719:89): avc: denied { associate } for pid=1035 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Apr 12 18:28:57.660347 kernel: audit: type=1300 audit(1712946528.719:89): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000105499 a2=1ed a3=0 items=2 ppid=1018 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:28:57.660356 kernel: audit: type=1307 audit(1712946528.719:89): cwd="/"
Apr 12 18:28:57.660367 kernel: audit: type=1302 audit(1712946528.719:89): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:28:57.660376 kernel: audit: type=1302 audit(1712946528.719:89): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:28:57.660386 kernel: audit: type=1327 audit(1712946528.719:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Apr 12 18:28:57.660395 systemd[1]: Populated /etc with preset unit settings.
Apr 12 18:28:57.660404 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Apr 12 18:28:57.660413 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Apr 12 18:28:57.660423 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 12 18:28:57.660434 kernel: audit: type=1334 audit(1712946536.834:90): prog-id=12 op=LOAD
Apr 12 18:28:57.660443 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 12 18:28:57.660462 kernel: audit: type=1334 audit(1712946536.834:91): prog-id=3 op=UNLOAD
Apr 12 18:28:57.660471 systemd[1]: Stopped initrd-switch-root.service.
Apr 12 18:28:57.660481 kernel: audit: type=1334 audit(1712946536.834:92): prog-id=13 op=LOAD
Apr 12 18:28:57.660489 kernel: audit: type=1334 audit(1712946536.834:93): prog-id=14 op=LOAD
Apr 12 18:28:57.660501 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 12 18:28:57.660511 kernel: audit: type=1334 audit(1712946536.834:94): prog-id=4 op=UNLOAD
Apr 12 18:28:57.660521 systemd[1]: Created slice system-addon\x2dconfig.slice.
Apr 12 18:28:57.660530 kernel: audit: type=1334 audit(1712946536.834:95): prog-id=5 op=UNLOAD
Apr 12 18:28:57.660539 kernel: audit: type=1131 audit(1712946536.837:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.660547 kernel: audit: type=1334 audit(1712946536.865:97): prog-id=12 op=UNLOAD
Apr 12 18:28:57.660557 kernel: audit: type=1130 audit(1712946536.865:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.660566 systemd[1]: Created slice system-addon\x2drun.slice.
Apr 12 18:28:57.660576 kernel: audit: type=1131 audit(1712946536.865:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.660587 systemd[1]: Created slice system-getty.slice.
Apr 12 18:28:57.660597 systemd[1]: Created slice system-modprobe.slice.
Apr 12 18:28:57.660606 systemd[1]: Created slice system-serial\x2dgetty.slice.
Apr 12 18:28:57.660616 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Apr 12 18:28:57.660625 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Apr 12 18:28:57.660634 systemd[1]: Created slice user.slice.
Apr 12 18:28:57.660643 systemd[1]: Started systemd-ask-password-console.path.
Apr 12 18:28:57.660652 systemd[1]: Started systemd-ask-password-wall.path.
Apr 12 18:28:57.660663 systemd[1]: Set up automount boot.automount.
Apr 12 18:28:57.660672 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Apr 12 18:28:57.660681 systemd[1]: Stopped target initrd-switch-root.target.
Apr 12 18:28:57.660690 systemd[1]: Stopped target initrd-fs.target.
Apr 12 18:28:57.660699 systemd[1]: Stopped target initrd-root-fs.target.
Apr 12 18:28:57.660708 systemd[1]: Reached target integritysetup.target.
Apr 12 18:28:57.660718 systemd[1]: Reached target remote-cryptsetup.target.
Apr 12 18:28:57.660727 systemd[1]: Reached target remote-fs.target.
Apr 12 18:28:57.660737 systemd[1]: Reached target slices.target.
Apr 12 18:28:57.660747 systemd[1]: Reached target swap.target.
Apr 12 18:28:57.660757 systemd[1]: Reached target torcx.target.
Apr 12 18:28:57.660766 systemd[1]: Reached target veritysetup.target.
Apr 12 18:28:57.660775 systemd[1]: Listening on systemd-coredump.socket.
Apr 12 18:28:57.660784 systemd[1]: Listening on systemd-initctl.socket.
Apr 12 18:28:57.660794 systemd[1]: Listening on systemd-networkd.socket.
Apr 12 18:28:57.660805 systemd[1]: Listening on systemd-udevd-control.socket.
Apr 12 18:28:57.660814 systemd[1]: Listening on systemd-udevd-kernel.socket.
Apr 12 18:28:57.660823 systemd[1]: Listening on systemd-userdbd.socket.
Apr 12 18:28:57.660833 systemd[1]: Mounting dev-hugepages.mount...
Apr 12 18:28:57.660842 systemd[1]: Mounting dev-mqueue.mount...
Apr 12 18:28:57.660851 systemd[1]: Mounting media.mount...
Apr 12 18:28:57.660863 systemd[1]: Mounting sys-kernel-debug.mount...
Apr 12 18:28:57.660873 systemd[1]: Mounting sys-kernel-tracing.mount...
Apr 12 18:28:57.660882 systemd[1]: Mounting tmp.mount...
Apr 12 18:28:57.660892 systemd[1]: Starting flatcar-tmpfiles.service...
Apr 12 18:28:57.660901 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Apr 12 18:28:57.660911 systemd[1]: Starting kmod-static-nodes.service...
Apr 12 18:28:57.660920 systemd[1]: Starting modprobe@configfs.service...
Apr 12 18:28:57.660929 systemd[1]: Starting modprobe@dm_mod.service...
Apr 12 18:28:57.660938 systemd[1]: Starting modprobe@drm.service...
Apr 12 18:28:57.660949 systemd[1]: Starting modprobe@efi_pstore.service...
Apr 12 18:28:57.660959 systemd[1]: Starting modprobe@fuse.service...
Apr 12 18:28:57.660968 systemd[1]: Starting modprobe@loop.service...
Apr 12 18:28:57.660978 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 12 18:28:57.660988 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 12 18:28:57.660997 systemd[1]: Stopped systemd-fsck-root.service.
Apr 12 18:28:57.661007 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 12 18:28:57.661016 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 12 18:28:57.661025 systemd[1]: Stopped systemd-journald.service.
Apr 12 18:28:57.661036 kernel: loop: module loaded
Apr 12 18:28:57.661044 kernel: fuse: init (API version 7.34)
Apr 12 18:28:57.661053 systemd[1]: systemd-journald.service: Consumed 3.234s CPU time.
Apr 12 18:28:57.661063 systemd[1]: Starting systemd-journald.service...
Apr 12 18:28:57.661073 systemd[1]: Starting systemd-modules-load.service...
Apr 12 18:28:57.661082 systemd[1]: Starting systemd-network-generator.service...
Apr 12 18:28:57.661092 systemd[1]: Starting systemd-remount-fs.service...
Apr 12 18:28:57.661101 systemd[1]: Starting systemd-udev-trigger.service...
Apr 12 18:28:57.661110 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 12 18:28:57.661121 systemd[1]: Stopped verity-setup.service.
Apr 12 18:28:57.661131 systemd[1]: Mounted dev-hugepages.mount.
Apr 12 18:28:57.661140 systemd[1]: Mounted dev-mqueue.mount.
Apr 12 18:28:57.661149 systemd[1]: Mounted media.mount.
Apr 12 18:28:57.661158 systemd[1]: Mounted sys-kernel-debug.mount.
Apr 12 18:28:57.661168 systemd[1]: Mounted sys-kernel-tracing.mount.
Apr 12 18:28:57.661177 systemd[1]: Mounted tmp.mount.
Apr 12 18:28:57.661187 systemd[1]: Finished flatcar-tmpfiles.service.
Apr 12 18:28:57.661196 systemd[1]: Finished kmod-static-nodes.service.
Apr 12 18:28:57.661211 systemd-journald[1142]: Journal started
Apr 12 18:28:57.661248 systemd-journald[1142]: Runtime Journal (/run/log/journal/3973203c0cdf4d3eaad1ce385c132cf1) is 8.0M, max 78.6M, 70.6M free.
Apr 12 18:28:46.327000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 12 18:28:47.248000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Apr 12 18:28:47.248000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Apr 12 18:28:47.248000 audit: BPF prog-id=10 op=LOAD
Apr 12 18:28:47.248000 audit: BPF prog-id=10 op=UNLOAD
Apr 12 18:28:47.248000 audit: BPF prog-id=11 op=LOAD
Apr 12 18:28:47.248000 audit: BPF prog-id=11 op=UNLOAD
Apr 12 18:28:48.706000 audit[1035]: AVC avc: denied { associate } for pid=1035 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Apr 12 18:28:48.706000 audit[1035]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001053bc a1=4000028750 a2=4000026c40 a3=32 items=0 ppid=1018 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:28:48.706000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Apr 12 18:28:48.719000 audit[1035]: AVC avc: denied { associate } for pid=1035 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Apr 12 18:28:48.719000 audit[1035]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000105499 a2=1ed a3=0 items=2 ppid=1018 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:28:48.719000 audit: CWD cwd="/"
Apr 12 18:28:48.719000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:28:48.719000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:28:48.719000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Apr 12 18:28:56.834000 audit: BPF prog-id=12 op=LOAD
Apr 12 18:28:56.834000 audit: BPF prog-id=3 op=UNLOAD
Apr 12 18:28:56.834000 audit: BPF prog-id=13 op=LOAD
Apr 12 18:28:56.834000 audit: BPF prog-id=14 op=LOAD
Apr 12 18:28:56.834000 audit: BPF prog-id=4 op=UNLOAD
Apr 12 18:28:56.834000 audit: BPF prog-id=5 op=UNLOAD
Apr 12 18:28:56.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:56.865000 audit: BPF prog-id=12 op=UNLOAD
Apr 12 18:28:56.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:56.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.508000 audit: BPF prog-id=15 op=LOAD
Apr 12 18:28:57.508000 audit: BPF prog-id=16 op=LOAD
Apr 12 18:28:57.508000 audit: BPF prog-id=17 op=LOAD
Apr 12 18:28:57.508000 audit: BPF prog-id=13 op=UNLOAD
Apr 12 18:28:57.508000 audit: BPF prog-id=14 op=UNLOAD
Apr 12 18:28:57.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.657000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Apr 12 18:28:57.657000 audit[1142]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffc3b1fe50 a2=4000 a3=1 items=0 ppid=1 pid=1142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:28:57.657000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Apr 12 18:28:48.608905 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]"
Apr 12 18:28:57.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:56.832829 systemd[1]: Queued start job for default target multi-user.target.
Apr 12 18:28:48.634908 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Apr 12 18:28:56.835288 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 12 18:28:48.634927 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Apr 12 18:28:56.837711 systemd[1]: systemd-journald.service: Consumed 3.234s CPU time.
Apr 12 18:28:48.634967 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:48Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Apr 12 18:28:48.634977 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:48Z" level=debug msg="skipped missing lower profile" missing profile=oem
Apr 12 18:28:48.635016 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:48Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Apr 12 18:28:48.635029 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:48Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Apr 12 18:28:48.635236 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:48Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Apr 12 18:28:48.635266 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Apr 12 18:28:48.635278 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Apr 12 18:28:48.689330 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:48Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Apr 12 18:28:48.689388 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:48Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Apr 12 18:28:48.689410 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.3: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.3
Apr 12 18:28:48.689425 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Apr 12 18:28:48.689446 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.3: no such file or directory" path=/var/lib/torcx/store/3510.3.3
Apr 12 18:28:48.689476 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Apr 12 18:28:55.439290 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:55Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:28:55.439560 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:55Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:28:55.439653 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:55Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:28:55.439804 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:55Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:28:55.439854 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:55Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Apr 12 18:28:55.439907 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-04-12T18:28:55Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Apr 12 18:28:57.670864 systemd[1]: Started systemd-journald.service.
Apr 12 18:28:57.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.671702 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 12 18:28:57.671838 systemd[1]: Finished modprobe@configfs.service.
Apr 12 18:28:57.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.676593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 12 18:28:57.676716 systemd[1]: Finished modprobe@dm_mod.service.
Apr 12 18:28:57.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.681531 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 12 18:28:57.681650 systemd[1]: Finished modprobe@drm.service.
Apr 12 18:28:57.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.686287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 12 18:28:57.686406 systemd[1]: Finished modprobe@efi_pstore.service.
Apr 12 18:28:57.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.691305 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 12 18:28:57.691423 systemd[1]: Finished modprobe@fuse.service.
Apr 12 18:28:57.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.695964 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 12 18:28:57.696082 systemd[1]: Finished modprobe@loop.service.
Apr 12 18:28:57.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.700713 systemd[1]: Finished systemd-remount-fs.service.
Apr 12 18:28:57.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.707013 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Apr 12 18:28:57.712704 systemd[1]: Mounting sys-kernel-config.mount...
Apr 12 18:28:57.717076 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 12 18:28:57.735391 systemd[1]: Starting systemd-hwdb-update.service...
Apr 12 18:28:57.741063 systemd[1]: Starting systemd-journal-flush.service...
Apr 12 18:28:57.745768 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 12 18:28:57.746966 systemd[1]: Starting systemd-random-seed.service...
Apr 12 18:28:57.751912 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Apr 12 18:28:57.753105 systemd[1]: Starting systemd-sysusers.service...
Apr 12 18:28:57.760019 systemd[1]: Finished systemd-modules-load.service.
Apr 12 18:28:57.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.765419 systemd[1]: Finished systemd-network-generator.service.
Apr 12 18:28:57.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.770755 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Apr 12 18:28:57.775960 systemd[1]: Finished systemd-udev-trigger.service.
Apr 12 18:28:57.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.781465 systemd[1]: Mounted sys-kernel-config.mount.
Apr 12 18:28:57.786897 systemd[1]: Reached target network-pre.target.
Apr 12 18:28:57.792871 systemd[1]: Starting systemd-sysctl.service...
Apr 12 18:28:57.798109 systemd[1]: Starting systemd-udev-settle.service...
Apr 12 18:28:57.806912 udevadm[1156]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 12 18:28:57.817133 systemd[1]: Finished systemd-random-seed.service.
Apr 12 18:28:57.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.822840 systemd[1]: Reached target first-boot-complete.target.
Apr 12 18:28:57.829938 systemd-journald[1142]: Time spent on flushing to /var/log/journal/3973203c0cdf4d3eaad1ce385c132cf1 is 14.026ms for 1135 entries.
Apr 12 18:28:57.829938 systemd-journald[1142]: System Journal (/var/log/journal/3973203c0cdf4d3eaad1ce385c132cf1) is 8.0M, max 2.6G, 2.6G free.
Apr 12 18:28:57.959459 systemd-journald[1142]: Received client request to flush runtime journal.
Apr 12 18:28:57.960393 systemd[1]: Finished systemd-journal-flush.service.
Apr 12 18:28:57.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:57.974390 systemd[1]: Finished systemd-sysctl.service.
Apr 12 18:28:57.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:58.590304 systemd[1]: Finished systemd-sysusers.service.
Apr 12 18:28:58.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:59.562505 systemd[1]: Finished systemd-hwdb-update.service.
Apr 12 18:28:59.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:59.567000 audit: BPF prog-id=18 op=LOAD
Apr 12 18:28:59.567000 audit: BPF prog-id=19 op=LOAD
Apr 12 18:28:59.567000 audit: BPF prog-id=7 op=UNLOAD
Apr 12 18:28:59.567000 audit: BPF prog-id=8 op=UNLOAD
Apr 12 18:28:59.568551 systemd[1]: Starting systemd-udevd.service...
Apr 12 18:28:59.586956 systemd-udevd[1159]: Using default interface naming scheme 'v252'.
Apr 12 18:29:00.047509 systemd[1]: Started systemd-udevd.service.
Apr 12 18:29:00.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:00.058000 audit: BPF prog-id=20 op=LOAD
Apr 12 18:29:00.059398 systemd[1]: Starting systemd-networkd.service...
Apr 12 18:29:00.086721 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Apr 12 18:29:00.144000 audit[1169]: AVC avc: denied { confidentiality } for pid=1169 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Apr 12 18:29:00.159586 kernel: hv_vmbus: registering driver hv_balloon
Apr 12 18:29:00.164285 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Apr 12 18:29:00.164310 kernel: hv_balloon: Memory hot add disabled on ARM64
Apr 12 18:29:00.144000 audit[1169]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaada82d910 a1=aa2c a2=ffffb76224b0 a3=aaaada78b010 items=12 ppid=1159 pid=1169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:29:00.144000 audit: CWD cwd="/"
Apr 12 18:29:00.144000 audit: PATH item=0 name=(null) inode=6675 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:00.144000 audit: PATH item=1 name=(null) inode=9810 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:00.144000 audit: PATH item=2 name=(null) inode=9810 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:00.144000 audit: PATH item=3 name=(null) inode=9811 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:00.144000 audit: PATH item=4 name=(null) inode=9810 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:00.144000 audit: PATH item=5 name=(null) inode=9812 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:00.144000 audit: PATH item=6 name=(null) inode=9810 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:00.144000 audit: PATH item=7 name=(null) inode=9813 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:00.144000 audit: PATH item=8 name=(null) inode=9810 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:00.144000 audit: PATH item=9 name=(null) inode=9814 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:00.144000 audit: PATH item=10 name=(null) inode=9810 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:00.144000 audit: PATH item=11 name=(null) inode=9815 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:00.144000 audit: PROCTITLE proctitle="(udev-worker)"
Apr 12 18:29:00.175489 kernel: mousedev: PS/2 mouse device common for all mice
Apr 12 18:29:00.200679 kernel: hv_utils: Registering HyperV Utility Driver
Apr 12 18:29:00.200832 kernel: hv_vmbus: registering driver hv_utils
Apr 12 18:29:00.201477 kernel: hv_utils: Heartbeat IC version 3.0
Apr 12 18:29:00.209584 kernel: hv_utils: Shutdown IC version 3.2
Apr 12 18:29:00.209800 kernel: hv_utils: TimeSync IC version 4.0
Apr 12 18:29:00.222740 kernel: hv_vmbus: registering driver hyperv_fb
Apr 12 18:29:00.222000 audit: BPF prog-id=21 op=LOAD
Apr 12 18:29:00.222000 audit: BPF prog-id=22 op=LOAD
Apr 12 18:29:00.222000 audit: BPF prog-id=23 op=LOAD
Apr 12 18:29:00.224624 systemd[1]: Starting systemd-userdbd.service...
Apr 12 18:29:00.239675 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Apr 12 18:29:00.239815 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Apr 12 18:29:00.244598 kernel: Console: switching to colour dummy device 80x25
Apr 12 18:29:00.247093 kernel: Console: switching to colour frame buffer device 128x48
Apr 12 18:29:00.345407 systemd[1]: Started systemd-userdbd.service.
Apr 12 18:29:00.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:00.702157 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1171)
Apr 12 18:29:00.721494 systemd-networkd[1180]: lo: Link UP
Apr 12 18:29:00.721506 systemd-networkd[1180]: lo: Gained carrier
Apr 12 18:29:00.721904 systemd-networkd[1180]: Enumeration completed
Apr 12 18:29:00.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:00.721995 systemd[1]: Started systemd-networkd.service.
Apr 12 18:29:00.727625 systemd[1]: Starting systemd-networkd-wait-online.service...
Apr 12 18:29:00.735220 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Apr 12 18:29:00.742562 systemd[1]: Finished systemd-udev-settle.service.
Apr 12 18:29:00.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:00.749031 systemd[1]: Starting lvm2-activation-early.service...
Apr 12 18:29:00.799325 systemd-networkd[1180]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 12 18:29:00.848090 kernel: mlx5_core 85f6:00:02.0 enP34294s1: Link up
Apr 12 18:29:00.874075 kernel: hv_netvsc 002248bb-0422-0022-48bb-0422002248bb eth0: Data path switched to VF: enP34294s1
Apr 12 18:29:00.875083 systemd-networkd[1180]: enP34294s1: Link UP
Apr 12 18:29:00.875412 systemd-networkd[1180]: eth0: Link UP
Apr 12 18:29:00.875420 systemd-networkd[1180]: eth0: Gained carrier
Apr 12 18:29:00.883531 systemd-networkd[1180]: enP34294s1: Gained carrier
Apr 12 18:29:00.891200 systemd-networkd[1180]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16
Apr 12 18:29:01.174452 lvm[1237]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 12 18:29:01.220092 systemd[1]: Finished lvm2-activation-early.service.
Apr 12 18:29:01.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.225155 systemd[1]: Reached target cryptsetup.target.
Apr 12 18:29:01.230952 systemd[1]: Starting lvm2-activation.service...
Apr 12 18:29:01.235052 lvm[1238]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 12 18:29:01.254977 systemd[1]: Finished lvm2-activation.service.
Apr 12 18:29:01.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.259612 systemd[1]: Reached target local-fs-pre.target.
Apr 12 18:29:01.264244 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 12 18:29:01.264273 systemd[1]: Reached target local-fs.target.
Apr 12 18:29:01.268624 systemd[1]: Reached target machines.target.
Apr 12 18:29:01.274018 systemd[1]: Starting ldconfig.service...
Apr 12 18:29:01.292072 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Apr 12 18:29:01.292152 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Apr 12 18:29:01.293405 systemd[1]: Starting systemd-boot-update.service...
Apr 12 18:29:01.298606 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Apr 12 18:29:01.305009 systemd[1]: Starting systemd-machine-id-commit.service...
Apr 12 18:29:01.309713 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Apr 12 18:29:01.309771 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Apr 12 18:29:01.311004 systemd[1]: Starting systemd-tmpfiles-setup.service...
Apr 12 18:29:01.352000 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1240 (bootctl)
Apr 12 18:29:01.353169 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Apr 12 18:29:01.795917 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Apr 12 18:29:01.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.851227 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Apr 12 18:29:02.316270 systemd-networkd[1180]: eth0: Gained IPv6LL
Apr 12 18:29:02.321992 systemd[1]: Finished systemd-networkd-wait-online.service.
Apr 12 18:29:02.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.331593 kernel: kauditd_printk_skb: 68 callbacks suppressed
Apr 12 18:29:02.331665 kernel: audit: type=1130 audit(1712946542.326:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.499260 systemd-fsck[1248]: fsck.fat 4.2 (2021-01-31)
Apr 12 18:29:02.499260 systemd-fsck[1248]: /dev/sda1: 236 files, 117047/258078 clusters
Apr 12 18:29:02.501432 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Apr 12 18:29:02.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.509466 systemd[1]: Mounting boot.mount...
Apr 12 18:29:02.527995 kernel: audit: type=1130 audit(1712946542.506:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.673220 systemd[1]: Mounted boot.mount.
Apr 12 18:29:02.681341 systemd[1]: Finished systemd-boot-update.service.
Apr 12 18:29:02.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.703090 kernel: audit: type=1130 audit(1712946542.684:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.780056 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 12 18:29:03.784087 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 12 18:29:04.720988 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 12 18:29:04.721898 systemd[1]: Finished systemd-machine-id-commit.service.
Apr 12 18:29:04.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:04.744099 kernel: audit: type=1130 audit(1712946544.726:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:06.548299 systemd[1]: Finished systemd-tmpfiles-setup.service.
Apr 12 18:29:06.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:06.570276 kernel: audit: type=1130 audit(1712946546.552:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:06.570390 systemd[1]: Starting audit-rules.service...
Apr 12 18:29:06.575310 systemd[1]: Starting clean-ca-certificates.service...
Apr 12 18:29:06.580710 systemd[1]: Starting systemd-journal-catalog-update.service...
Apr 12 18:29:06.585000 audit: BPF prog-id=24 op=LOAD
Apr 12 18:29:06.587604 systemd[1]: Starting systemd-resolved.service...
Apr 12 18:29:06.596147 kernel: audit: type=1334 audit(1712946546.585:156): prog-id=24 op=LOAD
Apr 12 18:29:06.595000 audit: BPF prog-id=25 op=LOAD
Apr 12 18:29:06.598120 systemd[1]: Starting systemd-timesyncd.service...
Apr 12 18:29:06.606142 kernel: audit: type=1334 audit(1712946546.595:157): prog-id=25 op=LOAD
Apr 12 18:29:06.607919 systemd[1]: Starting systemd-update-utmp.service...
Apr 12 18:29:06.899201 systemd[1]: Started systemd-timesyncd.service.
Apr 12 18:29:06.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:06.905258 systemd[1]: Reached target time-set.target.
Apr 12 18:29:06.923824 kernel: audit: type=1130 audit(1712946546.902:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:06.939000 audit[1260]: SYSTEM_BOOT pid=1260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:06.960094 kernel: audit: type=1127 audit(1712946546.939:159): pid=1260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:06.960313 systemd[1]: Finished systemd-update-utmp.service.
Apr 12 18:29:06.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:06.983077 kernel: audit: type=1130 audit(1712946546.963:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:07.082555 systemd[1]: Finished clean-ca-certificates.service.
Apr 12 18:29:07.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:07.087341 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 12 18:29:07.490161 systemd-resolved[1257]: Positive Trust Anchors:
Apr 12 18:29:07.490473 systemd-resolved[1257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 12 18:29:07.490553 systemd-resolved[1257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Apr 12 18:29:07.494607 systemd-resolved[1257]: Using system hostname 'ci-3510.3.3-a-58e6b5da18'.
Apr 12 18:29:07.496081 systemd[1]: Started systemd-resolved.service.
Apr 12 18:29:07.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:07.500586 systemd[1]: Reached target network.target.
Apr 12 18:29:07.504877 kernel: kauditd_printk_skb: 1 callbacks suppressed
Apr 12 18:29:07.504938 kernel: audit: type=1130 audit(1712946547.499:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:07.526795 systemd[1]: Reached target network-online.target.
Apr 12 18:29:07.532384 systemd[1]: Reached target nss-lookup.target.
Apr 12 18:29:07.621420 systemd-timesyncd[1259]: Contacted time server 104.152.220.10:123 (0.flatcar.pool.ntp.org).
Apr 12 18:29:07.621500 systemd-timesyncd[1259]: Initial clock synchronization to Fri 2024-04-12 18:29:07.620773 UTC.
Apr 12 18:29:07.714701 systemd[1]: Finished systemd-journal-catalog-update.service.
Apr 12 18:29:07.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:07.740147 kernel: audit: type=1130 audit(1712946547.719:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:08.827000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 12 18:29:08.827000 audit[1275]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd1a4a8b0 a2=420 a3=0 items=0 ppid=1254 pid=1275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:29:08.865568 kernel: audit: type=1305 audit(1712946548.827:164): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 12 18:29:08.865710 kernel: audit: type=1300 audit(1712946548.827:164): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd1a4a8b0 a2=420 a3=0 items=0 ppid=1254 pid=1275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:29:08.865737 kernel: audit: type=1327 audit(1712946548.827:164): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 12 18:29:08.827000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 12 18:29:08.878376 augenrules[1275]: No rules
Apr 12 18:29:08.879339 systemd[1]: Finished audit-rules.service.
Apr 12 18:29:19.879094 ldconfig[1239]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 12 18:29:19.890838 systemd[1]: Finished ldconfig.service.
Apr 12 18:29:19.896598 systemd[1]: Starting systemd-update-done.service...
Apr 12 18:29:19.927424 systemd[1]: Finished systemd-update-done.service.
Apr 12 18:29:19.933158 systemd[1]: Reached target sysinit.target.
Apr 12 18:29:19.938394 systemd[1]: Started motdgen.path.
Apr 12 18:29:19.942230 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Apr 12 18:29:19.949640 systemd[1]: Started logrotate.timer.
Apr 12 18:29:19.953696 systemd[1]: Started mdadm.timer.
Apr 12 18:29:19.957232 systemd[1]: Started systemd-tmpfiles-clean.timer.
Apr 12 18:29:19.961838 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 12 18:29:19.961866 systemd[1]: Reached target paths.target.
Apr 12 18:29:19.966853 systemd[1]: Reached target timers.target.
Apr 12 18:29:19.971706 systemd[1]: Listening on dbus.socket.
Apr 12 18:29:19.976633 systemd[1]: Starting docker.socket...
Apr 12 18:29:19.996716 systemd[1]: Listening on sshd.socket.
Apr 12 18:29:20.000899 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Apr 12 18:29:20.001399 systemd[1]: Listening on docker.socket.
Apr 12 18:29:20.005522 systemd[1]: Reached target sockets.target.
Apr 12 18:29:20.009662 systemd[1]: Reached target basic.target.
Apr 12 18:29:20.014041 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Apr 12 18:29:20.014084 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Apr 12 18:29:20.015189 systemd[1]: Starting containerd.service...
Apr 12 18:29:20.019716 systemd[1]: Starting dbus.service...
Apr 12 18:29:20.023978 systemd[1]: Starting enable-oem-cloudinit.service...
Apr 12 18:29:20.029256 systemd[1]: Starting extend-filesystems.service...
Apr 12 18:29:20.035961 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Apr 12 18:29:20.037081 systemd[1]: Starting motdgen.service...
Apr 12 18:29:20.041471 systemd[1]: Started nvidia.service.
Apr 12 18:29:20.047176 systemd[1]: Starting prepare-cni-plugins.service...
Apr 12 18:29:20.052684 systemd[1]: Starting prepare-critools.service...
Apr 12 18:29:20.058410 systemd[1]: Starting prepare-helm.service...
Apr 12 18:29:20.064395 systemd[1]: Starting ssh-key-proc-cmdline.service...
Apr 12 18:29:20.070140 systemd[1]: Starting sshd-keygen.service...
Apr 12 18:29:20.076015 systemd[1]: Starting systemd-logind.service...
Apr 12 18:29:20.080302 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Apr 12 18:29:20.080365 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 12 18:29:20.080804 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 12 18:29:20.081508 systemd[1]: Starting update-engine.service...
Apr 12 18:29:20.086631 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Apr 12 18:29:20.096750 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 12 18:29:20.096928 systemd[1]: Finished ssh-key-proc-cmdline.service.
Apr 12 18:29:20.122708 extend-filesystems[1286]: Found sda
Apr 12 18:29:20.127413 extend-filesystems[1286]: Found sda1
Apr 12 18:29:20.127413 extend-filesystems[1286]: Found sda2
Apr 12 18:29:20.127413 extend-filesystems[1286]: Found sda3
Apr 12 18:29:20.127413 extend-filesystems[1286]: Found usr
Apr 12 18:29:20.127413 extend-filesystems[1286]: Found sda4
Apr 12 18:29:20.127413 extend-filesystems[1286]: Found sda6
Apr 12 18:29:20.127413 extend-filesystems[1286]: Found sda7
Apr 12 18:29:20.127413 extend-filesystems[1286]: Found sda9
Apr 12 18:29:20.127413 extend-filesystems[1286]: Checking size of /dev/sda9
Apr 12 18:29:20.153950 systemd[1]: motdgen.service: Deactivated successfully.
Apr 12 18:29:20.176351 jq[1285]: false
Apr 12 18:29:20.176513 jq[1304]: true
Apr 12 18:29:20.154188 systemd[1]: Finished motdgen.service.
Apr 12 18:29:20.164341 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 12 18:29:20.164551 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Apr 12 18:29:20.203906 jq[1316]: true
Apr 12 18:29:20.222831 tar[1306]: ./
Apr 12 18:29:20.222831 tar[1306]: ./loopback
Apr 12 18:29:20.225330 tar[1307]: crictl
Apr 12 18:29:20.225574 tar[1308]: linux-arm64/helm
Apr 12 18:29:20.240626 systemd-logind[1301]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 12 18:29:20.242559 systemd-logind[1301]: New seat seat0.
Apr 12 18:29:20.243899 extend-filesystems[1286]: Old size kept for /dev/sda9
Apr 12 18:29:20.257728 extend-filesystems[1286]: Found sr0
Apr 12 18:29:20.249287 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 12 18:29:20.271937 env[1311]: time="2024-04-12T18:29:20.267339824Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Apr 12 18:29:20.249478 systemd[1]: Finished extend-filesystems.service.
Apr 12 18:29:20.331381 tar[1306]: ./bandwidth
Apr 12 18:29:20.352269 env[1311]: time="2024-04-12T18:29:20.352206201Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 12 18:29:20.363113 env[1311]: time="2024-04-12T18:29:20.362905322Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 12 18:29:20.371208 env[1311]: time="2024-04-12T18:29:20.371158719Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.154-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 12 18:29:20.371349 env[1311]: time="2024-04-12T18:29:20.371332276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 12 18:29:20.374633 env[1311]: time="2024-04-12T18:29:20.374566548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 12 18:29:20.376435 env[1311]: time="2024-04-12T18:29:20.374723546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 12 18:29:20.376435 env[1311]: time="2024-04-12T18:29:20.374746186Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Apr 12 18:29:20.376435 env[1311]: time="2024-04-12T18:29:20.374756345Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 12 18:29:20.378184 bash[1344]: Updated "/home/core/.ssh/authorized_keys"
Apr 12 18:29:20.379088 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Apr 12 18:29:20.389168 env[1311]: time="2024-04-12T18:29:20.389118252Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 12 18:29:20.392096 env[1311]: time="2024-04-12T18:29:20.392032528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 12 18:29:20.393820 env[1311]: time="2024-04-12T18:29:20.393782502Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 12 18:29:20.394319 env[1311]: time="2024-04-12T18:29:20.394294335Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 12 18:29:20.395008 env[1311]: time="2024-04-12T18:29:20.394982524Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Apr 12 18:29:20.395784 env[1311]: time="2024-04-12T18:29:20.395759753Z" level=info msg="metadata content store policy set" policy=shared
Apr 12 18:29:20.422087 env[1311]: time="2024-04-12T18:29:20.421735886Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 12 18:29:20.422087 env[1311]: time="2024-04-12T18:29:20.421798165Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 12 18:29:20.422087 env[1311]: time="2024-04-12T18:29:20.421813045Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 12 18:29:20.422087 env[1311]: time="2024-04-12T18:29:20.421937843Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 12 18:29:20.422087 env[1311]: time="2024-04-12T18:29:20.421962363Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 12 18:29:20.422087 env[1311]: time="2024-04-12T18:29:20.421976923Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 12 18:29:20.422087 env[1311]: time="2024-04-12T18:29:20.421990402Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 12 18:29:20.424096 env[1311]: time="2024-04-12T18:29:20.422642513Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 12 18:29:20.424096 env[1311]: time="2024-04-12T18:29:20.422676152Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Apr 12 18:29:20.424096 env[1311]: time="2024-04-12T18:29:20.422691312Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 12 18:29:20.424096 env[1311]: time="2024-04-12T18:29:20.422704192Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 12 18:29:20.424096 env[1311]: time="2024-04-12T18:29:20.422717751Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 12 18:29:20.424096 env[1311]: time="2024-04-12T18:29:20.422859229Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 12 18:29:20.424096 env[1311]: time="2024-04-12T18:29:20.422934148Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 12 18:29:20.424096 env[1311]: time="2024-04-12T18:29:20.423171425Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 12 18:29:20.424096 env[1311]: time="2024-04-12T18:29:20.423197224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 12 18:29:20.424096 env[1311]: time="2024-04-12T18:29:20.423215424Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 12 18:29:20.424096 env[1311]: time="2024-04-12T18:29:20.423261863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 12 18:29:20.424096 env[1311]: time="2024-04-12T18:29:20.423275103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 12 18:29:20.424096 env[1311]: time="2024-04-12T18:29:20.423287783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 12 18:29:20.424096 env[1311]: time="2024-04-12T18:29:20.423298343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 12 18:29:20.424436 env[1311]: time="2024-04-12T18:29:20.423309343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 12 18:29:20.424436 env[1311]: time="2024-04-12T18:29:20.423323262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 12 18:29:20.424436 env[1311]: time="2024-04-12T18:29:20.423334542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 12 18:29:20.424436 env[1311]: time="2024-04-12T18:29:20.423346582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 12 18:29:20.424436 env[1311]: time="2024-04-12T18:29:20.423359582Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 12 18:29:20.424436 env[1311]: time="2024-04-12T18:29:20.423484220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 12 18:29:20.424436 env[1311]: time="2024-04-12T18:29:20.423504340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 12 18:29:20.424436 env[1311]: time="2024-04-12T18:29:20.423518140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 12 18:29:20.424436 env[1311]: time="2024-04-12T18:29:20.423531419Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 12 18:29:20.424436 env[1311]: time="2024-04-12T18:29:20.423547139Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Apr 12 18:29:20.424436 env[1311]: time="2024-04-12T18:29:20.423559699Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 12 18:29:20.424436 env[1311]: time="2024-04-12T18:29:20.423576699Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Apr 12 18:29:20.424436 env[1311]: time="2024-04-12T18:29:20.423611898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 12 18:29:20.424682 env[1311]: time="2024-04-12T18:29:20.423804895Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 12 18:29:20.424682 env[1311]: time="2024-04-12T18:29:20.423855935Z" level=info msg="Connect containerd service" Apr 12 18:29:20.424682 env[1311]: time="2024-04-12T18:29:20.423891174Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 12 18:29:20.425750 systemd[1]: Started containerd.service. Apr 12 18:29:20.447544 tar[1306]: ./ptp Apr 12 18:29:20.447585 env[1311]: time="2024-04-12T18:29:20.425288353Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 18:29:20.447585 env[1311]: time="2024-04-12T18:29:20.425536390Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 12 18:29:20.447585 env[1311]: time="2024-04-12T18:29:20.425573029Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 12 18:29:20.447585 env[1311]: time="2024-04-12T18:29:20.425627428Z" level=info msg="containerd successfully booted in 0.186456s" Apr 12 18:29:20.447585 env[1311]: time="2024-04-12T18:29:20.437723928Z" level=info msg="Start subscribing containerd event" Apr 12 18:29:20.447585 env[1311]: time="2024-04-12T18:29:20.437804407Z" level=info msg="Start recovering state" Apr 12 18:29:20.447585 env[1311]: time="2024-04-12T18:29:20.437886406Z" level=info msg="Start event monitor" Apr 12 18:29:20.447585 env[1311]: time="2024-04-12T18:29:20.437913005Z" level=info msg="Start snapshots syncer" Apr 12 18:29:20.447585 env[1311]: time="2024-04-12T18:29:20.437927085Z" level=info msg="Start cni network conf syncer for default" Apr 12 18:29:20.447585 env[1311]: time="2024-04-12T18:29:20.437939325Z" level=info msg="Start streaming server" Apr 12 18:29:20.508989 systemd[1]: nvidia.service: Deactivated successfully. Apr 12 18:29:20.550257 tar[1306]: ./vlan Apr 12 18:29:20.619909 tar[1306]: ./host-device Apr 12 18:29:20.683608 dbus-daemon[1284]: [system] SELinux support is enabled Apr 12 18:29:20.683780 systemd[1]: Started dbus.service. Apr 12 18:29:20.689569 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 12 18:29:20.689604 systemd[1]: Reached target system-config.target. Apr 12 18:29:20.696909 tar[1306]: ./tuning Apr 12 18:29:20.698447 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 12 18:29:20.698475 systemd[1]: Reached target user-config.target. Apr 12 18:29:20.705844 systemd[1]: Started systemd-logind.service. 
Apr 12 18:29:20.705889 dbus-daemon[1284]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 12 18:29:20.758272 tar[1306]: ./vrf Apr 12 18:29:20.812216 tar[1306]: ./sbr Apr 12 18:29:20.867711 tar[1306]: ./tap Apr 12 18:29:20.944512 tar[1306]: ./dhcp Apr 12 18:29:21.104997 update_engine[1303]: I0412 18:29:21.089251 1303 main.cc:92] Flatcar Update Engine starting Apr 12 18:29:21.119180 tar[1306]: ./static Apr 12 18:29:21.167200 systemd[1]: Finished prepare-critools.service. Apr 12 18:29:21.175392 tar[1306]: ./firewall Apr 12 18:29:21.177123 tar[1308]: linux-arm64/LICENSE Apr 12 18:29:21.177197 tar[1308]: linux-arm64/README.md Apr 12 18:29:21.182560 systemd[1]: Finished prepare-helm.service. Apr 12 18:29:21.190847 systemd[1]: Started update-engine.service. Apr 12 18:29:21.191153 update_engine[1303]: I0412 18:29:21.190898 1303 update_check_scheduler.cc:74] Next update check in 6m13s Apr 12 18:29:21.199398 systemd[1]: Started locksmithd.service. Apr 12 18:29:21.234744 tar[1306]: ./macvlan Apr 12 18:29:21.268498 tar[1306]: ./dummy Apr 12 18:29:21.301682 tar[1306]: ./bridge Apr 12 18:29:21.337831 tar[1306]: ./ipvlan Apr 12 18:29:21.370850 tar[1306]: ./portmap Apr 12 18:29:21.402477 tar[1306]: ./host-local Apr 12 18:29:21.475914 systemd[1]: Finished prepare-cni-plugins.service. Apr 12 18:29:22.017422 sshd_keygen[1302]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 12 18:29:22.034801 systemd[1]: Finished sshd-keygen.service. Apr 12 18:29:22.040696 systemd[1]: Starting issuegen.service... Apr 12 18:29:22.045433 systemd[1]: Started waagent.service. Apr 12 18:29:22.051407 systemd[1]: issuegen.service: Deactivated successfully. Apr 12 18:29:22.051575 systemd[1]: Finished issuegen.service. Apr 12 18:29:22.058467 systemd[1]: Starting systemd-user-sessions.service... Apr 12 18:29:22.086951 systemd[1]: Finished systemd-user-sessions.service. Apr 12 18:29:22.094675 systemd[1]: Started getty@tty1.service. 
Apr 12 18:29:22.100597 systemd[1]: Started serial-getty@ttyAMA0.service. Apr 12 18:29:22.105666 systemd[1]: Reached target getty.target. Apr 12 18:29:22.109856 systemd[1]: Reached target multi-user.target. Apr 12 18:29:22.115737 systemd[1]: Starting systemd-update-utmp-runlevel.service... Apr 12 18:29:22.128261 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Apr 12 18:29:22.128423 systemd[1]: Finished systemd-update-utmp-runlevel.service. Apr 12 18:29:22.133986 systemd[1]: Startup finished in 750ms (kernel) + 18.134s (initrd) + 36.356s (userspace) = 55.240s. Apr 12 18:29:22.988767 login[1412]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Apr 12 18:29:23.009229 login[1411]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 12 18:29:23.050633 systemd[1]: Created slice user-500.slice. Apr 12 18:29:23.051756 systemd[1]: Starting user-runtime-dir@500.service... Apr 12 18:29:23.054425 systemd-logind[1301]: New session 2 of user core. Apr 12 18:29:23.118160 systemd[1]: Finished user-runtime-dir@500.service. Apr 12 18:29:23.119643 systemd[1]: Starting user@500.service... Apr 12 18:29:23.181180 (systemd)[1415]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:29:23.470982 systemd[1415]: Queued start job for default target default.target. Apr 12 18:29:23.471562 systemd[1415]: Reached target paths.target. Apr 12 18:29:23.471581 systemd[1415]: Reached target sockets.target. Apr 12 18:29:23.471593 systemd[1415]: Reached target timers.target. Apr 12 18:29:23.471607 systemd[1415]: Reached target basic.target. Apr 12 18:29:23.471715 systemd[1]: Started user@500.service. Apr 12 18:29:23.472589 systemd[1]: Started session-2.scope. Apr 12 18:29:23.473016 systemd[1415]: Reached target default.target. Apr 12 18:29:23.473199 systemd[1415]: Startup finished in 286ms. 
Apr 12 18:29:23.881988 locksmithd[1392]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 12 18:29:23.990179 login[1412]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 12 18:29:23.994116 systemd-logind[1301]: New session 1 of user core. Apr 12 18:29:23.994552 systemd[1]: Started session-1.scope. Apr 12 18:29:29.392404 waagent[1408]: 2024-04-12T18:29:29.392300Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Apr 12 18:29:29.399178 waagent[1408]: 2024-04-12T18:29:29.399091Z INFO Daemon Daemon OS: flatcar 3510.3.3 Apr 12 18:29:29.403675 waagent[1408]: 2024-04-12T18:29:29.403608Z INFO Daemon Daemon Python: 3.9.16 Apr 12 18:29:29.408323 waagent[1408]: 2024-04-12T18:29:29.408237Z INFO Daemon Daemon Run daemon Apr 12 18:29:29.412670 waagent[1408]: 2024-04-12T18:29:29.412610Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.3' Apr 12 18:29:29.429311 waagent[1408]: 2024-04-12T18:29:29.429183Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Apr 12 18:29:29.444487 waagent[1408]: 2024-04-12T18:29:29.444362Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 12 18:29:29.454469 waagent[1408]: 2024-04-12T18:29:29.454379Z INFO Daemon Daemon cloud-init is enabled: False Apr 12 18:29:29.459559 waagent[1408]: 2024-04-12T18:29:29.459481Z INFO Daemon Daemon Using waagent for provisioning Apr 12 18:29:29.465414 waagent[1408]: 2024-04-12T18:29:29.465346Z INFO Daemon Daemon Activate resource disk Apr 12 18:29:29.470360 waagent[1408]: 2024-04-12T18:29:29.470294Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 12 18:29:29.485336 waagent[1408]: 2024-04-12T18:29:29.485252Z INFO Daemon Daemon Found device: None Apr 12 18:29:29.490101 waagent[1408]: 2024-04-12T18:29:29.490001Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 12 18:29:29.498502 waagent[1408]: 2024-04-12T18:29:29.498424Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Apr 12 18:29:29.510374 waagent[1408]: 2024-04-12T18:29:29.510309Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 12 18:29:29.516395 waagent[1408]: 2024-04-12T18:29:29.516325Z INFO Daemon Daemon Running default provisioning handler Apr 12 18:29:29.529460 waagent[1408]: 2024-04-12T18:29:29.529329Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Apr 12 18:29:29.544208 waagent[1408]: 2024-04-12T18:29:29.544083Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 12 18:29:29.554256 waagent[1408]: 2024-04-12T18:29:29.554170Z INFO Daemon Daemon cloud-init is enabled: False Apr 12 18:29:29.559316 waagent[1408]: 2024-04-12T18:29:29.559231Z INFO Daemon Daemon Copying ovf-env.xml Apr 12 18:29:29.639934 waagent[1408]: 2024-04-12T18:29:29.639797Z INFO Daemon Daemon Successfully mounted dvd Apr 12 18:29:29.785953 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Apr 12 18:29:29.860004 waagent[1408]: 2024-04-12T18:29:29.859866Z INFO Daemon Daemon Detect protocol endpoint Apr 12 18:29:29.865408 waagent[1408]: 2024-04-12T18:29:29.865313Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 12 18:29:29.871292 waagent[1408]: 2024-04-12T18:29:29.871205Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Apr 12 18:29:29.877980 waagent[1408]: 2024-04-12T18:29:29.877894Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 12 18:29:29.883382 waagent[1408]: 2024-04-12T18:29:29.883307Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 12 18:29:29.888470 waagent[1408]: 2024-04-12T18:29:29.888395Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 12 18:29:30.043398 waagent[1408]: 2024-04-12T18:29:30.043278Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 12 18:29:30.050366 waagent[1408]: 2024-04-12T18:29:30.050321Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 12 18:29:30.056738 waagent[1408]: 2024-04-12T18:29:30.056657Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 12 18:29:31.001281 waagent[1408]: 2024-04-12T18:29:31.001140Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 12 18:29:31.016204 waagent[1408]: 2024-04-12T18:29:31.016125Z INFO Daemon Daemon Forcing an update of the goal state.. 
Apr 12 18:29:31.021928 waagent[1408]: 2024-04-12T18:29:31.021855Z INFO Daemon Daemon Fetching goal state [incarnation 1] Apr 12 18:29:31.102354 waagent[1408]: 2024-04-12T18:29:31.102195Z INFO Daemon Daemon Found private key matching thumbprint 6249EF38A94AD18564DD44227B5BF6C3A6BF8395 Apr 12 18:29:31.110969 waagent[1408]: 2024-04-12T18:29:31.110863Z INFO Daemon Daemon Certificate with thumbprint 5881BD925595713C08D2783F16E858A2233096FB has no matching private key. Apr 12 18:29:31.120499 waagent[1408]: 2024-04-12T18:29:31.120394Z INFO Daemon Daemon Fetch goal state completed Apr 12 18:29:31.162632 waagent[1408]: 2024-04-12T18:29:31.162570Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: a86a2f1f-6600-4e17-92dd-e0739eb6d438 New eTag: 4159128171308771628] Apr 12 18:29:31.173738 waagent[1408]: 2024-04-12T18:29:31.173639Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Apr 12 18:29:31.190000 waagent[1408]: 2024-04-12T18:29:31.189913Z INFO Daemon Daemon Starting provisioning Apr 12 18:29:31.195089 waagent[1408]: 2024-04-12T18:29:31.194978Z INFO Daemon Daemon Handle ovf-env.xml. Apr 12 18:29:31.199837 waagent[1408]: 2024-04-12T18:29:31.199745Z INFO Daemon Daemon Set hostname [ci-3510.3.3-a-58e6b5da18] Apr 12 18:29:31.487397 waagent[1408]: 2024-04-12T18:29:31.487265Z INFO Daemon Daemon Publish hostname [ci-3510.3.3-a-58e6b5da18] Apr 12 18:29:31.494073 waagent[1408]: 2024-04-12T18:29:31.493984Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 12 18:29:31.500433 waagent[1408]: 2024-04-12T18:29:31.500360Z INFO Daemon Daemon Primary interface is [eth0] Apr 12 18:29:31.516391 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Apr 12 18:29:31.516555 systemd[1]: Stopped systemd-networkd-wait-online.service. Apr 12 18:29:31.516608 systemd[1]: Stopping systemd-networkd-wait-online.service... Apr 12 18:29:31.516835 systemd[1]: Stopping systemd-networkd.service... 
Apr 12 18:29:31.523106 systemd-networkd[1180]: eth0: DHCPv6 lease lost Apr 12 18:29:31.524464 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 12 18:29:31.524637 systemd[1]: Stopped systemd-networkd.service. Apr 12 18:29:31.526559 systemd[1]: Starting systemd-networkd.service... Apr 12 18:29:31.553233 systemd-networkd[1460]: enP34294s1: Link UP Apr 12 18:29:31.553242 systemd-networkd[1460]: enP34294s1: Gained carrier Apr 12 18:29:31.554117 systemd-networkd[1460]: eth0: Link UP Apr 12 18:29:31.554127 systemd-networkd[1460]: eth0: Gained carrier Apr 12 18:29:31.554433 systemd-networkd[1460]: lo: Link UP Apr 12 18:29:31.554442 systemd-networkd[1460]: lo: Gained carrier Apr 12 18:29:31.554668 systemd-networkd[1460]: eth0: Gained IPv6LL Apr 12 18:29:31.555886 systemd-networkd[1460]: Enumeration completed Apr 12 18:29:31.555997 systemd[1]: Started systemd-networkd.service. Apr 12 18:29:31.557496 systemd-networkd[1460]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 18:29:31.557720 systemd[1]: Starting systemd-networkd-wait-online.service... Apr 12 18:29:31.561420 waagent[1408]: 2024-04-12T18:29:31.561274Z INFO Daemon Daemon Create user account if not exists Apr 12 18:29:31.567297 waagent[1408]: 2024-04-12T18:29:31.567219Z INFO Daemon Daemon User core already exists, skip useradd Apr 12 18:29:31.573269 waagent[1408]: 2024-04-12T18:29:31.573185Z INFO Daemon Daemon Configure sudoer Apr 12 18:29:31.578378 waagent[1408]: 2024-04-12T18:29:31.578302Z INFO Daemon Daemon Configure sshd Apr 12 18:29:31.582616 waagent[1408]: 2024-04-12T18:29:31.582536Z INFO Daemon Daemon Deploy ssh public key. Apr 12 18:29:31.583138 systemd-networkd[1460]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16 Apr 12 18:29:31.593911 systemd[1]: Finished systemd-networkd-wait-online.service. 
Apr 12 18:29:32.808541 waagent[1408]: 2024-04-12T18:29:32.808450Z INFO Daemon Daemon Provisioning complete Apr 12 18:29:32.830198 waagent[1408]: 2024-04-12T18:29:32.830135Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 12 18:29:32.836520 waagent[1408]: 2024-04-12T18:29:32.836442Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Apr 12 18:29:32.847511 waagent[1408]: 2024-04-12T18:29:32.847434Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Apr 12 18:29:33.150562 waagent[1469]: 2024-04-12T18:29:33.150470Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Apr 12 18:29:33.151659 waagent[1469]: 2024-04-12T18:29:33.151602Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 12 18:29:33.151895 waagent[1469]: 2024-04-12T18:29:33.151850Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 12 18:29:33.164571 waagent[1469]: 2024-04-12T18:29:33.164486Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Apr 12 18:29:33.164905 waagent[1469]: 2024-04-12T18:29:33.164858Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Apr 12 18:29:33.235903 waagent[1469]: 2024-04-12T18:29:33.235774Z INFO ExtHandler ExtHandler Found private key matching thumbprint 6249EF38A94AD18564DD44227B5BF6C3A6BF8395 Apr 12 18:29:33.236306 waagent[1469]: 2024-04-12T18:29:33.236254Z INFO ExtHandler ExtHandler Certificate with thumbprint 5881BD925595713C08D2783F16E858A2233096FB has no matching private key. 
Apr 12 18:29:33.236628 waagent[1469]: 2024-04-12T18:29:33.236579Z INFO ExtHandler ExtHandler Fetch goal state completed Apr 12 18:29:33.254138 waagent[1469]: 2024-04-12T18:29:33.254080Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 87fe236d-a822-42be-8a4b-0d29d8f746b8 New eTag: 4159128171308771628] Apr 12 18:29:33.254871 waagent[1469]: 2024-04-12T18:29:33.254816Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Apr 12 18:29:33.355648 waagent[1469]: 2024-04-12T18:29:33.355511Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.3; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 12 18:29:33.390299 waagent[1469]: 2024-04-12T18:29:33.390210Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1469 Apr 12 18:29:33.394227 waagent[1469]: 2024-04-12T18:29:33.394156Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.3', '', 'Flatcar Container Linux by Kinvolk'] Apr 12 18:29:33.395707 waagent[1469]: 2024-04-12T18:29:33.395649Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 12 18:29:33.551314 waagent[1469]: 2024-04-12T18:29:33.551204Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 12 18:29:33.551848 waagent[1469]: 2024-04-12T18:29:33.551795Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 12 18:29:33.559794 waagent[1469]: 2024-04-12T18:29:33.559742Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Apr 12 18:29:33.560487 waagent[1469]: 2024-04-12T18:29:33.560433Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Apr 12 18:29:33.561747 waagent[1469]: 2024-04-12T18:29:33.561686Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Apr 12 18:29:33.563245 waagent[1469]: 2024-04-12T18:29:33.563178Z INFO ExtHandler ExtHandler Starting env monitor service. Apr 12 18:29:33.563480 waagent[1469]: 2024-04-12T18:29:33.563414Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 12 18:29:33.563777 waagent[1469]: 2024-04-12T18:29:33.563717Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 12 18:29:33.564756 waagent[1469]: 2024-04-12T18:29:33.564681Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Apr 12 18:29:33.565096 waagent[1469]: 2024-04-12T18:29:33.565021Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 12 18:29:33.565096 waagent[1469]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 12 18:29:33.565096 waagent[1469]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Apr 12 18:29:33.565096 waagent[1469]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 12 18:29:33.565096 waagent[1469]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 12 18:29:33.565096 waagent[1469]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 12 18:29:33.565096 waagent[1469]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 12 18:29:33.567292 waagent[1469]: 2024-04-12T18:29:33.567131Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Apr 12 18:29:33.567825 waagent[1469]: 2024-04-12T18:29:33.567747Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 12 18:29:33.568405 waagent[1469]: 2024-04-12T18:29:33.568342Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 12 18:29:33.568970 waagent[1469]: 2024-04-12T18:29:33.568898Z INFO EnvHandler ExtHandler Configure routes Apr 12 18:29:33.569146 waagent[1469]: 2024-04-12T18:29:33.569091Z INFO EnvHandler ExtHandler Gateway:None Apr 12 18:29:33.569355 waagent[1469]: 2024-04-12T18:29:33.569289Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 12 18:29:33.569523 waagent[1469]: 2024-04-12T18:29:33.569459Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 12 18:29:33.569600 waagent[1469]: 2024-04-12T18:29:33.569550Z INFO EnvHandler ExtHandler Routes:None Apr 12 18:29:33.570981 waagent[1469]: 2024-04-12T18:29:33.570865Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 12 18:29:33.571174 waagent[1469]: 2024-04-12T18:29:33.571104Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Apr 12 18:29:33.571893 waagent[1469]: 2024-04-12T18:29:33.571819Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 12 18:29:33.581893 waagent[1469]: 2024-04-12T18:29:33.581818Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Apr 12 18:29:33.584195 waagent[1469]: 2024-04-12T18:29:33.584128Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Apr 12 18:29:33.586135 waagent[1469]: 2024-04-12T18:29:33.586047Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Apr 12 18:29:33.612366 waagent[1469]: 2024-04-12T18:29:33.612242Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1460' Apr 12 18:29:33.629239 waagent[1469]: 2024-04-12T18:29:33.629175Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Apr 12 18:29:33.720274 waagent[1469]: 2024-04-12T18:29:33.720131Z INFO MonitorHandler ExtHandler Network interfaces: Apr 12 18:29:33.720274 waagent[1469]: Executing ['ip', '-a', '-o', 'link']: Apr 12 18:29:33.720274 waagent[1469]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 12 18:29:33.720274 waagent[1469]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:04:22 brd ff:ff:ff:ff:ff:ff Apr 12 18:29:33.720274 waagent[1469]: 3: enP34294s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:04:22 brd ff:ff:ff:ff:ff:ff\ altname enP34294p0s2 Apr 12 18:29:33.720274 waagent[1469]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 12 18:29:33.720274 waagent[1469]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 12 18:29:33.720274 waagent[1469]: 2: eth0 inet 10.200.20.15/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 12 18:29:33.720274 waagent[1469]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 12 18:29:33.720274 waagent[1469]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Apr 12 18:29:33.720274 waagent[1469]: 2: eth0 inet6 fe80::222:48ff:febb:422/64 scope link \ valid_lft forever preferred_lft forever Apr 12 18:29:33.867128 waagent[1469]: 2024-04-12T18:29:33.867048Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.10.0.8 -- exiting Apr 12 18:29:34.852013 
waagent[1408]: 2024-04-12T18:29:34.851890Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Apr 12 18:29:34.855764 waagent[1408]: 2024-04-12T18:29:34.855703Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.10.0.8 to be the latest agent Apr 12 18:29:36.045215 waagent[1500]: 2024-04-12T18:29:36.045120Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.10.0.8) Apr 12 18:29:36.046262 waagent[1500]: 2024-04-12T18:29:36.046202Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.3 Apr 12 18:29:36.046499 waagent[1500]: 2024-04-12T18:29:36.046453Z INFO ExtHandler ExtHandler Python: 3.9.16 Apr 12 18:29:36.046707 waagent[1500]: 2024-04-12T18:29:36.046663Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Apr 12 18:29:36.055018 waagent[1500]: 2024-04-12T18:29:36.054890Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.3; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 12 18:29:36.055624 waagent[1500]: 2024-04-12T18:29:36.055568Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 12 18:29:36.055858 waagent[1500]: 2024-04-12T18:29:36.055813Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 12 18:29:36.069274 waagent[1500]: 2024-04-12T18:29:36.069187Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 12 18:29:36.081572 waagent[1500]: 2024-04-12T18:29:36.081508Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.149 Apr 12 18:29:36.082804 waagent[1500]: 2024-04-12T18:29:36.082748Z INFO ExtHandler Apr 12 18:29:36.083048 waagent[1500]: 2024-04-12T18:29:36.082999Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 009c9e1c-eec0-4814-a3e5-05a584e6f431 eTag: 4159128171308771628 source: Fabric] Apr 12 18:29:36.083894 waagent[1500]: 
2024-04-12T18:29:36.083841Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Apr 12 18:29:36.085211 waagent[1500]: 2024-04-12T18:29:36.085154Z INFO ExtHandler Apr 12 18:29:36.085421 waagent[1500]: 2024-04-12T18:29:36.085375Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 12 18:29:36.094591 waagent[1500]: 2024-04-12T18:29:36.094535Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 12 18:29:36.095274 waagent[1500]: 2024-04-12T18:29:36.095226Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Apr 12 18:29:36.113974 waagent[1500]: 2024-04-12T18:29:36.113909Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Apr 12 18:29:36.186154 waagent[1500]: 2024-04-12T18:29:36.185992Z INFO ExtHandler Downloaded certificate {'thumbprint': '6249EF38A94AD18564DD44227B5BF6C3A6BF8395', 'hasPrivateKey': True} Apr 12 18:29:36.187401 waagent[1500]: 2024-04-12T18:29:36.187342Z INFO ExtHandler Downloaded certificate {'thumbprint': '5881BD925595713C08D2783F16E858A2233096FB', 'hasPrivateKey': False} Apr 12 18:29:36.188679 waagent[1500]: 2024-04-12T18:29:36.188620Z INFO ExtHandler Fetch goal state completed Apr 12 18:29:36.212736 waagent[1500]: 2024-04-12T18:29:36.212621Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.7 1 Nov 2022 (Library: OpenSSL 3.0.7 1 Nov 2022) Apr 12 18:29:36.225466 waagent[1500]: 2024-04-12T18:29:36.225363Z INFO ExtHandler ExtHandler WALinuxAgent-2.10.0.8 running as process 1500 Apr 12 18:29:36.229296 waagent[1500]: 2024-04-12T18:29:36.229216Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.3', '', 'Flatcar Container Linux by Kinvolk'] Apr 12 18:29:36.230940 waagent[1500]: 2024-04-12T18:29:36.230875Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 12 18:29:36.236185 waagent[1500]: 2024-04-12T18:29:36.236130Z INFO ExtHandler ExtHandler Firewalld service not 
running/unavailable, trying to set up waagent-network-setup.service Apr 12 18:29:36.236705 waagent[1500]: 2024-04-12T18:29:36.236650Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 12 18:29:36.244636 waagent[1500]: 2024-04-12T18:29:36.244579Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Apr 12 18:29:36.245347 waagent[1500]: 2024-04-12T18:29:36.245290Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Apr 12 18:29:36.251840 waagent[1500]: 2024-04-12T18:29:36.251727Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Apr 12 18:29:36.253056 waagent[1500]: 2024-04-12T18:29:36.252994Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Apr 12 18:29:36.254836 waagent[1500]: 2024-04-12T18:29:36.254768Z INFO ExtHandler ExtHandler Starting env monitor service. Apr 12 18:29:36.255121 waagent[1500]: 2024-04-12T18:29:36.255037Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 12 18:29:36.255685 waagent[1500]: 2024-04-12T18:29:36.255620Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 12 18:29:36.256350 waagent[1500]: 2024-04-12T18:29:36.256279Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Apr 12 18:29:36.256666 waagent[1500]: 2024-04-12T18:29:36.256607Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 12 18:29:36.256666 waagent[1500]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 12 18:29:36.256666 waagent[1500]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Apr 12 18:29:36.256666 waagent[1500]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 12 18:29:36.256666 waagent[1500]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 12 18:29:36.256666 waagent[1500]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 12 18:29:36.256666 waagent[1500]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 12 18:29:36.259050 waagent[1500]: 2024-04-12T18:29:36.258933Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 12 18:29:36.259518 waagent[1500]: 2024-04-12T18:29:36.259448Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 12 18:29:36.259791 waagent[1500]: 2024-04-12T18:29:36.259734Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 12 18:29:36.260620 waagent[1500]: 2024-04-12T18:29:36.260541Z INFO EnvHandler ExtHandler Configure routes Apr 12 18:29:36.260790 waagent[1500]: 2024-04-12T18:29:36.260738Z INFO EnvHandler ExtHandler Gateway:None Apr 12 18:29:36.260909 waagent[1500]: 2024-04-12T18:29:36.260865Z INFO EnvHandler ExtHandler Routes:None Apr 12 18:29:36.261426 waagent[1500]: 2024-04-12T18:29:36.261355Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 12 18:29:36.264423 waagent[1500]: 2024-04-12T18:29:36.264350Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 12 18:29:36.267825 waagent[1500]: 2024-04-12T18:29:36.267553Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 12 18:29:36.268025 waagent[1500]: 2024-04-12T18:29:36.267953Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
Apr 12 18:29:36.270834 waagent[1500]: 2024-04-12T18:29:36.270677Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 12 18:29:36.278296 waagent[1500]: 2024-04-12T18:29:36.278215Z INFO MonitorHandler ExtHandler Network interfaces: Apr 12 18:29:36.278296 waagent[1500]: Executing ['ip', '-a', '-o', 'link']: Apr 12 18:29:36.278296 waagent[1500]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 12 18:29:36.278296 waagent[1500]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:04:22 brd ff:ff:ff:ff:ff:ff Apr 12 18:29:36.278296 waagent[1500]: 3: enP34294s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:04:22 brd ff:ff:ff:ff:ff:ff\ altname enP34294p0s2 Apr 12 18:29:36.278296 waagent[1500]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 12 18:29:36.278296 waagent[1500]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 12 18:29:36.278296 waagent[1500]: 2: eth0 inet 10.200.20.15/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 12 18:29:36.278296 waagent[1500]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 12 18:29:36.278296 waagent[1500]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Apr 12 18:29:36.278296 waagent[1500]: 2: eth0 inet6 fe80::222:48ff:febb:422/64 scope link \ valid_lft forever preferred_lft forever Apr 12 18:29:36.286828 waagent[1500]: 2024-04-12T18:29:36.286748Z INFO ExtHandler ExtHandler Downloading agent manifest Apr 12 18:29:36.318477 waagent[1500]: 2024-04-12T18:29:36.318362Z INFO ExtHandler ExtHandler Apr 12 18:29:36.318786 waagent[1500]: 2024-04-12T18:29:36.318735Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState
started [incarnation_1 channel: WireServer source: Fabric activity: a03c3000-6771-46c3-8ee3-6016ccc89eb4 correlation 77c54a43-1b36-4d5b-b8c0-414912988645 created: 2024-04-12T18:27:36.112473Z] Apr 12 18:29:36.319975 waagent[1500]: 2024-04-12T18:29:36.319918Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Apr 12 18:29:36.321906 waagent[1500]: 2024-04-12T18:29:36.321853Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Apr 12 18:29:36.345578 waagent[1500]: 2024-04-12T18:29:36.345478Z INFO ExtHandler ExtHandler Looking for existing remote access users. Apr 12 18:29:36.368539 waagent[1500]: 2024-04-12T18:29:36.368464Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.10.0.8 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 8E1E8530-A3D0-4301-A14F-E8802775C21C;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Apr 12 18:29:36.582499 waagent[1500]: 2024-04-12T18:29:36.582340Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Apr 12 18:29:36.582499 waagent[1500]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:29:36.582499 waagent[1500]: pkts bytes target prot opt in out source destination Apr 12 18:29:36.582499 waagent[1500]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:29:36.582499 waagent[1500]: pkts bytes target prot opt in out source destination Apr 12 18:29:36.582499 waagent[1500]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:29:36.582499 waagent[1500]: pkts bytes target prot opt in out source destination Apr 12 18:29:36.582499 waagent[1500]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 12 18:29:36.582499 waagent[1500]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 12 18:29:36.582499 waagent[1500]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 12 18:29:36.590953 waagent[1500]: 2024-04-12T18:29:36.590814Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 12 18:29:36.590953 waagent[1500]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:29:36.590953 waagent[1500]: pkts bytes target prot opt in out source destination Apr 12 18:29:36.590953 waagent[1500]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:29:36.590953 waagent[1500]: pkts bytes target prot opt in out source destination Apr 12 18:29:36.590953 waagent[1500]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:29:36.590953 waagent[1500]: pkts bytes target prot opt in out source destination Apr 12 18:29:36.590953 waagent[1500]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 12 18:29:36.590953 waagent[1500]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 12 18:29:36.590953 waagent[1500]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 12 18:29:36.591506 waagent[1500]: 2024-04-12T18:29:36.591451Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Apr 12 18:29:48.311602 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Apr 12 18:30:06.924199 update_engine[1303]: I0412 18:30:06.924144 1303 update_attempter.cc:509] Updating boot flags... Apr 12 18:30:18.596138 systemd[1]: Created slice system-sshd.slice. Apr 12 18:30:18.597242 systemd[1]: Started sshd@0-10.200.20.15:22-10.200.12.6:58418.service. Apr 12 18:30:19.256298 sshd[1619]: Accepted publickey for core from 10.200.12.6 port 58418 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:30:19.274915 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:19.278677 systemd-logind[1301]: New session 3 of user core. Apr 12 18:30:19.279459 systemd[1]: Started session-3.scope. Apr 12 18:30:19.634577 systemd[1]: Started sshd@1-10.200.20.15:22-10.200.12.6:58428.service. Apr 12 18:30:20.062185 sshd[1624]: Accepted publickey for core from 10.200.12.6 port 58428 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:30:20.063748 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:20.067726 systemd[1]: Started session-4.scope. Apr 12 18:30:20.068953 systemd-logind[1301]: New session 4 of user core. Apr 12 18:30:20.375368 sshd[1624]: pam_unix(sshd:session): session closed for user core Apr 12 18:30:20.377820 systemd[1]: sshd@1-10.200.20.15:22-10.200.12.6:58428.service: Deactivated successfully. Apr 12 18:30:20.378533 systemd[1]: session-4.scope: Deactivated successfully. Apr 12 18:30:20.379028 systemd-logind[1301]: Session 4 logged out. Waiting for processes to exit. Apr 12 18:30:20.379939 systemd-logind[1301]: Removed session 4. Apr 12 18:30:20.441882 systemd[1]: Started sshd@2-10.200.20.15:22-10.200.12.6:58430.service. 
Apr 12 18:30:20.836066 sshd[1630]: Accepted publickey for core from 10.200.12.6 port 58430 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:30:20.837530 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:20.840947 systemd-logind[1301]: New session 5 of user core. Apr 12 18:30:20.841379 systemd[1]: Started session-5.scope. Apr 12 18:30:21.129394 sshd[1630]: pam_unix(sshd:session): session closed for user core Apr 12 18:30:21.131405 systemd[1]: session-5.scope: Deactivated successfully. Apr 12 18:30:21.131949 systemd-logind[1301]: Session 5 logged out. Waiting for processes to exit. Apr 12 18:30:21.132088 systemd[1]: sshd@2-10.200.20.15:22-10.200.12.6:58430.service: Deactivated successfully. Apr 12 18:30:21.132978 systemd-logind[1301]: Removed session 5. Apr 12 18:30:21.196248 systemd[1]: Started sshd@3-10.200.20.15:22-10.200.12.6:58444.service. Apr 12 18:30:21.594943 sshd[1636]: Accepted publickey for core from 10.200.12.6 port 58444 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:30:21.596487 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:21.600055 systemd-logind[1301]: New session 6 of user core. Apr 12 18:30:21.600529 systemd[1]: Started session-6.scope. Apr 12 18:30:21.895557 sshd[1636]: pam_unix(sshd:session): session closed for user core Apr 12 18:30:21.897789 systemd[1]: sshd@3-10.200.20.15:22-10.200.12.6:58444.service: Deactivated successfully. Apr 12 18:30:21.898463 systemd[1]: session-6.scope: Deactivated successfully. Apr 12 18:30:21.898998 systemd-logind[1301]: Session 6 logged out. Waiting for processes to exit. Apr 12 18:30:21.899828 systemd-logind[1301]: Removed session 6. Apr 12 18:30:21.965187 systemd[1]: Started sshd@4-10.200.20.15:22-10.200.12.6:58446.service. 
Apr 12 18:30:22.362168 sshd[1642]: Accepted publickey for core from 10.200.12.6 port 58446 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:30:22.363470 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:22.367258 systemd-logind[1301]: New session 7 of user core. Apr 12 18:30:22.367665 systemd[1]: Started session-7.scope. Apr 12 18:30:22.909215 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 12 18:30:22.909413 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Apr 12 18:30:23.641528 systemd[1]: Starting docker.service... Apr 12 18:30:23.693240 env[1660]: time="2024-04-12T18:30:23.693187728Z" level=info msg="Starting up" Apr 12 18:30:23.694217 env[1660]: time="2024-04-12T18:30:23.694192106Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:30:23.694217 env[1660]: time="2024-04-12T18:30:23.694214666Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:30:23.694302 env[1660]: time="2024-04-12T18:30:23.694233426Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:30:23.694302 env[1660]: time="2024-04-12T18:30:23.694243225Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:30:23.695545 env[1660]: time="2024-04-12T18:30:23.695520918Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:30:23.695545 env[1660]: time="2024-04-12T18:30:23.695540478Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:30:23.695651 env[1660]: time="2024-04-12T18:30:23.695552237Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:30:23.695651 env[1660]: time="2024-04-12T18:30:23.695562357Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:30:23.769069 env[1660]: time="2024-04-12T18:30:23.769022027Z" level=info msg="Loading containers: start." Apr 12 18:30:23.961082 kernel: Initializing XFRM netlink socket Apr 12 18:30:23.999232 env[1660]: time="2024-04-12T18:30:23.999194826Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Apr 12 18:30:24.144032 systemd-networkd[1460]: docker0: Link UP Apr 12 18:30:24.164545 env[1660]: time="2024-04-12T18:30:24.164510345Z" level=info msg="Loading containers: done." Apr 12 18:30:24.173370 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2724879093-merged.mount: Deactivated successfully. Apr 12 18:30:24.187167 env[1660]: time="2024-04-12T18:30:24.187123234Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 12 18:30:24.187338 env[1660]: time="2024-04-12T18:30:24.187315950Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Apr 12 18:30:24.187439 env[1660]: time="2024-04-12T18:30:24.187419708Z" level=info msg="Daemon has completed initialization" Apr 12 18:30:24.227309 systemd[1]: Started docker.service. Apr 12 18:30:24.233384 env[1660]: time="2024-04-12T18:30:24.233140316Z" level=info msg="API listen on /run/docker.sock" Apr 12 18:30:24.248880 systemd[1]: Reloading. 
Apr 12 18:30:24.313701 /usr/lib/systemd/system-generators/torcx-generator[1788]: time="2024-04-12T18:30:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:30:24.314145 /usr/lib/systemd/system-generators/torcx-generator[1788]: time="2024-04-12T18:30:24Z" level=info msg="torcx already run" Apr 12 18:30:24.384857 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:30:24.384878 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:30:24.400103 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:30:24.481117 systemd[1]: Started kubelet.service. Apr 12 18:30:24.550258 kubelet[1848]: E0412 18:30:24.550133 1848 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:30:24.555563 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:30:24.555684 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 12 18:30:28.906314 env[1311]: time="2024-04-12T18:30:28.906235469Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.8\"" Apr 12 18:30:29.816965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2805764452.mount: Deactivated successfully. Apr 12 18:30:31.574666 env[1311]: time="2024-04-12T18:30:31.574610701Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:31.583336 env[1311]: time="2024-04-12T18:30:31.583297791Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:883d43b86efe04c7ca7bcd566f873179fa9c1dbceb67e32cd5d30213c3bc17de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:31.588768 env[1311]: time="2024-04-12T18:30:31.588732977Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:31.594266 env[1311]: time="2024-04-12T18:30:31.594231642Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:7e7f3c806333528451a1e0bfdf17da0341adaea7d50a703db9c2005c474a97b9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:31.595147 env[1311]: time="2024-04-12T18:30:31.595120226Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.8\" returns image reference \"sha256:883d43b86efe04c7ca7bcd566f873179fa9c1dbceb67e32cd5d30213c3bc17de\"" Apr 12 18:30:31.604791 env[1311]: time="2024-04-12T18:30:31.604756820Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.8\"" Apr 12 18:30:33.439443 env[1311]: time="2024-04-12T18:30:33.439396651Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Apr 12 18:30:33.453728 env[1311]: time="2024-04-12T18:30:33.453667016Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7beedd93d8e53aab4b98613d37758450bbbac01a94f42cdb7670da900d1e11d8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:33.468102 env[1311]: time="2024-04-12T18:30:33.467550268Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:33.481784 env[1311]: time="2024-04-12T18:30:33.481738715Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:f3d0e8da9d1532e081e719a985e89a0cfe1a29d127773ad8e2c2fee1dd10fd00,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:33.482677 env[1311]: time="2024-04-12T18:30:33.482647300Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.8\" returns image reference \"sha256:7beedd93d8e53aab4b98613d37758450bbbac01a94f42cdb7670da900d1e11d8\"" Apr 12 18:30:33.491583 env[1311]: time="2024-04-12T18:30:33.491547753Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.8\"" Apr 12 18:30:34.580862 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 12 18:30:34.581041 systemd[1]: Stopped kubelet.service. Apr 12 18:30:34.582437 systemd[1]: Started kubelet.service. 
Apr 12 18:30:34.631686 kubelet[1876]: E0412 18:30:34.631635 1876 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:30:34.633862 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:30:34.634002 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:30:34.826158 env[1311]: time="2024-04-12T18:30:34.826115235Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:34.839539 env[1311]: time="2024-04-12T18:30:34.839216625Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:36dcd04414a4b2b645ad6da4fd60a5d1479f6eb9da01a928082abb025958a687,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:34.845237 env[1311]: time="2024-04-12T18:30:34.845199409Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:34.853668 env[1311]: time="2024-04-12T18:30:34.853643874Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:4d61604f259d3c91d8b3ec7a6a999f5eae9aff371567151cd5165eaa698c6d7b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:34.854497 env[1311]: time="2024-04-12T18:30:34.854472781Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.8\" returns image reference \"sha256:36dcd04414a4b2b645ad6da4fd60a5d1479f6eb9da01a928082abb025958a687\"" Apr 12 18:30:34.863416 env[1311]: time="2024-04-12T18:30:34.863386958Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.28.8\"" Apr 12 18:30:36.033282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount887804001.mount: Deactivated successfully. Apr 12 18:30:36.807914 env[1311]: time="2024-04-12T18:30:36.807859873Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:36.816964 env[1311]: time="2024-04-12T18:30:36.816900375Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:837f825eec6c170d5e5bbfbd7bb0a4afac97759d0f0c57b80e4712d417fd690b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:36.821874 env[1311]: time="2024-04-12T18:30:36.821821420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:36.826373 env[1311]: time="2024-04-12T18:30:36.826333912Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:9e9dd46799712c58e1a49f973374ffa9ad4e5a6175896e5d805a8738bf5c5865,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:36.826821 env[1311]: time="2024-04-12T18:30:36.826795905Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.8\" returns image reference \"sha256:837f825eec6c170d5e5bbfbd7bb0a4afac97759d0f0c57b80e4712d417fd690b\"" Apr 12 18:30:36.835170 env[1311]: time="2024-04-12T18:30:36.835139657Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 12 18:30:37.474017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount297176564.mount: Deactivated successfully. 
Apr 12 18:30:37.518367 env[1311]: time="2024-04-12T18:30:37.518314040Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:37.526717 env[1311]: time="2024-04-12T18:30:37.526685556Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:37.531104 env[1311]: time="2024-04-12T18:30:37.531078491Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:37.536631 env[1311]: time="2024-04-12T18:30:37.536591169Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:37.537325 env[1311]: time="2024-04-12T18:30:37.537297558Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Apr 12 18:30:37.545965 env[1311]: time="2024-04-12T18:30:37.545928030Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Apr 12 18:30:38.236324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4161259254.mount: Deactivated successfully. 
Apr 12 18:30:41.781595 env[1311]: time="2024-04-12T18:30:41.781551621Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:41.789995 env[1311]: time="2024-04-12T18:30:41.789939908Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:41.795228 env[1311]: time="2024-04-12T18:30:41.795193797Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:41.801381 env[1311]: time="2024-04-12T18:30:41.801350794Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:41.802413 env[1311]: time="2024-04-12T18:30:41.802388260Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace\"" Apr 12 18:30:41.811359 env[1311]: time="2024-04-12T18:30:41.811282180Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Apr 12 18:30:42.516050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3664435935.mount: Deactivated successfully. 
Apr 12 18:30:42.982768 env[1311]: time="2024-04-12T18:30:42.982715632Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:42.990315 env[1311]: time="2024-04-12T18:30:42.990272053Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:42.994430 env[1311]: time="2024-04-12T18:30:42.994394038Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:42.998475 env[1311]: time="2024-04-12T18:30:42.998441025Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:42.999041 env[1311]: time="2024-04-12T18:30:42.999013578Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Apr 12 18:30:44.830850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 12 18:30:44.831028 systemd[1]: Stopped kubelet.service. Apr 12 18:30:44.832404 systemd[1]: Started kubelet.service. 
Apr 12 18:30:44.880701 kubelet[1960]: E0412 18:30:44.880662 1960 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:30:44.882475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:30:44.882605 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:30:47.261805 systemd[1]: Stopped kubelet.service. Apr 12 18:30:47.276752 systemd[1]: Reloading. Apr 12 18:30:47.337215 /usr/lib/systemd/system-generators/torcx-generator[1990]: time="2024-04-12T18:30:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:30:47.339161 /usr/lib/systemd/system-generators/torcx-generator[1990]: time="2024-04-12T18:30:47Z" level=info msg="torcx already run" Apr 12 18:30:47.407823 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:30:47.407986 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:30:47.423016 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:30:47.507640 systemd[1]: Started kubelet.service. 
Apr 12 18:30:47.552103 kubelet[2050]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 12 18:30:47.552103 kubelet[2050]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 12 18:30:47.552103 kubelet[2050]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 12 18:30:47.552492 kubelet[2050]: I0412 18:30:47.552075 2050 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 12 18:30:48.914360 kubelet[2050]: I0412 18:30:48.914322 2050 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Apr 12 18:30:48.914360 kubelet[2050]: I0412 18:30:48.914351 2050 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 12 18:30:48.914685 kubelet[2050]: I0412 18:30:48.914569 2050 server.go:895] "Client rotation is on, will bootstrap in background"
Apr 12 18:30:48.920162 kubelet[2050]: I0412 18:30:48.920133 2050 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 12 18:30:48.920357 kubelet[2050]: E0412 18:30:48.920336 2050 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.15:6443: connect: connection refused
Apr 12 18:30:48.923872 kubelet[2050]: W0412 18:30:48.923852 2050 machine.go:65] Cannot read vendor id correctly, set empty.
Apr 12 18:30:48.924305 kubelet[2050]: I0412 18:30:48.924287 2050 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 12 18:30:48.924473 kubelet[2050]: I0412 18:30:48.924459 2050 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 12 18:30:48.924611 kubelet[2050]: I0412 18:30:48.924599 2050 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 12 18:30:48.924698 kubelet[2050]: I0412 18:30:48.924619 2050 topology_manager.go:138] "Creating topology manager with none policy"
Apr 12 18:30:48.924698 kubelet[2050]: I0412 18:30:48.924628 2050 container_manager_linux.go:301] "Creating device plugin manager"
Apr 12 18:30:48.924752 kubelet[2050]: I0412 18:30:48.924710 2050 state_mem.go:36] "Initialized new in-memory state store"
Apr 12 18:30:48.924813 kubelet[2050]: I0412 18:30:48.924799 2050 kubelet.go:393] "Attempting to sync node with API server"
Apr 12 18:30:48.924855 kubelet[2050]: I0412 18:30:48.924817 2050 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 12 18:30:48.924855 kubelet[2050]: I0412 18:30:48.924833 2050 kubelet.go:309] "Adding apiserver pod source"
Apr 12 18:30:48.924855 kubelet[2050]: I0412 18:30:48.924845 2050 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 12 18:30:48.925492 kubelet[2050]: W0412 18:30:48.925433 2050 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.3-a-58e6b5da18&limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused
Apr 12 18:30:48.925620 kubelet[2050]: E0412 18:30:48.925496 2050 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.3-a-58e6b5da18&limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused
Apr 12 18:30:48.925620 kubelet[2050]: W0412 18:30:48.925546 2050 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused
Apr 12 18:30:48.925620 kubelet[2050]: E0412 18:30:48.925568 2050 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused
Apr 12 18:30:48.925697 kubelet[2050]: I0412 18:30:48.925642 2050 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Apr 12 18:30:48.925877 kubelet[2050]: W0412 18:30:48.925840 2050 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 12 18:30:48.926274 kubelet[2050]: I0412 18:30:48.926252 2050 server.go:1232] "Started kubelet"
Apr 12 18:30:48.930166 kubelet[2050]: E0412 18:30:48.930151 2050 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Apr 12 18:30:48.930275 kubelet[2050]: E0412 18:30:48.930264 2050 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 12 18:30:48.930587 kubelet[2050]: E0412 18:30:48.930516 2050 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.3-a-58e6b5da18.17c59be6d071a680", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.3-a-58e6b5da18", UID:"ci-3510.3.3-a-58e6b5da18", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.3-a-58e6b5da18"}, FirstTimestamp:time.Date(2024, time.April, 12, 18, 30, 48, 926234240, time.Local), LastTimestamp:time.Date(2024, time.April, 12, 18, 30, 48, 926234240, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.3-a-58e6b5da18"}': 'Post "https://10.200.20.15:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.15:6443: connect: connection refused'(may retry after sleeping)
Apr 12 18:30:48.931631 kubelet[2050]: I0412 18:30:48.931618 2050 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Apr 12 18:30:48.932474 kubelet[2050]: I0412 18:30:48.932461 2050 server.go:462] "Adding debug handlers to kubelet server"
Apr 12 18:30:48.933521 kubelet[2050]: I0412 18:30:48.933499 2050 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Apr 12 18:30:48.933783 kubelet[2050]: I0412 18:30:48.933773 2050 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 12 18:30:48.933863 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Apr 12 18:30:48.934142 kubelet[2050]: I0412 18:30:48.934124 2050 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 12 18:30:48.934899 kubelet[2050]: I0412 18:30:48.934871 2050 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 12 18:30:48.935493 kubelet[2050]: I0412 18:30:48.935478 2050 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Apr 12 18:30:48.935595 kubelet[2050]: I0412 18:30:48.935550 2050 reconciler_new.go:29] "Reconciler: start to sync state"
Apr 12 18:30:48.935799 kubelet[2050]: E0412 18:30:48.935786 2050 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-58e6b5da18?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="200ms"
Apr 12 18:30:48.936337 kubelet[2050]: W0412 18:30:48.936087 2050 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused
Apr 12 18:30:48.936337 kubelet[2050]: E0412 18:30:48.936131 2050 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused
Apr 12 18:30:49.010127 kubelet[2050]: I0412 18:30:49.010096 2050 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 12 18:30:49.010127 kubelet[2050]: I0412 18:30:49.010116 2050 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 12 18:30:49.010127 kubelet[2050]: I0412 18:30:49.010131 2050 state_mem.go:36] "Initialized new in-memory state store"
Apr 12 18:30:49.011844 kubelet[2050]: I0412 18:30:49.011821 2050 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 12 18:30:49.013528 kubelet[2050]: I0412 18:30:49.013499 2050 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 12 18:30:49.013528 kubelet[2050]: I0412 18:30:49.013525 2050 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 12 18:30:49.014384 kubelet[2050]: I0412 18:30:49.013540 2050 kubelet.go:2303] "Starting kubelet main sync loop"
Apr 12 18:30:49.014384 kubelet[2050]: E0412 18:30:49.013579 2050 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 12 18:30:49.014384 kubelet[2050]: W0412 18:30:49.014358 2050 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused
Apr 12 18:30:49.014488 kubelet[2050]: E0412 18:30:49.014388 2050 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused
Apr 12 18:30:49.016244 kubelet[2050]: I0412 18:30:49.016213 2050 policy_none.go:49] "None policy: Start"
Apr 12 18:30:49.016811 kubelet[2050]: I0412 18:30:49.016791 2050 memory_manager.go:169] "Starting memorymanager" policy="None"
Apr 12 18:30:49.016889 kubelet[2050]: I0412 18:30:49.016830 2050 state_mem.go:35] "Initializing new in-memory state store"
Apr 12 18:30:49.024643 systemd[1]: Created slice kubepods.slice.
Apr 12 18:30:49.028690 systemd[1]: Created slice kubepods-burstable.slice.
Apr 12 18:30:49.031399 systemd[1]: Created slice kubepods-besteffort.slice.
Apr 12 18:30:49.036757 kubelet[2050]: I0412 18:30:49.036723 2050 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.037510 kubelet[2050]: E0412 18:30:49.037172 2050 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.037863 kubelet[2050]: I0412 18:30:49.037838 2050 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 12 18:30:49.038043 kubelet[2050]: I0412 18:30:49.038024 2050 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 12 18:30:49.039316 kubelet[2050]: E0412 18:30:49.038788 2050 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.3-a-58e6b5da18\" not found"
Apr 12 18:30:49.113953 kubelet[2050]: I0412 18:30:49.113923 2050 topology_manager.go:215] "Topology Admit Handler" podUID="b7e5aa51624dd658e12a88a954a0cfe1" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.115533 kubelet[2050]: I0412 18:30:49.115512 2050 topology_manager.go:215] "Topology Admit Handler" podUID="b369eb2026983d89ec0c8393237a0a6e" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.117194 kubelet[2050]: I0412 18:30:49.117167 2050 topology_manager.go:215] "Topology Admit Handler" podUID="aa45750cd3001aa72a94700152b1153b" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.121886 systemd[1]: Created slice kubepods-burstable-podb7e5aa51624dd658e12a88a954a0cfe1.slice.
Apr 12 18:30:49.132748 systemd[1]: Created slice kubepods-burstable-podaa45750cd3001aa72a94700152b1153b.slice.
Apr 12 18:30:49.136704 kubelet[2050]: E0412 18:30:49.136678 2050 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-58e6b5da18?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="400ms"
Apr 12 18:30:49.139946 systemd[1]: Created slice kubepods-burstable-podb369eb2026983d89ec0c8393237a0a6e.slice.
Apr 12 18:30:49.238746 kubelet[2050]: I0412 18:30:49.237154 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b369eb2026983d89ec0c8393237a0a6e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.3-a-58e6b5da18\" (UID: \"b369eb2026983d89ec0c8393237a0a6e\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.238746 kubelet[2050]: I0412 18:30:49.237198 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa45750cd3001aa72a94700152b1153b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.3-a-58e6b5da18\" (UID: \"aa45750cd3001aa72a94700152b1153b\") " pod="kube-system/kube-scheduler-ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.238746 kubelet[2050]: I0412 18:30:49.237233 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b369eb2026983d89ec0c8393237a0a6e-ca-certs\") pod \"kube-controller-manager-ci-3510.3.3-a-58e6b5da18\" (UID: \"b369eb2026983d89ec0c8393237a0a6e\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.238746 kubelet[2050]: I0412 18:30:49.237254 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7e5aa51624dd658e12a88a954a0cfe1-k8s-certs\") pod \"kube-apiserver-ci-3510.3.3-a-58e6b5da18\" (UID: \"b7e5aa51624dd658e12a88a954a0cfe1\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.238746 kubelet[2050]: I0412 18:30:49.237275 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7e5aa51624dd658e12a88a954a0cfe1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.3-a-58e6b5da18\" (UID: \"b7e5aa51624dd658e12a88a954a0cfe1\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.238978 kubelet[2050]: I0412 18:30:49.237301 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b369eb2026983d89ec0c8393237a0a6e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.3-a-58e6b5da18\" (UID: \"b369eb2026983d89ec0c8393237a0a6e\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.238978 kubelet[2050]: I0412 18:30:49.237320 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b369eb2026983d89ec0c8393237a0a6e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.3-a-58e6b5da18\" (UID: \"b369eb2026983d89ec0c8393237a0a6e\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.238978 kubelet[2050]: I0412 18:30:49.237338 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b369eb2026983d89ec0c8393237a0a6e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.3-a-58e6b5da18\" (UID: \"b369eb2026983d89ec0c8393237a0a6e\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.238978 kubelet[2050]: I0412 18:30:49.237357 2050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7e5aa51624dd658e12a88a954a0cfe1-ca-certs\") pod \"kube-apiserver-ci-3510.3.3-a-58e6b5da18\" (UID: \"b7e5aa51624dd658e12a88a954a0cfe1\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.238978 kubelet[2050]: I0412 18:30:49.238911 2050 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.239338 kubelet[2050]: E0412 18:30:49.239319 2050 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.431861 env[1311]: time="2024-04-12T18:30:49.431613094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.3-a-58e6b5da18,Uid:b7e5aa51624dd658e12a88a954a0cfe1,Namespace:kube-system,Attempt:0,}"
Apr 12 18:30:49.438921 env[1311]: time="2024-04-12T18:30:49.438882613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.3-a-58e6b5da18,Uid:aa45750cd3001aa72a94700152b1153b,Namespace:kube-system,Attempt:0,}"
Apr 12 18:30:49.442799 env[1311]: time="2024-04-12T18:30:49.442668530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.3-a-58e6b5da18,Uid:b369eb2026983d89ec0c8393237a0a6e,Namespace:kube-system,Attempt:0,}"
Apr 12 18:30:49.538240 kubelet[2050]: E0412 18:30:49.537793 2050 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-58e6b5da18?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="800ms"
Apr 12 18:30:49.641811 kubelet[2050]: I0412 18:30:49.641497 2050 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.641811 kubelet[2050]: E0412 18:30:49.641786 2050 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:49.751906 kubelet[2050]: W0412 18:30:49.751810 2050 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.3-a-58e6b5da18&limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused
Apr 12 18:30:49.751906 kubelet[2050]: E0412 18:30:49.751871 2050 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.3-a-58e6b5da18&limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused
Apr 12 18:30:49.962830 kubelet[2050]: W0412 18:30:49.962793 2050 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused
Apr 12 18:30:49.962830 kubelet[2050]: E0412 18:30:49.962835 2050 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused
Apr 12 18:30:49.964138 kubelet[2050]: W0412 18:30:49.964039 2050 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused
Apr 12 18:30:49.964216 kubelet[2050]: E0412 18:30:49.964144 2050 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused
Apr 12 18:30:50.098052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2965653894.mount: Deactivated successfully.
Apr 12 18:30:50.128707 env[1311]: time="2024-04-12T18:30:50.128656537Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:30:50.149210 env[1311]: time="2024-04-12T18:30:50.149172713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:30:50.159828 env[1311]: time="2024-04-12T18:30:50.159795197Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:30:50.164382 env[1311]: time="2024-04-12T18:30:50.164355667Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:30:50.173171 env[1311]: time="2024-04-12T18:30:50.173139291Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:30:50.182541 env[1311]: time="2024-04-12T18:30:50.182510829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:30:50.188716 env[1311]: time="2024-04-12T18:30:50.188677242Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:30:50.192196 env[1311]: time="2024-04-12T18:30:50.192169563Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:30:50.202690 env[1311]: time="2024-04-12T18:30:50.202662729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:30:50.211289 env[1311]: time="2024-04-12T18:30:50.211261595Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:30:50.227042 env[1311]: time="2024-04-12T18:30:50.226943224Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:30:50.241634 env[1311]: time="2024-04-12T18:30:50.241592104Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:30:50.262879 env[1311]: time="2024-04-12T18:30:50.258016924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:30:50.262879 env[1311]: time="2024-04-12T18:30:50.258053764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:30:50.262879 env[1311]: time="2024-04-12T18:30:50.258100883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:30:50.262879 env[1311]: time="2024-04-12T18:30:50.258226882Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f545a5b6fc5ee70b2669259dba5985b5126bddc31a6243cb43ba892d20ee0ff pid=2089 runtime=io.containerd.runc.v2
Apr 12 18:30:50.280625 systemd[1]: Started cri-containerd-7f545a5b6fc5ee70b2669259dba5985b5126bddc31a6243cb43ba892d20ee0ff.scope.
Apr 12 18:30:50.317265 env[1311]: time="2024-04-12T18:30:50.317220677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.3-a-58e6b5da18,Uid:b7e5aa51624dd658e12a88a954a0cfe1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f545a5b6fc5ee70b2669259dba5985b5126bddc31a6243cb43ba892d20ee0ff\""
Apr 12 18:30:50.321300 env[1311]: time="2024-04-12T18:30:50.321242353Z" level=info msg="CreateContainer within sandbox \"7f545a5b6fc5ee70b2669259dba5985b5126bddc31a6243cb43ba892d20ee0ff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 12 18:30:50.326906 env[1311]: time="2024-04-12T18:30:50.326795253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:30:50.326906 env[1311]: time="2024-04-12T18:30:50.326833212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:30:50.326906 env[1311]: time="2024-04-12T18:30:50.326844132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:30:50.329458 env[1311]: time="2024-04-12T18:30:50.328416875Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/66349284332c8ab88b8003a4e960e02d0405fa34da8adc6b53d031f744b3f00a pid=2131 runtime=io.containerd.runc.v2
Apr 12 18:30:50.338186 env[1311]: time="2024-04-12T18:30:50.338086649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:30:50.338186 env[1311]: time="2024-04-12T18:30:50.338159089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:30:50.338385 env[1311]: time="2024-04-12T18:30:50.338170249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:30:50.339027 kubelet[2050]: E0412 18:30:50.339000 2050 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-58e6b5da18?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="1.6s"
Apr 12 18:30:50.339185 env[1311]: time="2024-04-12T18:30:50.338650563Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a17a35b31216656b8116fa9d81c8ed067214fcb4310da17f898f0d47b58ea52 pid=2153 runtime=io.containerd.runc.v2
Apr 12 18:30:50.346201 systemd[1]: Started cri-containerd-66349284332c8ab88b8003a4e960e02d0405fa34da8adc6b53d031f744b3f00a.scope.
Apr 12 18:30:50.356534 systemd[1]: Started cri-containerd-6a17a35b31216656b8116fa9d81c8ed067214fcb4310da17f898f0d47b58ea52.scope.
Apr 12 18:30:50.364708 kubelet[2050]: W0412 18:30:50.364602 2050 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused
Apr 12 18:30:50.364708 kubelet[2050]: E0412 18:30:50.364671 2050 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused
Apr 12 18:30:50.371999 env[1311]: time="2024-04-12T18:30:50.371931520Z" level=info msg="CreateContainer within sandbox \"7f545a5b6fc5ee70b2669259dba5985b5126bddc31a6243cb43ba892d20ee0ff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0805b53d664844ee4fe519421395cb581c584800d8e6929759d76558ab7627e0\""
Apr 12 18:30:50.373419 env[1311]: time="2024-04-12T18:30:50.373383544Z" level=info msg="StartContainer for \"0805b53d664844ee4fe519421395cb581c584800d8e6929759d76558ab7627e0\""
Apr 12 18:30:50.396368 systemd[1]: Started cri-containerd-0805b53d664844ee4fe519421395cb581c584800d8e6929759d76558ab7627e0.scope.
Apr 12 18:30:50.410084 env[1311]: time="2024-04-12T18:30:50.410023504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.3-a-58e6b5da18,Uid:b369eb2026983d89ec0c8393237a0a6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a17a35b31216656b8116fa9d81c8ed067214fcb4310da17f898f0d47b58ea52\""
Apr 12 18:30:50.412741 env[1311]: time="2024-04-12T18:30:50.412703794Z" level=info msg="CreateContainer within sandbox \"6a17a35b31216656b8116fa9d81c8ed067214fcb4310da17f898f0d47b58ea52\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 12 18:30:50.416298 env[1311]: time="2024-04-12T18:30:50.416267715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.3-a-58e6b5da18,Uid:aa45750cd3001aa72a94700152b1153b,Namespace:kube-system,Attempt:0,} returns sandbox id \"66349284332c8ab88b8003a4e960e02d0405fa34da8adc6b53d031f744b3f00a\""
Apr 12 18:30:50.420383 env[1311]: time="2024-04-12T18:30:50.420351991Z" level=info msg="CreateContainer within sandbox \"66349284332c8ab88b8003a4e960e02d0405fa34da8adc6b53d031f744b3f00a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 12 18:30:50.444631 kubelet[2050]: I0412 18:30:50.444600 2050 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:50.444961 kubelet[2050]: E0412 18:30:50.444943 2050 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-3510.3.3-a-58e6b5da18"
Apr 12 18:30:50.451143 env[1311]: time="2024-04-12T18:30:50.451093975Z" level=info msg="StartContainer for \"0805b53d664844ee4fe519421395cb581c584800d8e6929759d76558ab7627e0\" returns successfully"
Apr 12 18:30:50.489514 env[1311]: time="2024-04-12T18:30:50.489395956Z" level=info msg="CreateContainer within sandbox \"6a17a35b31216656b8116fa9d81c8ed067214fcb4310da17f898f0d47b58ea52\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"62a034db008e62630ad506d111b9495d47a504f841bc75949d58f0be2b5ba872\""
Apr 12 18:30:50.490346 env[1311]: time="2024-04-12T18:30:50.490294947Z" level=info msg="StartContainer for \"62a034db008e62630ad506d111b9495d47a504f841bc75949d58f0be2b5ba872\""
Apr 12 18:30:50.502982 env[1311]: time="2024-04-12T18:30:50.502934169Z" level=info msg="CreateContainer within sandbox \"66349284332c8ab88b8003a4e960e02d0405fa34da8adc6b53d031f744b3f00a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ff323ed27b79d4a6a330635d551034000ac4396f47e846366e5d529cb27e31e6\""
Apr 12 18:30:50.504671 env[1311]: time="2024-04-12T18:30:50.503660201Z" level=info msg="StartContainer for \"ff323ed27b79d4a6a330635d551034000ac4396f47e846366e5d529cb27e31e6\""
Apr 12 18:30:50.510289 systemd[1]: Started cri-containerd-62a034db008e62630ad506d111b9495d47a504f841bc75949d58f0be2b5ba872.scope.
Apr 12 18:30:50.529676 systemd[1]: Started cri-containerd-ff323ed27b79d4a6a330635d551034000ac4396f47e846366e5d529cb27e31e6.scope.
Apr 12 18:30:50.572318 env[1311]: time="2024-04-12T18:30:50.572259451Z" level=info msg="StartContainer for \"62a034db008e62630ad506d111b9495d47a504f841bc75949d58f0be2b5ba872\" returns successfully"
Apr 12 18:30:50.621133 env[1311]: time="2024-04-12T18:30:50.621075878Z" level=info msg="StartContainer for \"ff323ed27b79d4a6a330635d551034000ac4396f47e846366e5d529cb27e31e6\" returns successfully"
Apr 12 18:30:51.094176 systemd[1]: run-containerd-runc-k8s.io-7f545a5b6fc5ee70b2669259dba5985b5126bddc31a6243cb43ba892d20ee0ff-runc.AlzzL4.mount: Deactivated successfully.
Apr 12 18:30:52.046401 kubelet[2050]: I0412 18:30:52.046361 2050 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:53.308724 kubelet[2050]: E0412 18:30:53.308686 2050 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.3-a-58e6b5da18\" not found" node="ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:53.323958 kubelet[2050]: I0412 18:30:53.323925 2050 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:53.355495 kubelet[2050]: E0412 18:30:53.355467 2050 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-58e6b5da18\" not found" Apr 12 18:30:53.927272 kubelet[2050]: I0412 18:30:53.927236 2050 apiserver.go:52] "Watching apiserver" Apr 12 18:30:53.936085 kubelet[2050]: I0412 18:30:53.936049 2050 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Apr 12 18:30:56.088209 systemd[1]: Reloading. Apr 12 18:30:56.162731 /usr/lib/systemd/system-generators/torcx-generator[2342]: time="2024-04-12T18:30:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:30:56.163182 /usr/lib/systemd/system-generators/torcx-generator[2342]: time="2024-04-12T18:30:56Z" level=info msg="torcx already run" Apr 12 18:30:56.224297 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:30:56.224314 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Apr 12 18:30:56.239521 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:30:56.343355 kubelet[2050]: I0412 18:30:56.343240 2050 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:30:56.343888 systemd[1]: Stopping kubelet.service... Apr 12 18:30:56.363643 systemd[1]: kubelet.service: Deactivated successfully. Apr 12 18:30:56.363844 systemd[1]: Stopped kubelet.service. Apr 12 18:30:56.363894 systemd[1]: kubelet.service: Consumed 1.688s CPU time. Apr 12 18:30:56.365648 systemd[1]: Started kubelet.service. Apr 12 18:30:56.442037 kubelet[2402]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:30:56.442037 kubelet[2402]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:30:56.442037 kubelet[2402]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 12 18:30:56.442421 kubelet[2402]: I0412 18:30:56.442096 2402 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:30:56.446255 kubelet[2402]: I0412 18:30:56.446229 2402 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Apr 12 18:30:56.446388 kubelet[2402]: I0412 18:30:56.446378 2402 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:30:56.446604 kubelet[2402]: I0412 18:30:56.446591 2402 server.go:895] "Client rotation is on, will bootstrap in background" Apr 12 18:30:56.448175 kubelet[2402]: I0412 18:30:56.448159 2402 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 12 18:30:56.449278 kubelet[2402]: I0412 18:30:56.449252 2402 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:30:56.457366 kubelet[2402]: W0412 18:30:56.454689 2402 machine.go:65] Cannot read vendor id correctly, set empty. Apr 12 18:30:56.457366 kubelet[2402]: I0412 18:30:56.455882 2402 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 12 18:30:56.457366 kubelet[2402]: I0412 18:30:56.456124 2402 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:30:56.458451 kubelet[2402]: I0412 18:30:56.456502 2402 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 12 18:30:56.459944 kubelet[2402]: I0412 18:30:56.459922 2402 topology_manager.go:138] "Creating topology manager with none policy" Apr 12 18:30:56.459992 kubelet[2402]: I0412 18:30:56.459950 2402 container_manager_linux.go:301] "Creating device plugin manager" Apr 12 18:30:56.460020 kubelet[2402]: I0412 
18:30:56.460000 2402 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:30:56.460241 kubelet[2402]: I0412 18:30:56.460121 2402 kubelet.go:393] "Attempting to sync node with API server" Apr 12 18:30:56.460241 kubelet[2402]: I0412 18:30:56.460141 2402 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:30:56.460241 kubelet[2402]: I0412 18:30:56.460159 2402 kubelet.go:309] "Adding apiserver pod source" Apr 12 18:30:56.460241 kubelet[2402]: I0412 18:30:56.460180 2402 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:30:56.467685 kubelet[2402]: I0412 18:30:56.466340 2402 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:30:56.467685 kubelet[2402]: I0412 18:30:56.466762 2402 server.go:1232] "Started kubelet" Apr 12 18:30:56.472684 kubelet[2402]: I0412 18:30:56.472659 2402 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:30:56.482262 kubelet[2402]: I0412 18:30:56.482237 2402 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:30:56.482821 kubelet[2402]: I0412 18:30:56.482797 2402 server.go:462] "Adding debug handlers to kubelet server" Apr 12 18:30:56.483739 kubelet[2402]: I0412 18:30:56.483717 2402 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Apr 12 18:30:56.483881 kubelet[2402]: I0412 18:30:56.483863 2402 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 12 18:30:56.485136 kubelet[2402]: I0412 18:30:56.485119 2402 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 12 18:30:56.490102 kubelet[2402]: I0412 18:30:56.485432 2402 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Apr 12 18:30:56.490102 kubelet[2402]: I0412 18:30:56.485565 2402 reconciler_new.go:29] "Reconciler: start to sync state" Apr 12 18:30:56.490102 
kubelet[2402]: E0412 18:30:56.486199 2402 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Apr 12 18:30:56.490102 kubelet[2402]: E0412 18:30:56.486220 2402 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:30:56.495158 kubelet[2402]: I0412 18:30:56.492342 2402 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 12 18:30:56.495158 kubelet[2402]: I0412 18:30:56.493228 2402 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 12 18:30:56.495158 kubelet[2402]: I0412 18:30:56.493251 2402 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 12 18:30:56.495158 kubelet[2402]: I0412 18:30:56.493265 2402 kubelet.go:2303] "Starting kubelet main sync loop" Apr 12 18:30:56.495158 kubelet[2402]: E0412 18:30:56.493316 2402 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:30:56.557946 kubelet[2402]: I0412 18:30:56.557923 2402 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:30:56.558133 kubelet[2402]: I0412 18:30:56.558121 2402 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:30:56.558203 kubelet[2402]: I0412 18:30:56.558194 2402 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:30:56.558399 kubelet[2402]: I0412 18:30:56.558389 2402 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 12 18:30:56.558512 kubelet[2402]: I0412 18:30:56.558501 2402 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 12 18:30:56.558570 kubelet[2402]: I0412 18:30:56.558561 2402 policy_none.go:49] "None policy: Start" Apr 12 18:30:56.559354 kubelet[2402]: I0412 18:30:56.559341 2402 
memory_manager.go:169] "Starting memorymanager" policy="None" Apr 12 18:30:56.559455 kubelet[2402]: I0412 18:30:56.559445 2402 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:30:56.559709 kubelet[2402]: I0412 18:30:56.559697 2402 state_mem.go:75] "Updated machine memory state" Apr 12 18:30:56.563327 kubelet[2402]: I0412 18:30:56.563309 2402 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:30:56.565614 kubelet[2402]: I0412 18:30:56.565302 2402 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:30:56.588114 kubelet[2402]: I0412 18:30:56.588092 2402 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:56.595577 kubelet[2402]: I0412 18:30:56.594009 2402 topology_manager.go:215] "Topology Admit Handler" podUID="b7e5aa51624dd658e12a88a954a0cfe1" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:56.595577 kubelet[2402]: I0412 18:30:56.594120 2402 topology_manager.go:215] "Topology Admit Handler" podUID="b369eb2026983d89ec0c8393237a0a6e" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:56.595577 kubelet[2402]: I0412 18:30:56.594160 2402 topology_manager.go:215] "Topology Admit Handler" podUID="aa45750cd3001aa72a94700152b1153b" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:56.602556 sudo[2430]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 12 18:30:56.602753 sudo[2430]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Apr 12 18:30:56.604281 kubelet[2402]: W0412 18:30:56.604260 2402 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 12 18:30:56.607287 kubelet[2402]: W0412 18:30:56.607270 
2402 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 12 18:30:56.607466 kubelet[2402]: W0412 18:30:56.607452 2402 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 12 18:30:56.607927 kubelet[2402]: I0412 18:30:56.607899 2402 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:56.608087 kubelet[2402]: I0412 18:30:56.608053 2402 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:56.787293 kubelet[2402]: I0412 18:30:56.787261 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b369eb2026983d89ec0c8393237a0a6e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.3-a-58e6b5da18\" (UID: \"b369eb2026983d89ec0c8393237a0a6e\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:56.787482 kubelet[2402]: I0412 18:30:56.787471 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b369eb2026983d89ec0c8393237a0a6e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.3-a-58e6b5da18\" (UID: \"b369eb2026983d89ec0c8393237a0a6e\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:56.787582 kubelet[2402]: I0412 18:30:56.787564 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7e5aa51624dd658e12a88a954a0cfe1-ca-certs\") pod \"kube-apiserver-ci-3510.3.3-a-58e6b5da18\" (UID: \"b7e5aa51624dd658e12a88a954a0cfe1\") " 
pod="kube-system/kube-apiserver-ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:56.787663 kubelet[2402]: I0412 18:30:56.787655 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7e5aa51624dd658e12a88a954a0cfe1-k8s-certs\") pod \"kube-apiserver-ci-3510.3.3-a-58e6b5da18\" (UID: \"b7e5aa51624dd658e12a88a954a0cfe1\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:56.787759 kubelet[2402]: I0412 18:30:56.787750 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b369eb2026983d89ec0c8393237a0a6e-ca-certs\") pod \"kube-controller-manager-ci-3510.3.3-a-58e6b5da18\" (UID: \"b369eb2026983d89ec0c8393237a0a6e\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:56.787845 kubelet[2402]: I0412 18:30:56.787837 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b369eb2026983d89ec0c8393237a0a6e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.3-a-58e6b5da18\" (UID: \"b369eb2026983d89ec0c8393237a0a6e\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:56.787927 kubelet[2402]: I0412 18:30:56.787919 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b369eb2026983d89ec0c8393237a0a6e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.3-a-58e6b5da18\" (UID: \"b369eb2026983d89ec0c8393237a0a6e\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:56.788018 kubelet[2402]: I0412 18:30:56.788010 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b7e5aa51624dd658e12a88a954a0cfe1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.3-a-58e6b5da18\" (UID: \"b7e5aa51624dd658e12a88a954a0cfe1\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:56.788147 kubelet[2402]: I0412 18:30:56.788134 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa45750cd3001aa72a94700152b1153b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.3-a-58e6b5da18\" (UID: \"aa45750cd3001aa72a94700152b1153b\") " pod="kube-system/kube-scheduler-ci-3510.3.3-a-58e6b5da18" Apr 12 18:30:57.105695 sudo[2430]: pam_unix(sudo:session): session closed for user root Apr 12 18:30:57.466114 kubelet[2402]: I0412 18:30:57.466005 2402 apiserver.go:52] "Watching apiserver" Apr 12 18:30:57.486185 kubelet[2402]: I0412 18:30:57.486147 2402 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Apr 12 18:30:57.560561 kubelet[2402]: I0412 18:30:57.560506 2402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.3-a-58e6b5da18" podStartSLOduration=1.5604622099999998 podCreationTimestamp="2024-04-12 18:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:30:57.549845709 +0000 UTC m=+1.180026036" watchObservedRunningTime="2024-04-12 18:30:57.56046221 +0000 UTC m=+1.190642537" Apr 12 18:30:57.568566 kubelet[2402]: I0412 18:30:57.568526 2402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.3-a-58e6b5da18" podStartSLOduration=1.5684948140000001 podCreationTimestamp="2024-04-12 18:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:30:57.561502 +0000 UTC 
m=+1.191682327" watchObservedRunningTime="2024-04-12 18:30:57.568494814 +0000 UTC m=+1.198675141" Apr 12 18:30:57.576797 kubelet[2402]: I0412 18:30:57.576768 2402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.3-a-58e6b5da18" podStartSLOduration=1.576739377 podCreationTimestamp="2024-04-12 18:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:30:57.56899065 +0000 UTC m=+1.199170977" watchObservedRunningTime="2024-04-12 18:30:57.576739377 +0000 UTC m=+1.206919704" Apr 12 18:30:58.315728 sudo[1645]: pam_unix(sudo:session): session closed for user root Apr 12 18:30:58.407550 sshd[1642]: pam_unix(sshd:session): session closed for user core Apr 12 18:30:58.410320 systemd-logind[1301]: Session 7 logged out. Waiting for processes to exit. Apr 12 18:30:58.411427 systemd[1]: sshd@4-10.200.20.15:22-10.200.12.6:58446.service: Deactivated successfully. Apr 12 18:30:58.412150 systemd[1]: session-7.scope: Deactivated successfully. Apr 12 18:30:58.412325 systemd[1]: session-7.scope: Consumed 5.390s CPU time. Apr 12 18:30:58.412887 systemd-logind[1301]: Removed session 7. Apr 12 18:31:09.913209 kubelet[2402]: I0412 18:31:09.913176 2402 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 12 18:31:09.913884 env[1311]: time="2024-04-12T18:31:09.913848819Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 12 18:31:09.914373 kubelet[2402]: I0412 18:31:09.914356 2402 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 12 18:31:10.452347 kubelet[2402]: I0412 18:31:10.452314 2402 topology_manager.go:215] "Topology Admit Handler" podUID="cab4bec0-740d-47c2-9847-c4388e57b8dd" podNamespace="kube-system" podName="kube-proxy-k9h5h" Apr 12 18:31:10.457378 systemd[1]: Created slice kubepods-besteffort-podcab4bec0_740d_47c2_9847_c4388e57b8dd.slice. Apr 12 18:31:10.466475 kubelet[2402]: I0412 18:31:10.466450 2402 topology_manager.go:215] "Topology Admit Handler" podUID="9f85174c-79b1-4e8d-a606-b1c1283bf5be" podNamespace="kube-system" podName="cilium-zxwbm" Apr 12 18:31:10.470840 systemd[1]: Created slice kubepods-burstable-pod9f85174c_79b1_4e8d_a606_b1c1283bf5be.slice. Apr 12 18:31:10.479730 kubelet[2402]: W0412 18:31:10.479705 2402 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.3-a-58e6b5da18" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-58e6b5da18' and this object Apr 12 18:31:10.479992 kubelet[2402]: E0412 18:31:10.479979 2402 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.3-a-58e6b5da18" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-58e6b5da18' and this object Apr 12 18:31:10.480077 kubelet[2402]: W0412 18:31:10.479949 2402 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.3-a-58e6b5da18" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 
'ci-3510.3.3-a-58e6b5da18' and this object Apr 12 18:31:10.480146 kubelet[2402]: E0412 18:31:10.480137 2402 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.3-a-58e6b5da18" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-58e6b5da18' and this object Apr 12 18:31:10.480260 kubelet[2402]: W0412 18:31:10.480250 2402 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.3-a-58e6b5da18" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-58e6b5da18' and this object Apr 12 18:31:10.480395 kubelet[2402]: E0412 18:31:10.480383 2402 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.3-a-58e6b5da18" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-58e6b5da18' and this object Apr 12 18:31:10.555556 kubelet[2402]: I0412 18:31:10.555525 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cab4bec0-740d-47c2-9847-c4388e57b8dd-xtables-lock\") pod \"kube-proxy-k9h5h\" (UID: \"cab4bec0-740d-47c2-9847-c4388e57b8dd\") " pod="kube-system/kube-proxy-k9h5h" Apr 12 18:31:10.555741 kubelet[2402]: I0412 18:31:10.555729 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q59pr\" (UniqueName: \"kubernetes.io/projected/cab4bec0-740d-47c2-9847-c4388e57b8dd-kube-api-access-q59pr\") pod \"kube-proxy-k9h5h\" (UID: 
\"cab4bec0-740d-47c2-9847-c4388e57b8dd\") " pod="kube-system/kube-proxy-k9h5h" Apr 12 18:31:10.555827 kubelet[2402]: I0412 18:31:10.555816 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cilium-config-path\") pod \"cilium-zxwbm\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " pod="kube-system/cilium-zxwbm" Apr 12 18:31:10.555896 kubelet[2402]: I0412 18:31:10.555887 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-xtables-lock\") pod \"cilium-zxwbm\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " pod="kube-system/cilium-zxwbm" Apr 12 18:31:10.555974 kubelet[2402]: I0412 18:31:10.555965 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-host-proc-sys-kernel\") pod \"cilium-zxwbm\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " pod="kube-system/cilium-zxwbm" Apr 12 18:31:10.556049 kubelet[2402]: I0412 18:31:10.556039 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cilium-cgroup\") pod \"cilium-zxwbm\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " pod="kube-system/cilium-zxwbm" Apr 12 18:31:10.556160 kubelet[2402]: I0412 18:31:10.556149 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-etc-cni-netd\") pod \"cilium-zxwbm\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " pod="kube-system/cilium-zxwbm" Apr 12 18:31:10.556241 
kubelet[2402]: I0412 18:31:10.556232 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f85174c-79b1-4e8d-a606-b1c1283bf5be-hubble-tls\") pod \"cilium-zxwbm\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " pod="kube-system/cilium-zxwbm" Apr 12 18:31:10.556330 kubelet[2402]: I0412 18:31:10.556321 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cilium-run\") pod \"cilium-zxwbm\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " pod="kube-system/cilium-zxwbm" Apr 12 18:31:10.556412 kubelet[2402]: I0412 18:31:10.556403 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-lib-modules\") pod \"cilium-zxwbm\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " pod="kube-system/cilium-zxwbm" Apr 12 18:31:10.556495 kubelet[2402]: I0412 18:31:10.556486 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-host-proc-sys-net\") pod \"cilium-zxwbm\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " pod="kube-system/cilium-zxwbm" Apr 12 18:31:10.556575 kubelet[2402]: I0412 18:31:10.556566 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cab4bec0-740d-47c2-9847-c4388e57b8dd-kube-proxy\") pod \"kube-proxy-k9h5h\" (UID: \"cab4bec0-740d-47c2-9847-c4388e57b8dd\") " pod="kube-system/kube-proxy-k9h5h" Apr 12 18:31:10.556648 kubelet[2402]: I0412 18:31:10.556639 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-bpf-maps\") pod \"cilium-zxwbm\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " pod="kube-system/cilium-zxwbm" Apr 12 18:31:10.556718 kubelet[2402]: I0412 18:31:10.556710 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cni-path\") pod \"cilium-zxwbm\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " pod="kube-system/cilium-zxwbm" Apr 12 18:31:10.556790 kubelet[2402]: I0412 18:31:10.556781 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psprr\" (UniqueName: \"kubernetes.io/projected/9f85174c-79b1-4e8d-a606-b1c1283bf5be-kube-api-access-psprr\") pod \"cilium-zxwbm\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " pod="kube-system/cilium-zxwbm" Apr 12 18:31:10.556866 kubelet[2402]: I0412 18:31:10.556855 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-hostproc\") pod \"cilium-zxwbm\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " pod="kube-system/cilium-zxwbm" Apr 12 18:31:10.556944 kubelet[2402]: I0412 18:31:10.556935 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f85174c-79b1-4e8d-a606-b1c1283bf5be-clustermesh-secrets\") pod \"cilium-zxwbm\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " pod="kube-system/cilium-zxwbm" Apr 12 18:31:10.557017 kubelet[2402]: I0412 18:31:10.557008 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cab4bec0-740d-47c2-9847-c4388e57b8dd-lib-modules\") pod \"kube-proxy-k9h5h\" (UID: 
\"cab4bec0-740d-47c2-9847-c4388e57b8dd\") " pod="kube-system/kube-proxy-k9h5h" Apr 12 18:31:10.767725 env[1311]: time="2024-04-12T18:31:10.767047737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k9h5h,Uid:cab4bec0-740d-47c2-9847-c4388e57b8dd,Namespace:kube-system,Attempt:0,}" Apr 12 18:31:10.817640 env[1311]: time="2024-04-12T18:31:10.817578008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:31:10.817820 env[1311]: time="2024-04-12T18:31:10.817797447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:31:10.817906 env[1311]: time="2024-04-12T18:31:10.817885606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:31:10.818230 env[1311]: time="2024-04-12T18:31:10.818194684Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e87bf4d74a8d77c2b776c19f43f3a7778bb64ffe415dd2135a3e242e49146c9 pid=2477 runtime=io.containerd.runc.v2 Apr 12 18:31:10.830533 systemd[1]: Started cri-containerd-5e87bf4d74a8d77c2b776c19f43f3a7778bb64ffe415dd2135a3e242e49146c9.scope. 
Apr 12 18:31:10.880038 env[1311]: time="2024-04-12T18:31:10.879998233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k9h5h,Uid:cab4bec0-740d-47c2-9847-c4388e57b8dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e87bf4d74a8d77c2b776c19f43f3a7778bb64ffe415dd2135a3e242e49146c9\"" Apr 12 18:31:10.883218 env[1311]: time="2024-04-12T18:31:10.883185449Z" level=info msg="CreateContainer within sandbox \"5e87bf4d74a8d77c2b776c19f43f3a7778bb64ffe415dd2135a3e242e49146c9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 12 18:31:10.915103 kubelet[2402]: I0412 18:31:10.914488 2402 topology_manager.go:215] "Topology Admit Handler" podUID="5c173a3d-a374-4583-85f1-df6517e82948" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-zdfvw" Apr 12 18:31:10.919336 systemd[1]: Created slice kubepods-besteffort-pod5c173a3d_a374_4583_85f1_df6517e82948.slice. Apr 12 18:31:10.947962 env[1311]: time="2024-04-12T18:31:10.947910697Z" level=info msg="CreateContainer within sandbox \"5e87bf4d74a8d77c2b776c19f43f3a7778bb64ffe415dd2135a3e242e49146c9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"106ae26fe8a723de77d05655a5049571aece439964786d19ff9e4e151eaf375e\"" Apr 12 18:31:10.949227 env[1311]: time="2024-04-12T18:31:10.949203967Z" level=info msg="StartContainer for \"106ae26fe8a723de77d05655a5049571aece439964786d19ff9e4e151eaf375e\"" Apr 12 18:31:10.960815 kubelet[2402]: I0412 18:31:10.960702 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c173a3d-a374-4583-85f1-df6517e82948-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-zdfvw\" (UID: \"5c173a3d-a374-4583-85f1-df6517e82948\") " pod="kube-system/cilium-operator-6bc8ccdb58-zdfvw" Apr 12 18:31:10.960815 kubelet[2402]: I0412 18:31:10.960759 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xpxdj\" (UniqueName: \"kubernetes.io/projected/5c173a3d-a374-4583-85f1-df6517e82948-kube-api-access-xpxdj\") pod \"cilium-operator-6bc8ccdb58-zdfvw\" (UID: \"5c173a3d-a374-4583-85f1-df6517e82948\") " pod="kube-system/cilium-operator-6bc8ccdb58-zdfvw" Apr 12 18:31:10.969043 systemd[1]: Started cri-containerd-106ae26fe8a723de77d05655a5049571aece439964786d19ff9e4e151eaf375e.scope. Apr 12 18:31:11.001582 env[1311]: time="2024-04-12T18:31:11.001522745Z" level=info msg="StartContainer for \"106ae26fe8a723de77d05655a5049571aece439964786d19ff9e4e151eaf375e\" returns successfully" Apr 12 18:31:11.659027 kubelet[2402]: E0412 18:31:11.659005 2402 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Apr 12 18:31:11.659263 kubelet[2402]: E0412 18:31:11.659249 2402 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cilium-config-path podName:9f85174c-79b1-4e8d-a606-b1c1283bf5be nodeName:}" failed. No retries permitted until 2024-04-12 18:31:12.159226187 +0000 UTC m=+15.789406474 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cilium-config-path") pod "cilium-zxwbm" (UID: "9f85174c-79b1-4e8d-a606-b1c1283bf5be") : failed to sync configmap cache: timed out waiting for the condition Apr 12 18:31:11.671583 systemd[1]: run-containerd-runc-k8s.io-5e87bf4d74a8d77c2b776c19f43f3a7778bb64ffe415dd2135a3e242e49146c9-runc.4JkLWs.mount: Deactivated successfully. 
Apr 12 18:31:11.826653 env[1311]: time="2024-04-12T18:31:11.826602546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-zdfvw,Uid:5c173a3d-a374-4583-85f1-df6517e82948,Namespace:kube-system,Attempt:0,}" Apr 12 18:31:11.909917 env[1311]: time="2024-04-12T18:31:11.909686270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:31:11.909917 env[1311]: time="2024-04-12T18:31:11.909724349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:31:11.909917 env[1311]: time="2024-04-12T18:31:11.909735109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:31:11.910390 env[1311]: time="2024-04-12T18:31:11.909899308Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f pid=2677 runtime=io.containerd.runc.v2 Apr 12 18:31:11.922174 systemd[1]: Started cri-containerd-686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f.scope. 
Apr 12 18:31:11.958922 env[1311]: time="2024-04-12T18:31:11.958874317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-zdfvw,Uid:5c173a3d-a374-4583-85f1-df6517e82948,Namespace:kube-system,Attempt:0,} returns sandbox id \"686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f\"" Apr 12 18:31:11.962297 env[1311]: time="2024-04-12T18:31:11.962263732Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 12 18:31:12.274012 env[1311]: time="2024-04-12T18:31:12.273910530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zxwbm,Uid:9f85174c-79b1-4e8d-a606-b1c1283bf5be,Namespace:kube-system,Attempt:0,}" Apr 12 18:31:12.327710 env[1311]: time="2024-04-12T18:31:12.327632111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:31:12.327858 env[1311]: time="2024-04-12T18:31:12.327722831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:31:12.327858 env[1311]: time="2024-04-12T18:31:12.327751430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:31:12.328094 env[1311]: time="2024-04-12T18:31:12.328011989Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535 pid=2719 runtime=io.containerd.runc.v2 Apr 12 18:31:12.338788 systemd[1]: Started cri-containerd-9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535.scope. 
Apr 12 18:31:12.361176 env[1311]: time="2024-04-12T18:31:12.361130155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zxwbm,Uid:9f85174c-79b1-4e8d-a606-b1c1283bf5be,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\"" Apr 12 18:31:12.670180 systemd[1]: run-containerd-runc-k8s.io-686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f-runc.fnbgif.mount: Deactivated successfully. Apr 12 18:31:13.584199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1150244934.mount: Deactivated successfully. Apr 12 18:31:14.397804 env[1311]: time="2024-04-12T18:31:14.397751408Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:14.412942 env[1311]: time="2024-04-12T18:31:14.412896545Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:14.421174 env[1311]: time="2024-04-12T18:31:14.421129169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:14.421499 env[1311]: time="2024-04-12T18:31:14.421469126Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 12 18:31:14.423590 env[1311]: time="2024-04-12T18:31:14.422896877Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 12 18:31:14.424461 env[1311]: time="2024-04-12T18:31:14.424185668Z" level=info msg="CreateContainer within sandbox \"686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 12 18:31:14.488047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2137068202.mount: Deactivated successfully. Apr 12 18:31:14.492479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount492475341.mount: Deactivated successfully. Apr 12 18:31:14.514234 env[1311]: time="2024-04-12T18:31:14.514183935Z" level=info msg="CreateContainer within sandbox \"686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114\"" Apr 12 18:31:14.514663 env[1311]: time="2024-04-12T18:31:14.514619852Z" level=info msg="StartContainer for \"2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114\"" Apr 12 18:31:14.534611 systemd[1]: Started cri-containerd-2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114.scope. 
Apr 12 18:31:14.569304 env[1311]: time="2024-04-12T18:31:14.569258439Z" level=info msg="StartContainer for \"2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114\" returns successfully" Apr 12 18:31:15.582454 kubelet[2402]: I0412 18:31:15.582423 2402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-k9h5h" podStartSLOduration=5.58238796 podCreationTimestamp="2024-04-12 18:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:31:11.55973542 +0000 UTC m=+15.189915747" watchObservedRunningTime="2024-04-12 18:31:15.58238796 +0000 UTC m=+19.212568287" Apr 12 18:31:15.583123 kubelet[2402]: I0412 18:31:15.583105 2402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-zdfvw" podStartSLOduration=3.121373499 podCreationTimestamp="2024-04-12 18:31:10 +0000 UTC" firstStartedPulling="2024-04-12 18:31:11.960055668 +0000 UTC m=+15.590235955" lastFinishedPulling="2024-04-12 18:31:14.421763404 +0000 UTC m=+18.051943731" observedRunningTime="2024-04-12 18:31:15.581851763 +0000 UTC m=+19.212032090" watchObservedRunningTime="2024-04-12 18:31:15.583081275 +0000 UTC m=+19.213261602" Apr 12 18:31:20.797846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1988497067.mount: Deactivated successfully. 
Apr 12 18:31:23.577339 env[1311]: time="2024-04-12T18:31:23.577298247Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:23.585212 env[1311]: time="2024-04-12T18:31:23.585181720Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:23.590551 env[1311]: time="2024-04-12T18:31:23.590525808Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:23.591273 env[1311]: time="2024-04-12T18:31:23.591248484Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 12 18:31:23.593435 env[1311]: time="2024-04-12T18:31:23.593407631Z" level=info msg="CreateContainer within sandbox \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:31:23.640973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2372405012.mount: Deactivated successfully. 
Apr 12 18:31:23.654211 env[1311]: time="2024-04-12T18:31:23.654164071Z" level=info msg="CreateContainer within sandbox \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1\"" Apr 12 18:31:23.656088 env[1311]: time="2024-04-12T18:31:23.656042340Z" level=info msg="StartContainer for \"77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1\"" Apr 12 18:31:23.676475 systemd[1]: Started cri-containerd-77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1.scope. Apr 12 18:31:23.709217 systemd[1]: cri-containerd-77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1.scope: Deactivated successfully. Apr 12 18:31:23.711522 env[1311]: time="2024-04-12T18:31:23.711463812Z" level=info msg="StartContainer for \"77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1\" returns successfully" Apr 12 18:31:24.635384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1-rootfs.mount: Deactivated successfully. 
Apr 12 18:31:24.831769 env[1311]: time="2024-04-12T18:31:24.831718049Z" level=info msg="shim disconnected" id=77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1 Apr 12 18:31:24.831769 env[1311]: time="2024-04-12T18:31:24.831764569Z" level=warning msg="cleaning up after shim disconnected" id=77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1 namespace=k8s.io Apr 12 18:31:24.831769 env[1311]: time="2024-04-12T18:31:24.831774088Z" level=info msg="cleaning up dead shim" Apr 12 18:31:24.839179 env[1311]: time="2024-04-12T18:31:24.839132606Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:31:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2842 runtime=io.containerd.runc.v2\n" Apr 12 18:31:25.594108 env[1311]: time="2024-04-12T18:31:25.594046568Z" level=info msg="CreateContainer within sandbox \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 18:31:25.638261 env[1311]: time="2024-04-12T18:31:25.638215793Z" level=info msg="CreateContainer within sandbox \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60\"" Apr 12 18:31:25.638861 env[1311]: time="2024-04-12T18:31:25.638831430Z" level=info msg="StartContainer for \"8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60\"" Apr 12 18:31:25.662039 systemd[1]: Started cri-containerd-8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60.scope. Apr 12 18:31:25.688795 env[1311]: time="2024-04-12T18:31:25.688738583Z" level=info msg="StartContainer for \"8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60\" returns successfully" Apr 12 18:31:25.697247 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:31:25.697452 systemd[1]: Stopped systemd-sysctl.service. 
Apr 12 18:31:25.697625 systemd[1]: Stopping systemd-sysctl.service... Apr 12 18:31:25.698985 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:31:25.701585 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 12 18:31:25.708999 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:31:25.710954 systemd[1]: cri-containerd-8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60.scope: Deactivated successfully. Apr 12 18:31:25.726999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60-rootfs.mount: Deactivated successfully. Apr 12 18:31:25.751132 env[1311]: time="2024-04-12T18:31:25.751088384Z" level=info msg="shim disconnected" id=8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60 Apr 12 18:31:25.751380 env[1311]: time="2024-04-12T18:31:25.751351102Z" level=warning msg="cleaning up after shim disconnected" id=8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60 namespace=k8s.io Apr 12 18:31:25.751439 env[1311]: time="2024-04-12T18:31:25.751427302Z" level=info msg="cleaning up dead shim" Apr 12 18:31:25.758341 env[1311]: time="2024-04-12T18:31:25.758306222Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:31:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2907 runtime=io.containerd.runc.v2\n" Apr 12 18:31:26.597604 env[1311]: time="2024-04-12T18:31:26.596128088Z" level=info msg="CreateContainer within sandbox \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 18:31:26.674542 env[1311]: time="2024-04-12T18:31:26.674491363Z" level=info msg="CreateContainer within sandbox \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2\"" Apr 12 
18:31:26.675478 env[1311]: time="2024-04-12T18:31:26.675443518Z" level=info msg="StartContainer for \"e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2\"" Apr 12 18:31:26.694232 systemd[1]: run-containerd-runc-k8s.io-e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2-runc.PkoZ7U.mount: Deactivated successfully. Apr 12 18:31:26.696523 systemd[1]: Started cri-containerd-e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2.scope. Apr 12 18:31:26.725356 systemd[1]: cri-containerd-e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2.scope: Deactivated successfully. Apr 12 18:31:26.728723 env[1311]: time="2024-04-12T18:31:26.728507856Z" level=info msg="StartContainer for \"e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2\" returns successfully" Apr 12 18:31:26.747449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2-rootfs.mount: Deactivated successfully. 
Apr 12 18:31:26.763307 env[1311]: time="2024-04-12T18:31:26.763262859Z" level=info msg="shim disconnected" id=e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2 Apr 12 18:31:26.763493 env[1311]: time="2024-04-12T18:31:26.763474498Z" level=warning msg="cleaning up after shim disconnected" id=e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2 namespace=k8s.io Apr 12 18:31:26.763573 env[1311]: time="2024-04-12T18:31:26.763560177Z" level=info msg="cleaning up dead shim" Apr 12 18:31:26.771306 env[1311]: time="2024-04-12T18:31:26.771265094Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:31:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2965 runtime=io.containerd.runc.v2\n" Apr 12 18:31:27.601657 env[1311]: time="2024-04-12T18:31:27.601566227Z" level=info msg="CreateContainer within sandbox \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 18:31:27.666182 env[1311]: time="2024-04-12T18:31:27.666053386Z" level=info msg="CreateContainer within sandbox \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873\"" Apr 12 18:31:27.667529 env[1311]: time="2024-04-12T18:31:27.666685103Z" level=info msg="StartContainer for \"fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873\"" Apr 12 18:31:27.683495 systemd[1]: Started cri-containerd-fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873.scope. Apr 12 18:31:27.707724 systemd[1]: cri-containerd-fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873.scope: Deactivated successfully. 
Apr 12 18:31:27.709463 env[1311]: time="2024-04-12T18:31:27.709323184Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f85174c_79b1_4e8d_a606_b1c1283bf5be.slice/cri-containerd-fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873.scope/memory.events\": no such file or directory" Apr 12 18:31:27.715777 env[1311]: time="2024-04-12T18:31:27.715735068Z" level=info msg="StartContainer for \"fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873\" returns successfully" Apr 12 18:31:27.729606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873-rootfs.mount: Deactivated successfully. Apr 12 18:31:27.744648 env[1311]: time="2024-04-12T18:31:27.744606026Z" level=info msg="shim disconnected" id=fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873 Apr 12 18:31:27.744916 env[1311]: time="2024-04-12T18:31:27.744895305Z" level=warning msg="cleaning up after shim disconnected" id=fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873 namespace=k8s.io Apr 12 18:31:27.745003 env[1311]: time="2024-04-12T18:31:27.744989544Z" level=info msg="cleaning up dead shim" Apr 12 18:31:27.751999 env[1311]: time="2024-04-12T18:31:27.751970265Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:31:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3018 runtime=io.containerd.runc.v2\n" Apr 12 18:31:28.612082 env[1311]: time="2024-04-12T18:31:28.610298985Z" level=info msg="CreateContainer within sandbox \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 12 18:31:28.654552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2438343259.mount: Deactivated successfully. 
Apr 12 18:31:28.655597 env[1311]: time="2024-04-12T18:31:28.655548455Z" level=info msg="CreateContainer within sandbox \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e\"" Apr 12 18:31:28.657345 env[1311]: time="2024-04-12T18:31:28.657318085Z" level=info msg="StartContainer for \"4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e\"" Apr 12 18:31:28.675858 systemd[1]: Started cri-containerd-4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e.scope. Apr 12 18:31:28.715708 env[1311]: time="2024-04-12T18:31:28.715668442Z" level=info msg="StartContainer for \"4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e\" returns successfully" Apr 12 18:31:28.795102 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Apr 12 18:31:28.797406 kubelet[2402]: I0412 18:31:28.796592 2402 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Apr 12 18:31:28.829368 kubelet[2402]: I0412 18:31:28.829329 2402 topology_manager.go:215] "Topology Admit Handler" podUID="fb6b6a11-8bd0-4e34-a592-b3a46130e2fc" podNamespace="kube-system" podName="coredns-5dd5756b68-q7hz8" Apr 12 18:31:28.831406 kubelet[2402]: I0412 18:31:28.831002 2402 topology_manager.go:215] "Topology Admit Handler" podUID="2c6a4375-891f-4cef-bd8f-7220b3f595c6" podNamespace="kube-system" podName="coredns-5dd5756b68-bvlj7" Apr 12 18:31:28.834538 systemd[1]: Created slice kubepods-burstable-podfb6b6a11_8bd0_4e34_a592_b3a46130e2fc.slice. Apr 12 18:31:28.839945 systemd[1]: Created slice kubepods-burstable-pod2c6a4375_891f_4cef_bd8f_7220b3f595c6.slice. 
Apr 12 18:31:28.843794 kubelet[2402]: W0412 18:31:28.843763 2402 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.3-a-58e6b5da18" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-58e6b5da18' and this object Apr 12 18:31:28.843794 kubelet[2402]: E0412 18:31:28.843797 2402 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.3-a-58e6b5da18" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-58e6b5da18' and this object Apr 12 18:31:28.868090 kubelet[2402]: I0412 18:31:28.867980 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlsct\" (UniqueName: \"kubernetes.io/projected/fb6b6a11-8bd0-4e34-a592-b3a46130e2fc-kube-api-access-vlsct\") pod \"coredns-5dd5756b68-q7hz8\" (UID: \"fb6b6a11-8bd0-4e34-a592-b3a46130e2fc\") " pod="kube-system/coredns-5dd5756b68-q7hz8" Apr 12 18:31:28.868090 kubelet[2402]: I0412 18:31:28.868053 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqzsj\" (UniqueName: \"kubernetes.io/projected/2c6a4375-891f-4cef-bd8f-7220b3f595c6-kube-api-access-nqzsj\") pod \"coredns-5dd5756b68-bvlj7\" (UID: \"2c6a4375-891f-4cef-bd8f-7220b3f595c6\") " pod="kube-system/coredns-5dd5756b68-bvlj7" Apr 12 18:31:28.868241 kubelet[2402]: I0412 18:31:28.868103 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb6b6a11-8bd0-4e34-a592-b3a46130e2fc-config-volume\") pod \"coredns-5dd5756b68-q7hz8\" (UID: \"fb6b6a11-8bd0-4e34-a592-b3a46130e2fc\") " 
pod="kube-system/coredns-5dd5756b68-q7hz8" Apr 12 18:31:28.868241 kubelet[2402]: I0412 18:31:28.868123 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c6a4375-891f-4cef-bd8f-7220b3f595c6-config-volume\") pod \"coredns-5dd5756b68-bvlj7\" (UID: \"2c6a4375-891f-4cef-bd8f-7220b3f595c6\") " pod="kube-system/coredns-5dd5756b68-bvlj7" Apr 12 18:31:29.289095 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Apr 12 18:31:29.618892 kubelet[2402]: I0412 18:31:29.618848 2402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zxwbm" podStartSLOduration=8.388965685 podCreationTimestamp="2024-04-12 18:31:10 +0000 UTC" firstStartedPulling="2024-04-12 18:31:12.362249147 +0000 UTC m=+15.992429474" lastFinishedPulling="2024-04-12 18:31:23.592095759 +0000 UTC m=+27.222276086" observedRunningTime="2024-04-12 18:31:29.618742778 +0000 UTC m=+33.248923105" watchObservedRunningTime="2024-04-12 18:31:29.618812297 +0000 UTC m=+33.248992624" Apr 12 18:31:29.656274 systemd[1]: run-containerd-runc-k8s.io-4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e-runc.KvX7aL.mount: Deactivated successfully. 
Apr 12 18:31:30.038860 env[1311]: time="2024-04-12T18:31:30.038208013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q7hz8,Uid:fb6b6a11-8bd0-4e34-a592-b3a46130e2fc,Namespace:kube-system,Attempt:0,}" Apr 12 18:31:30.042873 env[1311]: time="2024-04-12T18:31:30.042837468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-bvlj7,Uid:2c6a4375-891f-4cef-bd8f-7220b3f595c6,Namespace:kube-system,Attempt:0,}" Apr 12 18:31:30.922858 systemd-networkd[1460]: cilium_host: Link UP Apr 12 18:31:30.923612 systemd-networkd[1460]: cilium_net: Link UP Apr 12 18:31:30.938187 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Apr 12 18:31:30.938265 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Apr 12 18:31:30.939750 systemd-networkd[1460]: cilium_net: Gained carrier Apr 12 18:31:30.940047 systemd-networkd[1460]: cilium_host: Gained carrier Apr 12 18:31:31.052227 systemd-networkd[1460]: cilium_host: Gained IPv6LL Apr 12 18:31:31.090612 systemd-networkd[1460]: cilium_vxlan: Link UP Apr 12 18:31:31.090617 systemd-networkd[1460]: cilium_vxlan: Gained carrier Apr 12 18:31:31.347081 kernel: NET: Registered PF_ALG protocol family Apr 12 18:31:31.692294 systemd-networkd[1460]: cilium_net: Gained IPv6LL Apr 12 18:31:32.052608 systemd-networkd[1460]: lxc_health: Link UP Apr 12 18:31:32.063516 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Apr 12 18:31:32.063026 systemd-networkd[1460]: lxc_health: Gained carrier Apr 12 18:31:32.655994 systemd-networkd[1460]: lxc82f0ff2e3950: Link UP Apr 12 18:31:32.670111 kernel: eth0: renamed from tmpcce0e Apr 12 18:31:32.685586 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc82f0ff2e3950: link becomes ready Apr 12 18:31:32.685355 systemd-networkd[1460]: lxc82f0ff2e3950: Gained carrier Apr 12 18:31:32.689023 systemd-networkd[1460]: lxc944ace7fe987: Link UP Apr 12 18:31:32.706446 kernel: eth0: renamed from tmp1f216 Apr 12 18:31:32.718437 
systemd-networkd[1460]: lxc944ace7fe987: Gained carrier Apr 12 18:31:32.719181 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc944ace7fe987: link becomes ready Apr 12 18:31:32.844283 systemd-networkd[1460]: cilium_vxlan: Gained IPv6LL Apr 12 18:31:33.869255 systemd-networkd[1460]: lxc82f0ff2e3950: Gained IPv6LL Apr 12 18:31:34.060210 systemd-networkd[1460]: lxc_health: Gained IPv6LL Apr 12 18:31:34.061197 systemd-networkd[1460]: lxc944ace7fe987: Gained IPv6LL Apr 12 18:31:36.195728 env[1311]: time="2024-04-12T18:31:36.195556476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:31:36.195728 env[1311]: time="2024-04-12T18:31:36.195593756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:31:36.195728 env[1311]: time="2024-04-12T18:31:36.195603755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:31:36.196093 env[1311]: time="2024-04-12T18:31:36.195748635Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1f216eea5721a97093e3902984a7137a71f2644d6b6635b67c7c7406d661e729 pid=3565 runtime=io.containerd.runc.v2 Apr 12 18:31:36.213530 env[1311]: time="2024-04-12T18:31:36.210861879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:31:36.213530 env[1311]: time="2024-04-12T18:31:36.210949879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:31:36.213530 env[1311]: time="2024-04-12T18:31:36.210978158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:31:36.213530 env[1311]: time="2024-04-12T18:31:36.211306677Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cce0e43546a75b1c96f11d73bc9705ee2b12f5e99002787e9dc04839ce8c0726 pid=3585 runtime=io.containerd.runc.v2 Apr 12 18:31:36.218442 systemd[1]: run-containerd-runc-k8s.io-1f216eea5721a97093e3902984a7137a71f2644d6b6635b67c7c7406d661e729-runc.WUUqaw.mount: Deactivated successfully. Apr 12 18:31:36.227519 systemd[1]: Started cri-containerd-1f216eea5721a97093e3902984a7137a71f2644d6b6635b67c7c7406d661e729.scope. Apr 12 18:31:36.261236 systemd[1]: Started cri-containerd-cce0e43546a75b1c96f11d73bc9705ee2b12f5e99002787e9dc04839ce8c0726.scope. Apr 12 18:31:36.276779 env[1311]: time="2024-04-12T18:31:36.276728749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q7hz8,Uid:fb6b6a11-8bd0-4e34-a592-b3a46130e2fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f216eea5721a97093e3902984a7137a71f2644d6b6635b67c7c7406d661e729\"" Apr 12 18:31:36.282305 env[1311]: time="2024-04-12T18:31:36.282268762Z" level=info msg="CreateContainer within sandbox \"1f216eea5721a97093e3902984a7137a71f2644d6b6635b67c7c7406d661e729\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 18:31:36.309028 env[1311]: time="2024-04-12T18:31:36.308961268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-bvlj7,Uid:2c6a4375-891f-4cef-bd8f-7220b3f595c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"cce0e43546a75b1c96f11d73bc9705ee2b12f5e99002787e9dc04839ce8c0726\"" Apr 12 18:31:36.315947 env[1311]: time="2024-04-12T18:31:36.315893673Z" level=info msg="CreateContainer within sandbox \"cce0e43546a75b1c96f11d73bc9705ee2b12f5e99002787e9dc04839ce8c0726\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 18:31:36.338169 env[1311]: time="2024-04-12T18:31:36.338117562Z" level=info 
msg="CreateContainer within sandbox \"1f216eea5721a97093e3902984a7137a71f2644d6b6635b67c7c7406d661e729\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0de60017c94036ac89d7ad2a601bc83a2f1b4c757e0379a642d2032a3939022\"" Apr 12 18:31:36.340321 env[1311]: time="2024-04-12T18:31:36.340289271Z" level=info msg="StartContainer for \"a0de60017c94036ac89d7ad2a601bc83a2f1b4c757e0379a642d2032a3939022\"" Apr 12 18:31:36.366183 systemd[1]: Started cri-containerd-a0de60017c94036ac89d7ad2a601bc83a2f1b4c757e0379a642d2032a3939022.scope. Apr 12 18:31:36.399088 env[1311]: time="2024-04-12T18:31:36.397996822Z" level=info msg="CreateContainer within sandbox \"cce0e43546a75b1c96f11d73bc9705ee2b12f5e99002787e9dc04839ce8c0726\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"61571b2eae1af28b5763ace9d33b0b13ce3027618d0e71fe2c34812bb8bf3568\"" Apr 12 18:31:36.399088 env[1311]: time="2024-04-12T18:31:36.398715458Z" level=info msg="StartContainer for \"61571b2eae1af28b5763ace9d33b0b13ce3027618d0e71fe2c34812bb8bf3568\"" Apr 12 18:31:36.410551 env[1311]: time="2024-04-12T18:31:36.410503159Z" level=info msg="StartContainer for \"a0de60017c94036ac89d7ad2a601bc83a2f1b4c757e0379a642d2032a3939022\" returns successfully" Apr 12 18:31:36.422807 systemd[1]: Started cri-containerd-61571b2eae1af28b5763ace9d33b0b13ce3027618d0e71fe2c34812bb8bf3568.scope. 
Apr 12 18:31:36.459559 env[1311]: time="2024-04-12T18:31:36.459446234Z" level=info msg="StartContainer for \"61571b2eae1af28b5763ace9d33b0b13ce3027618d0e71fe2c34812bb8bf3568\" returns successfully" Apr 12 18:31:36.644407 kubelet[2402]: I0412 18:31:36.644378 2402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-q7hz8" podStartSLOduration=26.644342308 podCreationTimestamp="2024-04-12 18:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:31:36.635351073 +0000 UTC m=+40.265531400" watchObservedRunningTime="2024-04-12 18:31:36.644342308 +0000 UTC m=+40.274522595" Apr 12 18:31:37.199743 systemd[1]: run-containerd-runc-k8s.io-cce0e43546a75b1c96f11d73bc9705ee2b12f5e99002787e9dc04839ce8c0726-runc.6bw4mS.mount: Deactivated successfully. Apr 12 18:33:52.426888 systemd[1]: Started sshd@5-10.200.20.15:22-10.200.12.6:36494.service. Apr 12 18:33:52.855204 sshd[3745]: Accepted publickey for core from 10.200.12.6 port 36494 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:33:52.856966 sshd[3745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:33:52.862132 systemd[1]: Started session-8.scope. Apr 12 18:33:52.862675 systemd-logind[1301]: New session 8 of user core. Apr 12 18:33:53.318846 sshd[3745]: pam_unix(sshd:session): session closed for user core Apr 12 18:33:53.322513 systemd[1]: session-8.scope: Deactivated successfully. Apr 12 18:33:53.323583 systemd-logind[1301]: Session 8 logged out. Waiting for processes to exit. Apr 12 18:33:53.323686 systemd[1]: sshd@5-10.200.20.15:22-10.200.12.6:36494.service: Deactivated successfully. Apr 12 18:33:53.325037 systemd-logind[1301]: Removed session 8. Apr 12 18:33:58.386204 systemd[1]: Started sshd@6-10.200.20.15:22-10.200.12.6:41980.service. 
Apr 12 18:33:58.783047 sshd[3761]: Accepted publickey for core from 10.200.12.6 port 41980 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:33:58.784731 sshd[3761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:33:58.789015 systemd[1]: Started session-9.scope. Apr 12 18:33:58.790339 systemd-logind[1301]: New session 9 of user core. Apr 12 18:33:59.140484 sshd[3761]: pam_unix(sshd:session): session closed for user core Apr 12 18:33:59.142905 systemd[1]: sshd@6-10.200.20.15:22-10.200.12.6:41980.service: Deactivated successfully. Apr 12 18:33:59.143660 systemd[1]: session-9.scope: Deactivated successfully. Apr 12 18:33:59.144753 systemd-logind[1301]: Session 9 logged out. Waiting for processes to exit. Apr 12 18:33:59.145503 systemd-logind[1301]: Removed session 9. Apr 12 18:34:04.209423 systemd[1]: Started sshd@7-10.200.20.15:22-10.200.12.6:41994.service. Apr 12 18:34:04.616531 sshd[3775]: Accepted publickey for core from 10.200.12.6 port 41994 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:04.618155 sshd[3775]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:04.622601 systemd-logind[1301]: New session 10 of user core. Apr 12 18:34:04.623201 systemd[1]: Started session-10.scope. Apr 12 18:34:04.983275 sshd[3775]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:04.986159 systemd[1]: session-10.scope: Deactivated successfully. Apr 12 18:34:04.986883 systemd-logind[1301]: Session 10 logged out. Waiting for processes to exit. Apr 12 18:34:04.986990 systemd[1]: sshd@7-10.200.20.15:22-10.200.12.6:41994.service: Deactivated successfully. Apr 12 18:34:04.988178 systemd-logind[1301]: Removed session 10. Apr 12 18:34:10.050030 systemd[1]: Started sshd@8-10.200.20.15:22-10.200.12.6:53568.service. 
Apr 12 18:34:10.445208 sshd[3790]: Accepted publickey for core from 10.200.12.6 port 53568 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:10.446570 sshd[3790]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:10.451182 systemd[1]: Started session-11.scope. Apr 12 18:34:10.452169 systemd-logind[1301]: New session 11 of user core. Apr 12 18:34:10.812319 sshd[3790]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:10.815462 systemd[1]: sshd@8-10.200.20.15:22-10.200.12.6:53568.service: Deactivated successfully. Apr 12 18:34:10.816237 systemd[1]: session-11.scope: Deactivated successfully. Apr 12 18:34:10.817155 systemd-logind[1301]: Session 11 logged out. Waiting for processes to exit. Apr 12 18:34:10.817954 systemd-logind[1301]: Removed session 11. Apr 12 18:34:10.881978 systemd[1]: Started sshd@9-10.200.20.15:22-10.200.12.6:53578.service. Apr 12 18:34:11.285438 sshd[3802]: Accepted publickey for core from 10.200.12.6 port 53578 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:11.287099 sshd[3802]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:11.291328 systemd[1]: Started session-12.scope. Apr 12 18:34:11.292004 systemd-logind[1301]: New session 12 of user core. Apr 12 18:34:12.246482 sshd[3802]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:12.249678 systemd-logind[1301]: Session 12 logged out. Waiting for processes to exit. Apr 12 18:34:12.249832 systemd[1]: sshd@9-10.200.20.15:22-10.200.12.6:53578.service: Deactivated successfully. Apr 12 18:34:12.250532 systemd[1]: session-12.scope: Deactivated successfully. Apr 12 18:34:12.251277 systemd-logind[1301]: Removed session 12. Apr 12 18:34:12.314015 systemd[1]: Started sshd@10-10.200.20.15:22-10.200.12.6:53594.service. 
Apr 12 18:34:12.711155 sshd[3814]: Accepted publickey for core from 10.200.12.6 port 53594 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:12.712789 sshd[3814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:12.716222 systemd-logind[1301]: New session 13 of user core. Apr 12 18:34:12.717777 systemd[1]: Started session-13.scope. Apr 12 18:34:13.067386 sshd[3814]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:13.071206 systemd[1]: sshd@10-10.200.20.15:22-10.200.12.6:53594.service: Deactivated successfully. Apr 12 18:34:13.071957 systemd[1]: session-13.scope: Deactivated successfully. Apr 12 18:34:13.073274 systemd-logind[1301]: Session 13 logged out. Waiting for processes to exit. Apr 12 18:34:13.074134 systemd-logind[1301]: Removed session 13. Apr 12 18:34:18.140835 systemd[1]: Started sshd@11-10.200.20.15:22-10.200.12.6:40978.service. Apr 12 18:34:18.569779 sshd[3826]: Accepted publickey for core from 10.200.12.6 port 40978 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:18.570930 sshd[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:18.575226 systemd[1]: Started session-14.scope. Apr 12 18:34:18.575789 systemd-logind[1301]: New session 14 of user core. Apr 12 18:34:18.946263 sshd[3826]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:18.948708 systemd-logind[1301]: Session 14 logged out. Waiting for processes to exit. Apr 12 18:34:18.948894 systemd[1]: sshd@11-10.200.20.15:22-10.200.12.6:40978.service: Deactivated successfully. Apr 12 18:34:18.949638 systemd[1]: session-14.scope: Deactivated successfully. Apr 12 18:34:18.950383 systemd-logind[1301]: Removed session 14. Apr 12 18:34:24.013805 systemd[1]: Started sshd@12-10.200.20.15:22-10.200.12.6:40984.service. 
Apr 12 18:34:24.407961 sshd[3838]: Accepted publickey for core from 10.200.12.6 port 40984 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:24.409555 sshd[3838]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:24.413816 systemd[1]: Started session-15.scope. Apr 12 18:34:24.414283 systemd-logind[1301]: New session 15 of user core. Apr 12 18:34:24.765158 sshd[3838]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:24.768389 systemd[1]: sshd@12-10.200.20.15:22-10.200.12.6:40984.service: Deactivated successfully. Apr 12 18:34:24.769150 systemd[1]: session-15.scope: Deactivated successfully. Apr 12 18:34:24.769685 systemd-logind[1301]: Session 15 logged out. Waiting for processes to exit. Apr 12 18:34:24.770366 systemd-logind[1301]: Removed session 15. Apr 12 18:34:24.837435 systemd[1]: Started sshd@13-10.200.20.15:22-10.200.12.6:40992.service. Apr 12 18:34:25.265676 sshd[3849]: Accepted publickey for core from 10.200.12.6 port 40992 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:25.267256 sshd[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:25.271493 systemd[1]: Started session-16.scope. Apr 12 18:34:25.272008 systemd-logind[1301]: New session 16 of user core. Apr 12 18:34:25.706036 sshd[3849]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:25.709077 systemd-logind[1301]: Session 16 logged out. Waiting for processes to exit. Apr 12 18:34:25.710390 systemd[1]: session-16.scope: Deactivated successfully. Apr 12 18:34:25.711516 systemd-logind[1301]: Removed session 16. Apr 12 18:34:25.711908 systemd[1]: sshd@13-10.200.20.15:22-10.200.12.6:40992.service: Deactivated successfully. Apr 12 18:34:25.773172 systemd[1]: Started sshd@14-10.200.20.15:22-10.200.12.6:44020.service. 
Apr 12 18:34:26.170735 sshd[3858]: Accepted publickey for core from 10.200.12.6 port 44020 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:26.172386 sshd[3858]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:26.176222 systemd-logind[1301]: New session 17 of user core. Apr 12 18:34:26.176670 systemd[1]: Started session-17.scope. Apr 12 18:34:27.148486 sshd[3858]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:27.151327 systemd-logind[1301]: Session 17 logged out. Waiting for processes to exit. Apr 12 18:34:27.152214 systemd[1]: sshd@14-10.200.20.15:22-10.200.12.6:44020.service: Deactivated successfully. Apr 12 18:34:27.152925 systemd[1]: session-17.scope: Deactivated successfully. Apr 12 18:34:27.153431 systemd-logind[1301]: Removed session 17. Apr 12 18:34:27.215912 systemd[1]: Started sshd@15-10.200.20.15:22-10.200.12.6:44026.service. Apr 12 18:34:27.612077 sshd[3875]: Accepted publickey for core from 10.200.12.6 port 44026 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:27.613613 sshd[3875]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:27.617983 systemd[1]: Started session-18.scope. Apr 12 18:34:27.618872 systemd-logind[1301]: New session 18 of user core. Apr 12 18:34:28.115305 sshd[3875]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:28.118005 systemd[1]: sshd@15-10.200.20.15:22-10.200.12.6:44026.service: Deactivated successfully. Apr 12 18:34:28.118755 systemd[1]: session-18.scope: Deactivated successfully. Apr 12 18:34:28.119320 systemd-logind[1301]: Session 18 logged out. Waiting for processes to exit. Apr 12 18:34:28.120327 systemd-logind[1301]: Removed session 18. Apr 12 18:34:28.183474 systemd[1]: Started sshd@16-10.200.20.15:22-10.200.12.6:44032.service. 
Apr 12 18:34:28.586675 sshd[3884]: Accepted publickey for core from 10.200.12.6 port 44032 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:28.587924 sshd[3884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:28.592279 systemd[1]: Started session-19.scope. Apr 12 18:34:28.592583 systemd-logind[1301]: New session 19 of user core. Apr 12 18:34:28.946905 sshd[3884]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:28.950099 systemd[1]: sshd@16-10.200.20.15:22-10.200.12.6:44032.service: Deactivated successfully. Apr 12 18:34:28.950113 systemd-logind[1301]: Session 19 logged out. Waiting for processes to exit. Apr 12 18:34:28.950810 systemd[1]: session-19.scope: Deactivated successfully. Apr 12 18:34:28.951591 systemd-logind[1301]: Removed session 19. Apr 12 18:34:34.019738 systemd[1]: Started sshd@17-10.200.20.15:22-10.200.12.6:44042.service. Apr 12 18:34:34.448175 sshd[3899]: Accepted publickey for core from 10.200.12.6 port 44042 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:34.449936 sshd[3899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:34.454050 systemd[1]: Started session-20.scope. Apr 12 18:34:34.455211 systemd-logind[1301]: New session 20 of user core. Apr 12 18:34:34.815963 sshd[3899]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:34.818786 systemd[1]: sshd@17-10.200.20.15:22-10.200.12.6:44042.service: Deactivated successfully. Apr 12 18:34:34.819545 systemd[1]: session-20.scope: Deactivated successfully. Apr 12 18:34:34.820167 systemd-logind[1301]: Session 20 logged out. Waiting for processes to exit. Apr 12 18:34:34.821029 systemd-logind[1301]: Removed session 20. Apr 12 18:34:39.884420 systemd[1]: Started sshd@18-10.200.20.15:22-10.200.12.6:48440.service. 
Apr 12 18:34:40.286201 sshd[3915]: Accepted publickey for core from 10.200.12.6 port 48440 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:40.288019 sshd[3915]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:40.292311 systemd[1]: Started session-21.scope. Apr 12 18:34:40.293143 systemd-logind[1301]: New session 21 of user core. Apr 12 18:34:40.643853 sshd[3915]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:40.646247 systemd[1]: sshd@18-10.200.20.15:22-10.200.12.6:48440.service: Deactivated successfully. Apr 12 18:34:40.646982 systemd[1]: session-21.scope: Deactivated successfully. Apr 12 18:34:40.647573 systemd-logind[1301]: Session 21 logged out. Waiting for processes to exit. Apr 12 18:34:40.648310 systemd-logind[1301]: Removed session 21. Apr 12 18:34:45.716911 systemd[1]: Started sshd@19-10.200.20.15:22-10.200.12.6:35882.service. Apr 12 18:34:46.147302 sshd[3928]: Accepted publickey for core from 10.200.12.6 port 35882 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:46.148577 sshd[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:46.153009 systemd[1]: Started session-22.scope. Apr 12 18:34:46.153604 systemd-logind[1301]: New session 22 of user core. Apr 12 18:34:46.515588 sshd[3928]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:46.518683 systemd-logind[1301]: Session 22 logged out. Waiting for processes to exit. Apr 12 18:34:46.518817 systemd[1]: session-22.scope: Deactivated successfully. Apr 12 18:34:46.519683 systemd[1]: sshd@19-10.200.20.15:22-10.200.12.6:35882.service: Deactivated successfully. Apr 12 18:34:46.520961 systemd-logind[1301]: Removed session 22. Apr 12 18:34:46.583626 systemd[1]: Started sshd@20-10.200.20.15:22-10.200.12.6:35896.service. 
Apr 12 18:34:46.979607 sshd[3940]: Accepted publickey for core from 10.200.12.6 port 35896 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:46.982030 sshd[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:46.986951 systemd[1]: Started session-23.scope. Apr 12 18:34:46.987680 systemd-logind[1301]: New session 23 of user core. Apr 12 18:34:49.017911 kubelet[2402]: I0412 18:34:49.017876 2402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-bvlj7" podStartSLOduration=219.017838951 podCreationTimestamp="2024-04-12 18:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:31:36.670722656 +0000 UTC m=+40.300902983" watchObservedRunningTime="2024-04-12 18:34:49.017838951 +0000 UTC m=+232.648019278" Apr 12 18:34:49.034568 systemd[1]: run-containerd-runc-k8s.io-4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e-runc.3BOMIr.mount: Deactivated successfully. Apr 12 18:34:49.048931 env[1311]: time="2024-04-12T18:34:49.048890310Z" level=info msg="StopContainer for \"2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114\" with timeout 30 (s)" Apr 12 18:34:49.049815 env[1311]: time="2024-04-12T18:34:49.049782943Z" level=info msg="Stop container \"2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114\" with signal terminated" Apr 12 18:34:49.062731 env[1311]: time="2024-04-12T18:34:49.062672283Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 18:34:49.063076 systemd[1]: cri-containerd-2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114.scope: Deactivated successfully. 
Apr 12 18:34:49.072704 env[1311]: time="2024-04-12T18:34:49.072656405Z" level=info msg="StopContainer for \"4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e\" with timeout 2 (s)" Apr 12 18:34:49.073139 env[1311]: time="2024-04-12T18:34:49.073046482Z" level=info msg="Stop container \"4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e\" with signal terminated" Apr 12 18:34:49.079978 systemd-networkd[1460]: lxc_health: Link DOWN Apr 12 18:34:49.079984 systemd-networkd[1460]: lxc_health: Lost carrier Apr 12 18:34:49.088032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114-rootfs.mount: Deactivated successfully. Apr 12 18:34:49.104578 systemd[1]: cri-containerd-4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e.scope: Deactivated successfully. Apr 12 18:34:49.104903 systemd[1]: cri-containerd-4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e.scope: Consumed 6.283s CPU time. Apr 12 18:34:49.123959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e-rootfs.mount: Deactivated successfully. 
Apr 12 18:34:49.219209 env[1311]: time="2024-04-12T18:34:49.219167428Z" level=info msg="shim disconnected" id=2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114 Apr 12 18:34:49.219565 env[1311]: time="2024-04-12T18:34:49.219543265Z" level=warning msg="cleaning up after shim disconnected" id=2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114 namespace=k8s.io Apr 12 18:34:49.219648 env[1311]: time="2024-04-12T18:34:49.219635224Z" level=info msg="cleaning up dead shim" Apr 12 18:34:49.220009 env[1311]: time="2024-04-12T18:34:49.219977862Z" level=info msg="shim disconnected" id=4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e Apr 12 18:34:49.220222 env[1311]: time="2024-04-12T18:34:49.220202740Z" level=warning msg="cleaning up after shim disconnected" id=4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e namespace=k8s.io Apr 12 18:34:49.220307 env[1311]: time="2024-04-12T18:34:49.220293619Z" level=info msg="cleaning up dead shim" Apr 12 18:34:49.227213 env[1311]: time="2024-04-12T18:34:49.227178446Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:34:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4013 runtime=io.containerd.runc.v2\n" Apr 12 18:34:49.228354 env[1311]: time="2024-04-12T18:34:49.228327917Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:34:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4012 runtime=io.containerd.runc.v2\n" Apr 12 18:34:49.246139 env[1311]: time="2024-04-12T18:34:49.246095059Z" level=info msg="StopContainer for \"4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e\" returns successfully" Apr 12 18:34:49.246830 env[1311]: time="2024-04-12T18:34:49.246805773Z" level=info msg="StopPodSandbox for \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\"" Apr 12 18:34:49.246973 env[1311]: time="2024-04-12T18:34:49.246954412Z" level=info msg="Container to stop 
\"4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:34:49.247039 env[1311]: time="2024-04-12T18:34:49.247024012Z" level=info msg="Container to stop \"8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:34:49.247199 env[1311]: time="2024-04-12T18:34:49.247180490Z" level=info msg="Container to stop \"e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:34:49.247286 env[1311]: time="2024-04-12T18:34:49.247269770Z" level=info msg="Container to stop \"77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:34:49.247367 env[1311]: time="2024-04-12T18:34:49.247350209Z" level=info msg="Container to stop \"fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:34:49.248882 env[1311]: time="2024-04-12T18:34:49.248841077Z" level=info msg="StopContainer for \"2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114\" returns successfully" Apr 12 18:34:49.249391 env[1311]: time="2024-04-12T18:34:49.249356713Z" level=info msg="StopPodSandbox for \"686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f\"" Apr 12 18:34:49.249462 env[1311]: time="2024-04-12T18:34:49.249411233Z" level=info msg="Container to stop \"2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:34:49.253341 systemd[1]: cri-containerd-9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535.scope: Deactivated successfully. 
Apr 12 18:34:49.255964 systemd[1]: cri-containerd-686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f.scope: Deactivated successfully. Apr 12 18:34:49.291562 env[1311]: time="2024-04-12T18:34:49.291446307Z" level=info msg="shim disconnected" id=686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f Apr 12 18:34:49.291927 env[1311]: time="2024-04-12T18:34:49.291906183Z" level=warning msg="cleaning up after shim disconnected" id=686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f namespace=k8s.io Apr 12 18:34:49.292010 env[1311]: time="2024-04-12T18:34:49.291994422Z" level=info msg="cleaning up dead shim" Apr 12 18:34:49.292740 env[1311]: time="2024-04-12T18:34:49.291873503Z" level=info msg="shim disconnected" id=9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535 Apr 12 18:34:49.292740 env[1311]: time="2024-04-12T18:34:49.292740297Z" level=warning msg="cleaning up after shim disconnected" id=9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535 namespace=k8s.io Apr 12 18:34:49.292856 env[1311]: time="2024-04-12T18:34:49.292750536Z" level=info msg="cleaning up dead shim" Apr 12 18:34:49.301896 env[1311]: time="2024-04-12T18:34:49.301842866Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:34:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4079 runtime=io.containerd.runc.v2\n" Apr 12 18:34:49.302284 env[1311]: time="2024-04-12T18:34:49.302250343Z" level=info msg="TearDown network for sandbox \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\" successfully" Apr 12 18:34:49.302284 env[1311]: time="2024-04-12T18:34:49.302278542Z" level=info msg="StopPodSandbox for \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\" returns successfully" Apr 12 18:34:49.303814 env[1311]: time="2024-04-12T18:34:49.303784771Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:34:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4078 
runtime=io.containerd.runc.v2\n" Apr 12 18:34:49.304348 env[1311]: time="2024-04-12T18:34:49.304319767Z" level=info msg="TearDown network for sandbox \"686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f\" successfully" Apr 12 18:34:49.304448 env[1311]: time="2024-04-12T18:34:49.304430966Z" level=info msg="StopPodSandbox for \"686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f\" returns successfully" Apr 12 18:34:49.427604 kubelet[2402]: I0412 18:34:49.427506 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-host-proc-sys-kernel\") pod \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " Apr 12 18:34:49.427604 kubelet[2402]: I0412 18:34:49.427523 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9f85174c-79b1-4e8d-a606-b1c1283bf5be" (UID: "9f85174c-79b1-4e8d-a606-b1c1283bf5be"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:49.427604 kubelet[2402]: I0412 18:34:49.427570 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9f85174c-79b1-4e8d-a606-b1c1283bf5be" (UID: "9f85174c-79b1-4e8d-a606-b1c1283bf5be"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:49.427604 kubelet[2402]: I0412 18:34:49.427553 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-xtables-lock\") pod \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " Apr 12 18:34:49.427841 kubelet[2402]: I0412 18:34:49.427611 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cilium-config-path\") pod \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " Apr 12 18:34:49.427841 kubelet[2402]: I0412 18:34:49.427643 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-etc-cni-netd\") pod \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " Apr 12 18:34:49.427841 kubelet[2402]: I0412 18:34:49.427661 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-lib-modules\") pod \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " Apr 12 18:34:49.427841 kubelet[2402]: I0412 18:34:49.427678 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-hostproc\") pod \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " Apr 12 18:34:49.427841 kubelet[2402]: I0412 18:34:49.427708 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/9f85174c-79b1-4e8d-a606-b1c1283bf5be-clustermesh-secrets\") pod \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " Apr 12 18:34:49.427841 kubelet[2402]: I0412 18:34:49.427728 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-host-proc-sys-net\") pod \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " Apr 12 18:34:49.427977 kubelet[2402]: I0412 18:34:49.427748 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psprr\" (UniqueName: \"kubernetes.io/projected/9f85174c-79b1-4e8d-a606-b1c1283bf5be-kube-api-access-psprr\") pod \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " Apr 12 18:34:49.427977 kubelet[2402]: I0412 18:34:49.427766 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-bpf-maps\") pod \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " Apr 12 18:34:49.427977 kubelet[2402]: I0412 18:34:49.427792 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cni-path\") pod \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " Apr 12 18:34:49.427977 kubelet[2402]: I0412 18:34:49.427813 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f85174c-79b1-4e8d-a606-b1c1283bf5be-hubble-tls\") pod \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " Apr 12 18:34:49.427977 kubelet[2402]: I0412 18:34:49.427831 2402 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cilium-run\") pod \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " Apr 12 18:34:49.427977 kubelet[2402]: I0412 18:34:49.427865 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c173a3d-a374-4583-85f1-df6517e82948-cilium-config-path\") pod \"5c173a3d-a374-4583-85f1-df6517e82948\" (UID: \"5c173a3d-a374-4583-85f1-df6517e82948\") " Apr 12 18:34:49.428142 kubelet[2402]: I0412 18:34:49.427884 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cilium-cgroup\") pod \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\" (UID: \"9f85174c-79b1-4e8d-a606-b1c1283bf5be\") " Apr 12 18:34:49.428142 kubelet[2402]: I0412 18:34:49.427904 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpxdj\" (UniqueName: \"kubernetes.io/projected/5c173a3d-a374-4583-85f1-df6517e82948-kube-api-access-xpxdj\") pod \"5c173a3d-a374-4583-85f1-df6517e82948\" (UID: \"5c173a3d-a374-4583-85f1-df6517e82948\") " Apr 12 18:34:49.428142 kubelet[2402]: I0412 18:34:49.427948 2402 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-xtables-lock\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:49.428142 kubelet[2402]: I0412 18:34:49.427963 2402 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-host-proc-sys-kernel\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:49.428303 kubelet[2402]: I0412 18:34:49.428282 2402 
operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9f85174c-79b1-4e8d-a606-b1c1283bf5be" (UID: "9f85174c-79b1-4e8d-a606-b1c1283bf5be"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:49.428390 kubelet[2402]: I0412 18:34:49.428378 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cni-path" (OuterVolumeSpecName: "cni-path") pod "9f85174c-79b1-4e8d-a606-b1c1283bf5be" (UID: "9f85174c-79b1-4e8d-a606-b1c1283bf5be"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:49.428861 kubelet[2402]: I0412 18:34:49.428827 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9f85174c-79b1-4e8d-a606-b1c1283bf5be" (UID: "9f85174c-79b1-4e8d-a606-b1c1283bf5be"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:49.429811 kubelet[2402]: I0412 18:34:49.429779 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9f85174c-79b1-4e8d-a606-b1c1283bf5be" (UID: "9f85174c-79b1-4e8d-a606-b1c1283bf5be"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:49.431855 kubelet[2402]: I0412 18:34:49.431811 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9f85174c-79b1-4e8d-a606-b1c1283bf5be" (UID: "9f85174c-79b1-4e8d-a606-b1c1283bf5be"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:49.433179 kubelet[2402]: I0412 18:34:49.432233 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-hostproc" (OuterVolumeSpecName: "hostproc") pod "9f85174c-79b1-4e8d-a606-b1c1283bf5be" (UID: "9f85174c-79b1-4e8d-a606-b1c1283bf5be"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:49.433179 kubelet[2402]: I0412 18:34:49.433173 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9f85174c-79b1-4e8d-a606-b1c1283bf5be" (UID: "9f85174c-79b1-4e8d-a606-b1c1283bf5be"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:49.433340 kubelet[2402]: I0412 18:34:49.433323 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9f85174c-79b1-4e8d-a606-b1c1283bf5be" (UID: "9f85174c-79b1-4e8d-a606-b1c1283bf5be"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:49.433458 kubelet[2402]: I0412 18:34:49.433431 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f85174c-79b1-4e8d-a606-b1c1283bf5be-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9f85174c-79b1-4e8d-a606-b1c1283bf5be" (UID: "9f85174c-79b1-4e8d-a606-b1c1283bf5be"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:34:49.433536 kubelet[2402]: I0412 18:34:49.433509 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f85174c-79b1-4e8d-a606-b1c1283bf5be-kube-api-access-psprr" (OuterVolumeSpecName: "kube-api-access-psprr") pod "9f85174c-79b1-4e8d-a606-b1c1283bf5be" (UID: "9f85174c-79b1-4e8d-a606-b1c1283bf5be"). InnerVolumeSpecName "kube-api-access-psprr". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:34:49.434737 kubelet[2402]: I0412 18:34:49.434702 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c173a3d-a374-4583-85f1-df6517e82948-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5c173a3d-a374-4583-85f1-df6517e82948" (UID: "5c173a3d-a374-4583-85f1-df6517e82948"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:34:49.435346 kubelet[2402]: I0412 18:34:49.435289 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9f85174c-79b1-4e8d-a606-b1c1283bf5be" (UID: "9f85174c-79b1-4e8d-a606-b1c1283bf5be"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:34:49.435887 kubelet[2402]: I0412 18:34:49.435850 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c173a3d-a374-4583-85f1-df6517e82948-kube-api-access-xpxdj" (OuterVolumeSpecName: "kube-api-access-xpxdj") pod "5c173a3d-a374-4583-85f1-df6517e82948" (UID: "5c173a3d-a374-4583-85f1-df6517e82948"). InnerVolumeSpecName "kube-api-access-xpxdj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:34:49.437301 kubelet[2402]: I0412 18:34:49.437279 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f85174c-79b1-4e8d-a606-b1c1283bf5be-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9f85174c-79b1-4e8d-a606-b1c1283bf5be" (UID: "9f85174c-79b1-4e8d-a606-b1c1283bf5be"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:34:49.528367 kubelet[2402]: I0412 18:34:49.528335 2402 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c173a3d-a374-4583-85f1-df6517e82948-cilium-config-path\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:49.528585 kubelet[2402]: I0412 18:34:49.528573 2402 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f85174c-79b1-4e8d-a606-b1c1283bf5be-hubble-tls\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:49.528661 kubelet[2402]: I0412 18:34:49.528652 2402 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cilium-run\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:49.528727 kubelet[2402]: I0412 18:34:49.528719 2402 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cilium-cgroup\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:49.528788 kubelet[2402]: I0412 18:34:49.528780 2402 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xpxdj\" (UniqueName: \"kubernetes.io/projected/5c173a3d-a374-4583-85f1-df6517e82948-kube-api-access-xpxdj\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:49.528863 kubelet[2402]: I0412 18:34:49.528854 
2402 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cilium-config-path\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:49.528925 kubelet[2402]: I0412 18:34:49.528916 2402 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f85174c-79b1-4e8d-a606-b1c1283bf5be-clustermesh-secrets\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:49.528992 kubelet[2402]: I0412 18:34:49.528983 2402 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-etc-cni-netd\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:49.529054 kubelet[2402]: I0412 18:34:49.529046 2402 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-lib-modules\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:49.529148 kubelet[2402]: I0412 18:34:49.529140 2402 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-hostproc\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:49.529223 kubelet[2402]: I0412 18:34:49.529214 2402 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-host-proc-sys-net\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:49.529287 kubelet[2402]: I0412 18:34:49.529279 2402 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-psprr\" (UniqueName: \"kubernetes.io/projected/9f85174c-79b1-4e8d-a606-b1c1283bf5be-kube-api-access-psprr\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:49.529352 kubelet[2402]: I0412 
18:34:49.529343 2402 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-bpf-maps\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:49.529416 kubelet[2402]: I0412 18:34:49.529407 2402 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f85174c-79b1-4e8d-a606-b1c1283bf5be-cni-path\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:49.960746 kubelet[2402]: I0412 18:34:49.960714 2402 scope.go:117] "RemoveContainer" containerID="4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e" Apr 12 18:34:49.964760 systemd[1]: Removed slice kubepods-burstable-pod9f85174c_79b1_4e8d_a606_b1c1283bf5be.slice. Apr 12 18:34:49.964841 systemd[1]: kubepods-burstable-pod9f85174c_79b1_4e8d_a606_b1c1283bf5be.slice: Consumed 6.364s CPU time. Apr 12 18:34:49.967439 env[1311]: time="2024-04-12T18:34:49.967126860Z" level=info msg="RemoveContainer for \"4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e\"" Apr 12 18:34:49.971761 systemd[1]: Removed slice kubepods-besteffort-pod5c173a3d_a374_4583_85f1_df6517e82948.slice. 
Apr 12 18:34:49.980913 env[1311]: time="2024-04-12T18:34:49.980861154Z" level=info msg="RemoveContainer for \"4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e\" returns successfully" Apr 12 18:34:49.981230 kubelet[2402]: I0412 18:34:49.981207 2402 scope.go:117] "RemoveContainer" containerID="fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873" Apr 12 18:34:49.982996 env[1311]: time="2024-04-12T18:34:49.982951337Z" level=info msg="RemoveContainer for \"fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873\"" Apr 12 18:34:49.993901 env[1311]: time="2024-04-12T18:34:49.993825493Z" level=info msg="RemoveContainer for \"fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873\" returns successfully" Apr 12 18:34:49.994192 kubelet[2402]: I0412 18:34:49.994174 2402 scope.go:117] "RemoveContainer" containerID="e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2" Apr 12 18:34:49.995557 env[1311]: time="2024-04-12T18:34:49.995520800Z" level=info msg="RemoveContainer for \"e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2\"" Apr 12 18:34:50.003420 env[1311]: time="2024-04-12T18:34:50.003376339Z" level=info msg="RemoveContainer for \"e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2\" returns successfully" Apr 12 18:34:50.003728 kubelet[2402]: I0412 18:34:50.003701 2402 scope.go:117] "RemoveContainer" containerID="8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60" Apr 12 18:34:50.005065 env[1311]: time="2024-04-12T18:34:50.004802288Z" level=info msg="RemoveContainer for \"8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60\"" Apr 12 18:34:50.014400 env[1311]: time="2024-04-12T18:34:50.014303494Z" level=info msg="RemoveContainer for \"8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60\" returns successfully" Apr 12 18:34:50.014604 kubelet[2402]: I0412 18:34:50.014539 2402 scope.go:117] "RemoveContainer" 
containerID="77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1" Apr 12 18:34:50.015857 env[1311]: time="2024-04-12T18:34:50.015636684Z" level=info msg="RemoveContainer for \"77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1\"" Apr 12 18:34:50.025025 env[1311]: time="2024-04-12T18:34:50.024988132Z" level=info msg="RemoveContainer for \"77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1\" returns successfully" Apr 12 18:34:50.025357 kubelet[2402]: I0412 18:34:50.025339 2402 scope.go:117] "RemoveContainer" containerID="4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e" Apr 12 18:34:50.025892 env[1311]: time="2024-04-12T18:34:50.025824845Z" level=error msg="ContainerStatus for \"4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e\": not found" Apr 12 18:34:50.026083 kubelet[2402]: E0412 18:34:50.026047 2402 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e\": not found" containerID="4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e" Apr 12 18:34:50.026237 kubelet[2402]: I0412 18:34:50.026223 2402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e"} err="failed to get container status \"4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e09599455d25e932eb1ce6eed97d2a9832329c13b7b15cdf7dcb8b76996e95e\": not found" Apr 12 18:34:50.026303 kubelet[2402]: I0412 18:34:50.026293 2402 scope.go:117] "RemoveContainer" 
containerID="fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873" Apr 12 18:34:50.026529 env[1311]: time="2024-04-12T18:34:50.026482680Z" level=error msg="ContainerStatus for \"fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873\": not found" Apr 12 18:34:50.026664 kubelet[2402]: E0412 18:34:50.026651 2402 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873\": not found" containerID="fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873" Apr 12 18:34:50.026747 kubelet[2402]: I0412 18:34:50.026738 2402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873"} err="failed to get container status \"fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873\": rpc error: code = NotFound desc = an error occurred when try to find container \"fdc444f917cf5b32af358646096e483be264cc60cb13b454b7dc26544e514873\": not found" Apr 12 18:34:50.026805 kubelet[2402]: I0412 18:34:50.026796 2402 scope.go:117] "RemoveContainer" containerID="e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2" Apr 12 18:34:50.027027 env[1311]: time="2024-04-12T18:34:50.026983556Z" level=error msg="ContainerStatus for \"e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2\": not found" Apr 12 18:34:50.027178 kubelet[2402]: E0412 18:34:50.027166 2402 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2\": not found" containerID="e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2" Apr 12 18:34:50.027260 kubelet[2402]: I0412 18:34:50.027250 2402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2"} err="failed to get container status \"e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1f292e56d514208f76082f46ac815f5e7b78f80dc148861bed6a7b145661cb2\": not found" Apr 12 18:34:50.027322 kubelet[2402]: I0412 18:34:50.027313 2402 scope.go:117] "RemoveContainer" containerID="8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60" Apr 12 18:34:50.027554 env[1311]: time="2024-04-12T18:34:50.027512552Z" level=error msg="ContainerStatus for \"8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60\": not found" Apr 12 18:34:50.027684 kubelet[2402]: E0412 18:34:50.027672 2402 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60\": not found" containerID="8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60" Apr 12 18:34:50.027762 kubelet[2402]: I0412 18:34:50.027752 2402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60"} err="failed to get container status \"8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"8dc4fe712eafb3893be4f329e6982197585ed53d4af9a30bfb43f98dae20aa60\": not found" Apr 12 18:34:50.027815 kubelet[2402]: I0412 18:34:50.027807 2402 scope.go:117] "RemoveContainer" containerID="77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1" Apr 12 18:34:50.028040 env[1311]: time="2024-04-12T18:34:50.027999789Z" level=error msg="ContainerStatus for \"77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1\": not found" Apr 12 18:34:50.028198 kubelet[2402]: E0412 18:34:50.028186 2402 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1\": not found" containerID="77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1" Apr 12 18:34:50.028276 kubelet[2402]: I0412 18:34:50.028267 2402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1"} err="failed to get container status \"77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1\": rpc error: code = NotFound desc = an error occurred when try to find container \"77442961dcb73fea1c460f224e2f5040a29278e772a9fed635d181c521d90bc1\": not found" Apr 12 18:34:50.028333 kubelet[2402]: I0412 18:34:50.028324 2402 scope.go:117] "RemoveContainer" containerID="2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114" Apr 12 18:34:50.029746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535-rootfs.mount: Deactivated successfully. 
Apr 12 18:34:50.029838 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535-shm.mount: Deactivated successfully. Apr 12 18:34:50.029894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f-rootfs.mount: Deactivated successfully. Apr 12 18:34:50.029946 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f-shm.mount: Deactivated successfully. Apr 12 18:34:50.030008 systemd[1]: var-lib-kubelet-pods-9f85174c\x2d79b1\x2d4e8d\x2da606\x2db1c1283bf5be-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 18:34:50.030053 systemd[1]: var-lib-kubelet-pods-9f85174c\x2d79b1\x2d4e8d\x2da606\x2db1c1283bf5be-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:34:50.030129 systemd[1]: var-lib-kubelet-pods-5c173a3d\x2da374\x2d4583\x2d85f1\x2ddf6517e82948-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxpxdj.mount: Deactivated successfully. Apr 12 18:34:50.030176 systemd[1]: var-lib-kubelet-pods-9f85174c\x2d79b1\x2d4e8d\x2da606\x2db1c1283bf5be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpsprr.mount: Deactivated successfully. 
Apr 12 18:34:50.033905 env[1311]: time="2024-04-12T18:34:50.033869023Z" level=info msg="RemoveContainer for \"2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114\"" Apr 12 18:34:50.049780 env[1311]: time="2024-04-12T18:34:50.049663221Z" level=info msg="RemoveContainer for \"2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114\" returns successfully" Apr 12 18:34:50.050362 kubelet[2402]: I0412 18:34:50.050341 2402 scope.go:117] "RemoveContainer" containerID="2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114" Apr 12 18:34:50.050804 env[1311]: time="2024-04-12T18:34:50.050695533Z" level=error msg="ContainerStatus for \"2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114\": not found" Apr 12 18:34:50.050953 kubelet[2402]: E0412 18:34:50.050930 2402 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114\": not found" containerID="2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114" Apr 12 18:34:50.051006 kubelet[2402]: I0412 18:34:50.050969 2402 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114"} err="failed to get container status \"2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a143cd7e7baff3bce203cc8d41a76b53a650314b20a4902a29029da48733114\": not found" Apr 12 18:34:50.497414 kubelet[2402]: I0412 18:34:50.497381 2402 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5c173a3d-a374-4583-85f1-df6517e82948" 
path="/var/lib/kubelet/pods/5c173a3d-a374-4583-85f1-df6517e82948/volumes" Apr 12 18:34:50.497804 kubelet[2402]: I0412 18:34:50.497786 2402 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9f85174c-79b1-4e8d-a606-b1c1283bf5be" path="/var/lib/kubelet/pods/9f85174c-79b1-4e8d-a606-b1c1283bf5be/volumes" Apr 12 18:34:51.055020 sshd[3940]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:51.057297 systemd[1]: sshd@20-10.200.20.15:22-10.200.12.6:35896.service: Deactivated successfully. Apr 12 18:34:51.058017 systemd[1]: session-23.scope: Deactivated successfully. Apr 12 18:34:51.058225 systemd[1]: session-23.scope: Consumed 1.199s CPU time. Apr 12 18:34:51.058629 systemd-logind[1301]: Session 23 logged out. Waiting for processes to exit. Apr 12 18:34:51.059644 systemd-logind[1301]: Removed session 23. Apr 12 18:34:51.122566 systemd[1]: Started sshd@21-10.200.20.15:22-10.200.12.6:35904.service. Apr 12 18:34:51.519528 sshd[4114]: Accepted publickey for core from 10.200.12.6 port 35904 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:51.521162 sshd[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:51.524868 systemd-logind[1301]: New session 24 of user core. Apr 12 18:34:51.525362 systemd[1]: Started session-24.scope. 
Apr 12 18:34:51.615195 kubelet[2402]: E0412 18:34:51.615161 2402 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 12 18:34:53.473250 kubelet[2402]: I0412 18:34:53.473205 2402 topology_manager.go:215] "Topology Admit Handler" podUID="abe192ef-f07d-4a9b-b79f-f23256b00a25" podNamespace="kube-system" podName="cilium-4668m" Apr 12 18:34:53.473721 kubelet[2402]: E0412 18:34:53.473263 2402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f85174c-79b1-4e8d-a606-b1c1283bf5be" containerName="mount-cgroup" Apr 12 18:34:53.473721 kubelet[2402]: E0412 18:34:53.473274 2402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f85174c-79b1-4e8d-a606-b1c1283bf5be" containerName="cilium-agent" Apr 12 18:34:53.473721 kubelet[2402]: E0412 18:34:53.473281 2402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5c173a3d-a374-4583-85f1-df6517e82948" containerName="cilium-operator" Apr 12 18:34:53.473721 kubelet[2402]: E0412 18:34:53.473290 2402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f85174c-79b1-4e8d-a606-b1c1283bf5be" containerName="apply-sysctl-overwrites" Apr 12 18:34:53.473721 kubelet[2402]: E0412 18:34:53.473296 2402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f85174c-79b1-4e8d-a606-b1c1283bf5be" containerName="mount-bpf-fs" Apr 12 18:34:53.473721 kubelet[2402]: E0412 18:34:53.473303 2402 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f85174c-79b1-4e8d-a606-b1c1283bf5be" containerName="clean-cilium-state" Apr 12 18:34:53.473721 kubelet[2402]: I0412 18:34:53.473324 2402 memory_manager.go:346] "RemoveStaleState removing state" podUID="5c173a3d-a374-4583-85f1-df6517e82948" containerName="cilium-operator" Apr 12 18:34:53.473721 kubelet[2402]: I0412 18:34:53.473355 2402 memory_manager.go:346] "RemoveStaleState removing state" 
podUID="9f85174c-79b1-4e8d-a606-b1c1283bf5be" containerName="cilium-agent" Apr 12 18:34:53.478584 systemd[1]: Created slice kubepods-burstable-podabe192ef_f07d_4a9b_b79f_f23256b00a25.slice. Apr 12 18:34:53.486539 kubelet[2402]: W0412 18:34:53.486486 2402 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.3-a-58e6b5da18" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-58e6b5da18' and this object Apr 12 18:34:53.486716 kubelet[2402]: E0412 18:34:53.486703 2402 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.3-a-58e6b5da18" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-58e6b5da18' and this object Apr 12 18:34:53.486893 kubelet[2402]: W0412 18:34:53.486877 2402 reflector.go:535] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.3-a-58e6b5da18" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-58e6b5da18' and this object Apr 12 18:34:53.486978 kubelet[2402]: E0412 18:34:53.486966 2402 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.3-a-58e6b5da18" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-58e6b5da18' and this object Apr 12 18:34:53.497581 sshd[4114]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:53.500031 systemd-logind[1301]: Session 24 logged out. 
Waiting for processes to exit. Apr 12 18:34:53.500326 systemd[1]: sshd@21-10.200.20.15:22-10.200.12.6:35904.service: Deactivated successfully. Apr 12 18:34:53.501024 systemd[1]: session-24.scope: Deactivated successfully. Apr 12 18:34:53.501208 systemd[1]: session-24.scope: Consumed 1.636s CPU time. Apr 12 18:34:53.502166 systemd-logind[1301]: Removed session 24. Apr 12 18:34:53.570913 systemd[1]: Started sshd@22-10.200.20.15:22-10.200.12.6:35912.service. Apr 12 18:34:53.650573 kubelet[2402]: I0412 18:34:53.650534 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-host-proc-sys-net\") pod \"cilium-4668m\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " pod="kube-system/cilium-4668m" Apr 12 18:34:53.650748 kubelet[2402]: I0412 18:34:53.650737 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-cilium-run\") pod \"cilium-4668m\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " pod="kube-system/cilium-4668m" Apr 12 18:34:53.650862 kubelet[2402]: I0412 18:34:53.650853 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-cilium-cgroup\") pod \"cilium-4668m\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " pod="kube-system/cilium-4668m" Apr 12 18:34:53.650953 kubelet[2402]: I0412 18:34:53.650942 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-cni-path\") pod \"cilium-4668m\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " pod="kube-system/cilium-4668m" Apr 12 18:34:53.651039 kubelet[2402]: I0412 
18:34:53.651030 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/abe192ef-f07d-4a9b-b79f-f23256b00a25-clustermesh-secrets\") pod \"cilium-4668m\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " pod="kube-system/cilium-4668m" Apr 12 18:34:53.651150 kubelet[2402]: I0412 18:34:53.651140 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abe192ef-f07d-4a9b-b79f-f23256b00a25-cilium-config-path\") pod \"cilium-4668m\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " pod="kube-system/cilium-4668m" Apr 12 18:34:53.651242 kubelet[2402]: I0412 18:34:53.651232 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/abe192ef-f07d-4a9b-b79f-f23256b00a25-cilium-ipsec-secrets\") pod \"cilium-4668m\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " pod="kube-system/cilium-4668m" Apr 12 18:34:53.651340 kubelet[2402]: I0412 18:34:53.651328 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-etc-cni-netd\") pod \"cilium-4668m\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " pod="kube-system/cilium-4668m" Apr 12 18:34:53.651454 kubelet[2402]: I0412 18:34:53.651433 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-lib-modules\") pod \"cilium-4668m\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " pod="kube-system/cilium-4668m" Apr 12 18:34:53.651527 kubelet[2402]: I0412 18:34:53.651510 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-host-proc-sys-kernel\") pod \"cilium-4668m\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " pod="kube-system/cilium-4668m" Apr 12 18:34:53.651637 kubelet[2402]: I0412 18:34:53.651625 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-bpf-maps\") pod \"cilium-4668m\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " pod="kube-system/cilium-4668m" Apr 12 18:34:53.651735 kubelet[2402]: I0412 18:34:53.651724 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-hostproc\") pod \"cilium-4668m\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " pod="kube-system/cilium-4668m" Apr 12 18:34:53.651831 kubelet[2402]: I0412 18:34:53.651822 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-xtables-lock\") pod \"cilium-4668m\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " pod="kube-system/cilium-4668m" Apr 12 18:34:53.651922 kubelet[2402]: I0412 18:34:53.651913 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abe192ef-f07d-4a9b-b79f-f23256b00a25-hubble-tls\") pod \"cilium-4668m\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " pod="kube-system/cilium-4668m" Apr 12 18:34:53.652026 kubelet[2402]: I0412 18:34:53.652013 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwzjc\" (UniqueName: \"kubernetes.io/projected/abe192ef-f07d-4a9b-b79f-f23256b00a25-kube-api-access-bwzjc\") pod 
\"cilium-4668m\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " pod="kube-system/cilium-4668m" Apr 12 18:34:53.998918 sshd[4125]: Accepted publickey for core from 10.200.12.6 port 35912 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:54.000185 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:54.004664 systemd[1]: Started session-25.scope. Apr 12 18:34:54.005930 systemd-logind[1301]: New session 25 of user core. Apr 12 18:34:54.335096 kubelet[2402]: E0412 18:34:54.335032 2402 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[cilium-ipsec-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-4668m" podUID="abe192ef-f07d-4a9b-b79f-f23256b00a25" Apr 12 18:34:54.384215 sshd[4125]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:54.387000 systemd[1]: sshd@22-10.200.20.15:22-10.200.12.6:35912.service: Deactivated successfully. Apr 12 18:34:54.387747 systemd[1]: session-25.scope: Deactivated successfully. Apr 12 18:34:54.388298 systemd-logind[1301]: Session 25 logged out. Waiting for processes to exit. Apr 12 18:34:54.389153 systemd-logind[1301]: Removed session 25. Apr 12 18:34:54.450409 systemd[1]: Started sshd@23-10.200.20.15:22-10.200.12.6:35916.service. 
Apr 12 18:34:54.760111 kubelet[2402]: E0412 18:34:54.759564 2402 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Apr 12 18:34:54.760447 kubelet[2402]: E0412 18:34:54.760433 2402 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-4668m: failed to sync secret cache: timed out waiting for the condition Apr 12 18:34:54.760577 kubelet[2402]: E0412 18:34:54.760565 2402 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/abe192ef-f07d-4a9b-b79f-f23256b00a25-hubble-tls podName:abe192ef-f07d-4a9b-b79f-f23256b00a25 nodeName:}" failed. No retries permitted until 2024-04-12 18:34:55.260545423 +0000 UTC m=+238.890725710 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/abe192ef-f07d-4a9b-b79f-f23256b00a25-hubble-tls") pod "cilium-4668m" (UID: "abe192ef-f07d-4a9b-b79f-f23256b00a25") : failed to sync secret cache: timed out waiting for the condition Apr 12 18:34:54.844985 sshd[4139]: Accepted publickey for core from 10.200.12.6 port 35916 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:54.846689 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:54.850611 systemd-logind[1301]: New session 26 of user core. Apr 12 18:34:54.851358 systemd[1]: Started session-26.scope. 
Apr 12 18:34:55.067186 kubelet[2402]: I0412 18:34:55.067087 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-host-proc-sys-net\") pod \"abe192ef-f07d-4a9b-b79f-f23256b00a25\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " Apr 12 18:34:55.067363 kubelet[2402]: I0412 18:34:55.067352 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abe192ef-f07d-4a9b-b79f-f23256b00a25-cilium-config-path\") pod \"abe192ef-f07d-4a9b-b79f-f23256b00a25\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " Apr 12 18:34:55.067459 kubelet[2402]: I0412 18:34:55.067448 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-cilium-run\") pod \"abe192ef-f07d-4a9b-b79f-f23256b00a25\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " Apr 12 18:34:55.067526 kubelet[2402]: I0412 18:34:55.067518 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-hostproc\") pod \"abe192ef-f07d-4a9b-b79f-f23256b00a25\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " Apr 12 18:34:55.067590 kubelet[2402]: I0412 18:34:55.067582 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-cilium-cgroup\") pod \"abe192ef-f07d-4a9b-b79f-f23256b00a25\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " Apr 12 18:34:55.067661 kubelet[2402]: I0412 18:34:55.067651 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-lib-modules\") 
pod \"abe192ef-f07d-4a9b-b79f-f23256b00a25\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " Apr 12 18:34:55.067729 kubelet[2402]: I0412 18:34:55.067720 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-bpf-maps\") pod \"abe192ef-f07d-4a9b-b79f-f23256b00a25\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " Apr 12 18:34:55.067798 kubelet[2402]: I0412 18:34:55.067789 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-host-proc-sys-kernel\") pod \"abe192ef-f07d-4a9b-b79f-f23256b00a25\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " Apr 12 18:34:55.067865 kubelet[2402]: I0412 18:34:55.067856 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-cni-path\") pod \"abe192ef-f07d-4a9b-b79f-f23256b00a25\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " Apr 12 18:34:55.067925 kubelet[2402]: I0412 18:34:55.067917 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-etc-cni-netd\") pod \"abe192ef-f07d-4a9b-b79f-f23256b00a25\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " Apr 12 18:34:55.068000 kubelet[2402]: I0412 18:34:55.067992 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/abe192ef-f07d-4a9b-b79f-f23256b00a25-clustermesh-secrets\") pod \"abe192ef-f07d-4a9b-b79f-f23256b00a25\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " Apr 12 18:34:55.068090 kubelet[2402]: I0412 18:34:55.068079 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/abe192ef-f07d-4a9b-b79f-f23256b00a25-cilium-ipsec-secrets\") pod \"abe192ef-f07d-4a9b-b79f-f23256b00a25\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " Apr 12 18:34:55.068221 kubelet[2402]: I0412 18:34:55.068210 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-xtables-lock\") pod \"abe192ef-f07d-4a9b-b79f-f23256b00a25\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " Apr 12 18:34:55.068304 kubelet[2402]: I0412 18:34:55.068295 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwzjc\" (UniqueName: \"kubernetes.io/projected/abe192ef-f07d-4a9b-b79f-f23256b00a25-kube-api-access-bwzjc\") pod \"abe192ef-f07d-4a9b-b79f-f23256b00a25\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " Apr 12 18:34:55.068955 kubelet[2402]: I0412 18:34:55.067176 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "abe192ef-f07d-4a9b-b79f-f23256b00a25" (UID: "abe192ef-f07d-4a9b-b79f-f23256b00a25"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:55.070097 kubelet[2402]: I0412 18:34:55.069072 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "abe192ef-f07d-4a9b-b79f-f23256b00a25" (UID: "abe192ef-f07d-4a9b-b79f-f23256b00a25"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:55.070097 kubelet[2402]: I0412 18:34:55.069111 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "abe192ef-f07d-4a9b-b79f-f23256b00a25" (UID: "abe192ef-f07d-4a9b-b79f-f23256b00a25"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:55.070097 kubelet[2402]: I0412 18:34:55.069129 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-hostproc" (OuterVolumeSpecName: "hostproc") pod "abe192ef-f07d-4a9b-b79f-f23256b00a25" (UID: "abe192ef-f07d-4a9b-b79f-f23256b00a25"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:55.070097 kubelet[2402]: I0412 18:34:55.069145 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "abe192ef-f07d-4a9b-b79f-f23256b00a25" (UID: "abe192ef-f07d-4a9b-b79f-f23256b00a25"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:55.070097 kubelet[2402]: I0412 18:34:55.069161 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "abe192ef-f07d-4a9b-b79f-f23256b00a25" (UID: "abe192ef-f07d-4a9b-b79f-f23256b00a25"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:55.070648 kubelet[2402]: I0412 18:34:55.069178 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "abe192ef-f07d-4a9b-b79f-f23256b00a25" (UID: "abe192ef-f07d-4a9b-b79f-f23256b00a25"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:55.070648 kubelet[2402]: I0412 18:34:55.069426 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-cni-path" (OuterVolumeSpecName: "cni-path") pod "abe192ef-f07d-4a9b-b79f-f23256b00a25" (UID: "abe192ef-f07d-4a9b-b79f-f23256b00a25"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:55.070648 kubelet[2402]: I0412 18:34:55.069452 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "abe192ef-f07d-4a9b-b79f-f23256b00a25" (UID: "abe192ef-f07d-4a9b-b79f-f23256b00a25"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:55.070648 kubelet[2402]: I0412 18:34:55.069470 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "abe192ef-f07d-4a9b-b79f-f23256b00a25" (UID: "abe192ef-f07d-4a9b-b79f-f23256b00a25"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:55.073857 systemd[1]: var-lib-kubelet-pods-abe192ef\x2df07d\x2d4a9b\x2db79f\x2df23256b00a25-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbwzjc.mount: Deactivated successfully. 
Apr 12 18:34:55.075341 kubelet[2402]: I0412 18:34:55.075312 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abe192ef-f07d-4a9b-b79f-f23256b00a25-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "abe192ef-f07d-4a9b-b79f-f23256b00a25" (UID: "abe192ef-f07d-4a9b-b79f-f23256b00a25"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:34:55.076271 kubelet[2402]: I0412 18:34:55.076234 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe192ef-f07d-4a9b-b79f-f23256b00a25-kube-api-access-bwzjc" (OuterVolumeSpecName: "kube-api-access-bwzjc") pod "abe192ef-f07d-4a9b-b79f-f23256b00a25" (UID: "abe192ef-f07d-4a9b-b79f-f23256b00a25"). InnerVolumeSpecName "kube-api-access-bwzjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:34:55.079352 systemd[1]: var-lib-kubelet-pods-abe192ef\x2df07d\x2d4a9b\x2db79f\x2df23256b00a25-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Apr 12 18:34:55.079443 systemd[1]: var-lib-kubelet-pods-abe192ef\x2df07d\x2d4a9b\x2db79f\x2df23256b00a25-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:34:55.080595 kubelet[2402]: I0412 18:34:55.080552 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe192ef-f07d-4a9b-b79f-f23256b00a25-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "abe192ef-f07d-4a9b-b79f-f23256b00a25" (UID: "abe192ef-f07d-4a9b-b79f-f23256b00a25"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:34:55.080833 kubelet[2402]: I0412 18:34:55.080799 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe192ef-f07d-4a9b-b79f-f23256b00a25-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "abe192ef-f07d-4a9b-b79f-f23256b00a25" (UID: "abe192ef-f07d-4a9b-b79f-f23256b00a25"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:34:55.169205 kubelet[2402]: I0412 18:34:55.169171 2402 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-cilium-run\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:55.169397 kubelet[2402]: I0412 18:34:55.169386 2402 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-hostproc\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:55.169476 kubelet[2402]: I0412 18:34:55.169467 2402 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-cilium-cgroup\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:55.169544 kubelet[2402]: I0412 18:34:55.169535 2402 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-lib-modules\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:55.169610 kubelet[2402]: I0412 18:34:55.169601 2402 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-bpf-maps\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:55.169682 kubelet[2402]: I0412 18:34:55.169673 2402 reconciler_common.go:300] "Volume 
detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-host-proc-sys-kernel\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:55.169745 kubelet[2402]: I0412 18:34:55.169737 2402 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-cni-path\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:55.169815 kubelet[2402]: I0412 18:34:55.169806 2402 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-etc-cni-netd\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:55.169880 kubelet[2402]: I0412 18:34:55.169868 2402 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/abe192ef-f07d-4a9b-b79f-f23256b00a25-clustermesh-secrets\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:55.169949 kubelet[2402]: I0412 18:34:55.169940 2402 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/abe192ef-f07d-4a9b-b79f-f23256b00a25-cilium-ipsec-secrets\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:55.170012 kubelet[2402]: I0412 18:34:55.170004 2402 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-xtables-lock\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:55.170087 kubelet[2402]: I0412 18:34:55.170078 2402 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bwzjc\" (UniqueName: \"kubernetes.io/projected/abe192ef-f07d-4a9b-b79f-f23256b00a25-kube-api-access-bwzjc\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:55.170159 kubelet[2402]: I0412 18:34:55.170150 2402 
reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/abe192ef-f07d-4a9b-b79f-f23256b00a25-host-proc-sys-net\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:55.170231 kubelet[2402]: I0412 18:34:55.170222 2402 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abe192ef-f07d-4a9b-b79f-f23256b00a25-cilium-config-path\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:55.371260 kubelet[2402]: I0412 18:34:55.371230 2402 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abe192ef-f07d-4a9b-b79f-f23256b00a25-hubble-tls\") pod \"abe192ef-f07d-4a9b-b79f-f23256b00a25\" (UID: \"abe192ef-f07d-4a9b-b79f-f23256b00a25\") " Apr 12 18:34:55.375590 kubelet[2402]: I0412 18:34:55.375559 2402 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe192ef-f07d-4a9b-b79f-f23256b00a25-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "abe192ef-f07d-4a9b-b79f-f23256b00a25" (UID: "abe192ef-f07d-4a9b-b79f-f23256b00a25"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:34:55.376264 systemd[1]: var-lib-kubelet-pods-abe192ef\x2df07d\x2d4a9b\x2db79f\x2df23256b00a25-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 18:34:55.472671 kubelet[2402]: I0412 18:34:55.472634 2402 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abe192ef-f07d-4a9b-b79f-f23256b00a25-hubble-tls\") on node \"ci-3510.3.3-a-58e6b5da18\" DevicePath \"\"" Apr 12 18:34:55.982404 systemd[1]: Removed slice kubepods-burstable-podabe192ef_f07d_4a9b_b79f_f23256b00a25.slice. 
Apr 12 18:34:56.011946 kubelet[2402]: I0412 18:34:56.011906 2402 topology_manager.go:215] "Topology Admit Handler" podUID="5f23f287-3108-40d6-a49d-077d2c6405b3" podNamespace="kube-system" podName="cilium-58fcw" Apr 12 18:34:56.016840 systemd[1]: Created slice kubepods-burstable-pod5f23f287_3108_40d6_a49d_077d2c6405b3.slice. Apr 12 18:34:56.077129 kubelet[2402]: I0412 18:34:56.077052 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5f23f287-3108-40d6-a49d-077d2c6405b3-cilium-run\") pod \"cilium-58fcw\" (UID: \"5f23f287-3108-40d6-a49d-077d2c6405b3\") " pod="kube-system/cilium-58fcw" Apr 12 18:34:56.077367 kubelet[2402]: I0412 18:34:56.077354 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f23f287-3108-40d6-a49d-077d2c6405b3-lib-modules\") pod \"cilium-58fcw\" (UID: \"5f23f287-3108-40d6-a49d-077d2c6405b3\") " pod="kube-system/cilium-58fcw" Apr 12 18:34:56.077475 kubelet[2402]: I0412 18:34:56.077464 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5f23f287-3108-40d6-a49d-077d2c6405b3-clustermesh-secrets\") pod \"cilium-58fcw\" (UID: \"5f23f287-3108-40d6-a49d-077d2c6405b3\") " pod="kube-system/cilium-58fcw" Apr 12 18:34:56.077572 kubelet[2402]: I0412 18:34:56.077562 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl672\" (UniqueName: \"kubernetes.io/projected/5f23f287-3108-40d6-a49d-077d2c6405b3-kube-api-access-sl672\") pod \"cilium-58fcw\" (UID: \"5f23f287-3108-40d6-a49d-077d2c6405b3\") " pod="kube-system/cilium-58fcw" Apr 12 18:34:56.077666 kubelet[2402]: I0412 18:34:56.077657 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5f23f287-3108-40d6-a49d-077d2c6405b3-host-proc-sys-net\") pod \"cilium-58fcw\" (UID: \"5f23f287-3108-40d6-a49d-077d2c6405b3\") " pod="kube-system/cilium-58fcw" Apr 12 18:34:56.077764 kubelet[2402]: I0412 18:34:56.077754 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5f23f287-3108-40d6-a49d-077d2c6405b3-hubble-tls\") pod \"cilium-58fcw\" (UID: \"5f23f287-3108-40d6-a49d-077d2c6405b3\") " pod="kube-system/cilium-58fcw" Apr 12 18:34:56.077865 kubelet[2402]: I0412 18:34:56.077855 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5f23f287-3108-40d6-a49d-077d2c6405b3-etc-cni-netd\") pod \"cilium-58fcw\" (UID: \"5f23f287-3108-40d6-a49d-077d2c6405b3\") " pod="kube-system/cilium-58fcw" Apr 12 18:34:56.077953 kubelet[2402]: I0412 18:34:56.077944 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f23f287-3108-40d6-a49d-077d2c6405b3-xtables-lock\") pod \"cilium-58fcw\" (UID: \"5f23f287-3108-40d6-a49d-077d2c6405b3\") " pod="kube-system/cilium-58fcw" Apr 12 18:34:56.078055 kubelet[2402]: I0412 18:34:56.078045 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f23f287-3108-40d6-a49d-077d2c6405b3-cilium-config-path\") pod \"cilium-58fcw\" (UID: \"5f23f287-3108-40d6-a49d-077d2c6405b3\") " pod="kube-system/cilium-58fcw" Apr 12 18:34:56.078189 kubelet[2402]: I0412 18:34:56.078168 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5f23f287-3108-40d6-a49d-077d2c6405b3-cni-path\") pod \"cilium-58fcw\" 
(UID: \"5f23f287-3108-40d6-a49d-077d2c6405b3\") " pod="kube-system/cilium-58fcw" Apr 12 18:34:56.078293 kubelet[2402]: I0412 18:34:56.078283 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5f23f287-3108-40d6-a49d-077d2c6405b3-cilium-ipsec-secrets\") pod \"cilium-58fcw\" (UID: \"5f23f287-3108-40d6-a49d-077d2c6405b3\") " pod="kube-system/cilium-58fcw" Apr 12 18:34:56.078394 kubelet[2402]: I0412 18:34:56.078384 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5f23f287-3108-40d6-a49d-077d2c6405b3-bpf-maps\") pod \"cilium-58fcw\" (UID: \"5f23f287-3108-40d6-a49d-077d2c6405b3\") " pod="kube-system/cilium-58fcw" Apr 12 18:34:56.078513 kubelet[2402]: I0412 18:34:56.078483 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5f23f287-3108-40d6-a49d-077d2c6405b3-host-proc-sys-kernel\") pod \"cilium-58fcw\" (UID: \"5f23f287-3108-40d6-a49d-077d2c6405b3\") " pod="kube-system/cilium-58fcw" Apr 12 18:34:56.078556 kubelet[2402]: I0412 18:34:56.078542 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5f23f287-3108-40d6-a49d-077d2c6405b3-hostproc\") pod \"cilium-58fcw\" (UID: \"5f23f287-3108-40d6-a49d-077d2c6405b3\") " pod="kube-system/cilium-58fcw" Apr 12 18:34:56.078587 kubelet[2402]: I0412 18:34:56.078564 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5f23f287-3108-40d6-a49d-077d2c6405b3-cilium-cgroup\") pod \"cilium-58fcw\" (UID: \"5f23f287-3108-40d6-a49d-077d2c6405b3\") " pod="kube-system/cilium-58fcw" Apr 12 18:34:56.319749 env[1311]: 
time="2024-04-12T18:34:56.319638999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-58fcw,Uid:5f23f287-3108-40d6-a49d-077d2c6405b3,Namespace:kube-system,Attempt:0,}" Apr 12 18:34:56.356107 env[1311]: time="2024-04-12T18:34:56.355183610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:34:56.356107 env[1311]: time="2024-04-12T18:34:56.355230810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:34:56.356107 env[1311]: time="2024-04-12T18:34:56.355241370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:34:56.356107 env[1311]: time="2024-04-12T18:34:56.355374289Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/55245901a9b817d7aaca6c269dcfec2221054c7510d8ba60c6e643418590b0c9 pid=4164 runtime=io.containerd.runc.v2 Apr 12 18:34:56.365973 systemd[1]: Started cri-containerd-55245901a9b817d7aaca6c269dcfec2221054c7510d8ba60c6e643418590b0c9.scope. 
Apr 12 18:34:56.389795 env[1311]: time="2024-04-12T18:34:56.389750669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-58fcw,Uid:5f23f287-3108-40d6-a49d-077d2c6405b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"55245901a9b817d7aaca6c269dcfec2221054c7510d8ba60c6e643418590b0c9\"" Apr 12 18:34:56.392766 env[1311]: time="2024-04-12T18:34:56.392665327Z" level=info msg="CreateContainer within sandbox \"55245901a9b817d7aaca6c269dcfec2221054c7510d8ba60c6e643418590b0c9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:34:56.441490 env[1311]: time="2024-04-12T18:34:56.441434679Z" level=info msg="CreateContainer within sandbox \"55245901a9b817d7aaca6c269dcfec2221054c7510d8ba60c6e643418590b0c9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0c9959181e7f9fe84bdbe2504154ccef9a28a4f26bf49abd7f0c0b6a6fd81b23\"" Apr 12 18:34:56.442229 env[1311]: time="2024-04-12T18:34:56.442187553Z" level=info msg="StartContainer for \"0c9959181e7f9fe84bdbe2504154ccef9a28a4f26bf49abd7f0c0b6a6fd81b23\"" Apr 12 18:34:56.457918 systemd[1]: Started cri-containerd-0c9959181e7f9fe84bdbe2504154ccef9a28a4f26bf49abd7f0c0b6a6fd81b23.scope. Apr 12 18:34:56.501108 env[1311]: time="2024-04-12T18:34:56.501052028Z" level=info msg="StartContainer for \"0c9959181e7f9fe84bdbe2504154ccef9a28a4f26bf49abd7f0c0b6a6fd81b23\" returns successfully" Apr 12 18:34:56.502881 kubelet[2402]: I0412 18:34:56.502684 2402 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="abe192ef-f07d-4a9b-b79f-f23256b00a25" path="/var/lib/kubelet/pods/abe192ef-f07d-4a9b-b79f-f23256b00a25/volumes" Apr 12 18:34:56.508338 systemd[1]: cri-containerd-0c9959181e7f9fe84bdbe2504154ccef9a28a4f26bf49abd7f0c0b6a6fd81b23.scope: Deactivated successfully. 
Apr 12 18:34:56.511966 env[1311]: time="2024-04-12T18:34:56.511795627Z" level=info msg="StopPodSandbox for \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\""
Apr 12 18:34:56.511966 env[1311]: time="2024-04-12T18:34:56.511881466Z" level=info msg="TearDown network for sandbox \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\" successfully"
Apr 12 18:34:56.511966 env[1311]: time="2024-04-12T18:34:56.511912786Z" level=info msg="StopPodSandbox for \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\" returns successfully"
Apr 12 18:34:56.512555 env[1311]: time="2024-04-12T18:34:56.512526181Z" level=info msg="RemovePodSandbox for \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\""
Apr 12 18:34:56.512636 env[1311]: time="2024-04-12T18:34:56.512558861Z" level=info msg="Forcibly stopping sandbox \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\""
Apr 12 18:34:56.512636 env[1311]: time="2024-04-12T18:34:56.512615581Z" level=info msg="TearDown network for sandbox \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\" successfully"
Apr 12 18:34:56.537021 env[1311]: time="2024-04-12T18:34:56.536962437Z" level=info msg="RemovePodSandbox \"9c9dc4eee6872de1bc07b2c69b0f10bbb0cee51dd0edcc4ec70eb27c65c25535\" returns successfully"
Apr 12 18:34:56.537933 env[1311]: time="2024-04-12T18:34:56.537907590Z" level=info msg="StopPodSandbox for \"686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f\""
Apr 12 18:34:56.538174 env[1311]: time="2024-04-12T18:34:56.538133508Z" level=info msg="TearDown network for sandbox \"686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f\" successfully"
Apr 12 18:34:56.538243 env[1311]: time="2024-04-12T18:34:56.538227507Z" level=info msg="StopPodSandbox for \"686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f\" returns successfully"
Apr 12 18:34:56.539393 env[1311]: time="2024-04-12T18:34:56.539361899Z" level=info msg="RemovePodSandbox for \"686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f\""
Apr 12 18:34:56.539457 env[1311]: time="2024-04-12T18:34:56.539392698Z" level=info msg="Forcibly stopping sandbox \"686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f\""
Apr 12 18:34:56.539503 env[1311]: time="2024-04-12T18:34:56.539460018Z" level=info msg="TearDown network for sandbox \"686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f\" successfully"
Apr 12 18:34:56.563951 env[1311]: time="2024-04-12T18:34:56.563887833Z" level=info msg="RemovePodSandbox \"686c3f174cbcdf0189e7eb4179f9e9273d4b27a93b4ee60cd1bf3a0db4fb9a3f\" returns successfully"
Apr 12 18:34:56.616190 kubelet[2402]: E0412 18:34:56.616165 2402 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 18:34:56.635924 env[1311]: time="2024-04-12T18:34:56.635870089Z" level=info msg="shim disconnected" id=0c9959181e7f9fe84bdbe2504154ccef9a28a4f26bf49abd7f0c0b6a6fd81b23
Apr 12 18:34:56.635924 env[1311]: time="2024-04-12T18:34:56.635917889Z" level=warning msg="cleaning up after shim disconnected" id=0c9959181e7f9fe84bdbe2504154ccef9a28a4f26bf49abd7f0c0b6a6fd81b23 namespace=k8s.io
Apr 12 18:34:56.635924 env[1311]: time="2024-04-12T18:34:56.635930249Z" level=info msg="cleaning up dead shim"
Apr 12 18:34:56.642359 env[1311]: time="2024-04-12T18:34:56.642313161Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:34:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4251 runtime=io.containerd.runc.v2\n"
Apr 12 18:34:56.984580 env[1311]: time="2024-04-12T18:34:56.984464135Z" level=info msg="CreateContainer within sandbox \"55245901a9b817d7aaca6c269dcfec2221054c7510d8ba60c6e643418590b0c9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 12 18:34:57.056860 env[1311]: time="2024-04-12T18:34:57.056799190Z" level=info msg="CreateContainer within sandbox \"55245901a9b817d7aaca6c269dcfec2221054c7510d8ba60c6e643418590b0c9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"59e95d91dc3945ff7631fbd2f8bcd3fba0fa05bedf920b9390029eb530504f23\""
Apr 12 18:34:57.057832 env[1311]: time="2024-04-12T18:34:57.057792262Z" level=info msg="StartContainer for \"59e95d91dc3945ff7631fbd2f8bcd3fba0fa05bedf920b9390029eb530504f23\""
Apr 12 18:34:57.071627 systemd[1]: Started cri-containerd-59e95d91dc3945ff7631fbd2f8bcd3fba0fa05bedf920b9390029eb530504f23.scope.
Apr 12 18:34:57.114664 env[1311]: time="2024-04-12T18:34:57.114606835Z" level=info msg="StartContainer for \"59e95d91dc3945ff7631fbd2f8bcd3fba0fa05bedf920b9390029eb530504f23\" returns successfully"
Apr 12 18:34:57.119849 systemd[1]: cri-containerd-59e95d91dc3945ff7631fbd2f8bcd3fba0fa05bedf920b9390029eb530504f23.scope: Deactivated successfully.
Apr 12 18:34:57.159101 env[1311]: time="2024-04-12T18:34:57.158502304Z" level=info msg="shim disconnected" id=59e95d91dc3945ff7631fbd2f8bcd3fba0fa05bedf920b9390029eb530504f23
Apr 12 18:34:57.159101 env[1311]: time="2024-04-12T18:34:57.158569424Z" level=warning msg="cleaning up after shim disconnected" id=59e95d91dc3945ff7631fbd2f8bcd3fba0fa05bedf920b9390029eb530504f23 namespace=k8s.io
Apr 12 18:34:57.159101 env[1311]: time="2024-04-12T18:34:57.158579424Z" level=info msg="cleaning up dead shim"
Apr 12 18:34:57.173413 env[1311]: time="2024-04-12T18:34:57.173366472Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:34:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4313 runtime=io.containerd.runc.v2\n"
Apr 12 18:34:57.987577 env[1311]: time="2024-04-12T18:34:57.987494423Z" level=info msg="CreateContainer within sandbox \"55245901a9b817d7aaca6c269dcfec2221054c7510d8ba60c6e643418590b0c9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 12 18:34:58.035568 env[1311]: time="2024-04-12T18:34:58.035513102Z" level=info msg="CreateContainer within sandbox \"55245901a9b817d7aaca6c269dcfec2221054c7510d8ba60c6e643418590b0c9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dee80400315df365c1120c30d5867b2085fd37bb85c2800d57a055a653d48c1b\""
Apr 12 18:34:58.036321 env[1311]: time="2024-04-12T18:34:58.036281336Z" level=info msg="StartContainer for \"dee80400315df365c1120c30d5867b2085fd37bb85c2800d57a055a653d48c1b\""
Apr 12 18:34:58.059130 systemd[1]: Started cri-containerd-dee80400315df365c1120c30d5867b2085fd37bb85c2800d57a055a653d48c1b.scope.
Apr 12 18:34:58.090043 systemd[1]: cri-containerd-dee80400315df365c1120c30d5867b2085fd37bb85c2800d57a055a653d48c1b.scope: Deactivated successfully.
Apr 12 18:34:58.095138 env[1311]: time="2024-04-12T18:34:58.095088015Z" level=info msg="StartContainer for \"dee80400315df365c1120c30d5867b2085fd37bb85c2800d57a055a653d48c1b\" returns successfully"
Apr 12 18:34:58.125511 env[1311]: time="2024-04-12T18:34:58.125461547Z" level=info msg="shim disconnected" id=dee80400315df365c1120c30d5867b2085fd37bb85c2800d57a055a653d48c1b
Apr 12 18:34:58.125511 env[1311]: time="2024-04-12T18:34:58.125509587Z" level=warning msg="cleaning up after shim disconnected" id=dee80400315df365c1120c30d5867b2085fd37bb85c2800d57a055a653d48c1b namespace=k8s.io
Apr 12 18:34:58.125511 env[1311]: time="2024-04-12T18:34:58.125518827Z" level=info msg="cleaning up dead shim"
Apr 12 18:34:58.132999 env[1311]: time="2024-04-12T18:34:58.132921651Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:34:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4371 runtime=io.containerd.runc.v2\n"
Apr 12 18:34:58.183812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dee80400315df365c1120c30d5867b2085fd37bb85c2800d57a055a653d48c1b-rootfs.mount: Deactivated successfully.
Apr 12 18:34:58.990784 env[1311]: time="2024-04-12T18:34:58.990741256Z" level=info msg="CreateContainer within sandbox \"55245901a9b817d7aaca6c269dcfec2221054c7510d8ba60c6e643418590b0c9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 12 18:34:59.023856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2732288219.mount: Deactivated successfully.
Apr 12 18:34:59.054921 env[1311]: time="2024-04-12T18:34:59.054874857Z" level=info msg="CreateContainer within sandbox \"55245901a9b817d7aaca6c269dcfec2221054c7510d8ba60c6e643418590b0c9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"81e0982b892172a94e53ba772b63a080f56d7b8b3c80013c7e1f36cce3aa35b1\""
Apr 12 18:34:59.055687 env[1311]: time="2024-04-12T18:34:59.055650771Z" level=info msg="StartContainer for \"81e0982b892172a94e53ba772b63a080f56d7b8b3c80013c7e1f36cce3aa35b1\""
Apr 12 18:34:59.075016 systemd[1]: Started cri-containerd-81e0982b892172a94e53ba772b63a080f56d7b8b3c80013c7e1f36cce3aa35b1.scope.
Apr 12 18:34:59.106471 systemd[1]: cri-containerd-81e0982b892172a94e53ba772b63a080f56d7b8b3c80013c7e1f36cce3aa35b1.scope: Deactivated successfully.
Apr 12 18:34:59.113269 env[1311]: time="2024-04-12T18:34:59.113224621Z" level=info msg="StartContainer for \"81e0982b892172a94e53ba772b63a080f56d7b8b3c80013c7e1f36cce3aa35b1\" returns successfully"
Apr 12 18:34:59.143266 env[1311]: time="2024-04-12T18:34:59.143215676Z" level=info msg="shim disconnected" id=81e0982b892172a94e53ba772b63a080f56d7b8b3c80013c7e1f36cce3aa35b1
Apr 12 18:34:59.143266 env[1311]: time="2024-04-12T18:34:59.143263956Z" level=warning msg="cleaning up after shim disconnected" id=81e0982b892172a94e53ba772b63a080f56d7b8b3c80013c7e1f36cce3aa35b1 namespace=k8s.io
Apr 12 18:34:59.143266 env[1311]: time="2024-04-12T18:34:59.143273596Z" level=info msg="cleaning up dead shim"
Apr 12 18:34:59.150339 env[1311]: time="2024-04-12T18:34:59.150292584Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:34:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4428 runtime=io.containerd.runc.v2\n"
Apr 12 18:34:59.183879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81e0982b892172a94e53ba772b63a080f56d7b8b3c80013c7e1f36cce3aa35b1-rootfs.mount: Deactivated successfully.
Apr 12 18:34:59.995694 env[1311]: time="2024-04-12T18:34:59.995639065Z" level=info msg="CreateContainer within sandbox \"55245901a9b817d7aaca6c269dcfec2221054c7510d8ba60c6e643418590b0c9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 12 18:35:00.076986 env[1311]: time="2024-04-12T18:35:00.076920019Z" level=info msg="CreateContainer within sandbox \"55245901a9b817d7aaca6c269dcfec2221054c7510d8ba60c6e643418590b0c9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1780fdd481f391840d465c1ef4b90a7e4d6ddc35c4aa6e02ac0749a581cdbac2\""
Apr 12 18:35:00.077600 env[1311]: time="2024-04-12T18:35:00.077577815Z" level=info msg="StartContainer for \"1780fdd481f391840d465c1ef4b90a7e4d6ddc35c4aa6e02ac0749a581cdbac2\""
Apr 12 18:35:00.097076 systemd[1]: Started cri-containerd-1780fdd481f391840d465c1ef4b90a7e4d6ddc35c4aa6e02ac0749a581cdbac2.scope.
Apr 12 18:35:00.130095 env[1311]: time="2024-04-12T18:35:00.130026944Z" level=info msg="StartContainer for \"1780fdd481f391840d465c1ef4b90a7e4d6ddc35c4aa6e02ac0749a581cdbac2\" returns successfully"
Apr 12 18:35:00.627089 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Apr 12 18:35:01.011833 kubelet[2402]: I0412 18:35:01.011718 2402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-58fcw" podStartSLOduration=5.011683498 podCreationTimestamp="2024-04-12 18:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:35:01.011135662 +0000 UTC m=+244.641315949" watchObservedRunningTime="2024-04-12 18:35:01.011683498 +0000 UTC m=+244.641863825"
Apr 12 18:35:01.022002 kubelet[2402]: I0412 18:35:01.021965 2402 setters.go:552] "Node became not ready" node="ci-3510.3.3-a-58e6b5da18" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-04-12T18:35:01Z","lastTransitionTime":"2024-04-12T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 12 18:35:01.270274 systemd[1]: run-containerd-runc-k8s.io-1780fdd481f391840d465c1ef4b90a7e4d6ddc35c4aa6e02ac0749a581cdbac2-runc.FEeVc4.mount: Deactivated successfully.
Apr 12 18:35:03.072930 systemd-networkd[1460]: lxc_health: Link UP
Apr 12 18:35:03.103190 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Apr 12 18:35:03.103381 systemd-networkd[1460]: lxc_health: Gained carrier
Apr 12 18:35:04.492172 systemd-networkd[1460]: lxc_health: Gained IPv6LL
Apr 12 18:35:05.684942 systemd[1]: run-containerd-runc-k8s.io-1780fdd481f391840d465c1ef4b90a7e4d6ddc35c4aa6e02ac0749a581cdbac2-runc.tLZHPg.mount: Deactivated successfully.
Apr 12 18:35:07.826375 systemd[1]: run-containerd-runc-k8s.io-1780fdd481f391840d465c1ef4b90a7e4d6ddc35c4aa6e02ac0749a581cdbac2-runc.NiOlCU.mount: Deactivated successfully.
Apr 12 18:35:09.991737 systemd[1]: run-containerd-runc-k8s.io-1780fdd481f391840d465c1ef4b90a7e4d6ddc35c4aa6e02ac0749a581cdbac2-runc.CKd79l.mount: Deactivated successfully.
Apr 12 18:35:10.108828 sshd[4139]: pam_unix(sshd:session): session closed for user core
Apr 12 18:35:10.111798 systemd[1]: sshd@23-10.200.20.15:22-10.200.12.6:35916.service: Deactivated successfully.
Apr 12 18:35:10.112528 systemd[1]: session-26.scope: Deactivated successfully.
Apr 12 18:35:10.113105 systemd-logind[1301]: Session 26 logged out. Waiting for processes to exit.
Apr 12 18:35:10.114040 systemd-logind[1301]: Removed session 26.
Apr 12 18:35:34.916762 update_engine[1303]: I0412 18:35:34.916721 1303 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 12 18:35:34.916762 update_engine[1303]: I0412 18:35:34.916759 1303 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 12 18:35:34.917265 update_engine[1303]: I0412 18:35:34.916899 1303 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 12 18:35:34.917300 update_engine[1303]: I0412 18:35:34.917271 1303 omaha_request_params.cc:62] Current group set to lts
Apr 12 18:35:34.917486 update_engine[1303]: I0412 18:35:34.917364 1303 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 12 18:35:34.917486 update_engine[1303]: I0412 18:35:34.917373 1303 update_attempter.cc:643] Scheduling an action processor start.
Apr 12 18:35:34.917486 update_engine[1303]: I0412 18:35:34.917388 1303 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 12 18:35:34.917486 update_engine[1303]: I0412 18:35:34.917409 1303 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 12 18:35:34.917808 locksmithd[1392]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 12 18:35:34.918101 update_engine[1303]: I0412 18:35:34.918078 1303 omaha_request_action.cc:270] Posting an Omaha request to disabled
Apr 12 18:35:34.918101 update_engine[1303]: I0412 18:35:34.918097 1303 omaha_request_action.cc:271] Request:
Apr 12 18:35:34.918101 update_engine[1303]:
Apr 12 18:35:34.918101 update_engine[1303]:
Apr 12 18:35:34.918101 update_engine[1303]:
Apr 12 18:35:34.918101 update_engine[1303]:
Apr 12 18:35:34.918101 update_engine[1303]:
Apr 12 18:35:34.918101 update_engine[1303]:
Apr 12 18:35:34.918101 update_engine[1303]:
Apr 12 18:35:34.918101 update_engine[1303]:
Apr 12 18:35:34.918101 update_engine[1303]: I0412 18:35:34.918102 1303 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 12 18:35:34.918909 update_engine[1303]: I0412 18:35:34.918885 1303 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 12 18:35:34.919113 update_engine[1303]: I0412 18:35:34.919095 1303 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 12 18:35:34.932295 update_engine[1303]: E0412 18:35:34.932266 1303 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 12 18:35:34.932397 update_engine[1303]: I0412 18:35:34.932361 1303 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 12 18:35:44.903136 update_engine[1303]: I0412 18:35:44.903095 1303 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 12 18:35:44.903451 update_engine[1303]: I0412 18:35:44.903268 1303 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 12 18:35:44.903482 update_engine[1303]: I0412 18:35:44.903460 1303 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 12 18:35:45.009939 update_engine[1303]: E0412 18:35:45.009903 1303 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 12 18:35:45.010139 update_engine[1303]: I0412 18:35:45.010018 1303 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 12 18:35:54.899477 update_engine[1303]: I0412 18:35:54.899431 1303 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 12 18:35:54.899795 update_engine[1303]: I0412 18:35:54.899622 1303 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 12 18:35:54.899826 update_engine[1303]: I0412 18:35:54.899814 1303 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 12 18:35:54.942404 update_engine[1303]: E0412 18:35:54.942368 1303 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 12 18:35:54.942545 update_engine[1303]: I0412 18:35:54.942481 1303 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 12 18:35:56.558430 kubelet[2402]: W0412 18:35:56.558403 2402 machine.go:65] Cannot read vendor id correctly, set empty.
Apr 12 18:35:57.801798 systemd[1]: cri-containerd-62a034db008e62630ad506d111b9495d47a504f841bc75949d58f0be2b5ba872.scope: Deactivated successfully.
Apr 12 18:35:57.802104 systemd[1]: cri-containerd-62a034db008e62630ad506d111b9495d47a504f841bc75949d58f0be2b5ba872.scope: Consumed 3.333s CPU time.
Apr 12 18:35:57.820627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62a034db008e62630ad506d111b9495d47a504f841bc75949d58f0be2b5ba872-rootfs.mount: Deactivated successfully.
Apr 12 18:35:57.841969 env[1311]: time="2024-04-12T18:35:57.841920648Z" level=info msg="shim disconnected" id=62a034db008e62630ad506d111b9495d47a504f841bc75949d58f0be2b5ba872
Apr 12 18:35:57.841969 env[1311]: time="2024-04-12T18:35:57.841967528Z" level=warning msg="cleaning up after shim disconnected" id=62a034db008e62630ad506d111b9495d47a504f841bc75949d58f0be2b5ba872 namespace=k8s.io
Apr 12 18:35:57.842373 env[1311]: time="2024-04-12T18:35:57.841977688Z" level=info msg="cleaning up dead shim"
Apr 12 18:35:57.848682 env[1311]: time="2024-04-12T18:35:57.848637685Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:35:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5132 runtime=io.containerd.runc.v2\n"
Apr 12 18:35:58.092587 kubelet[2402]: I0412 18:35:58.092520 2402 scope.go:117] "RemoveContainer" containerID="62a034db008e62630ad506d111b9495d47a504f841bc75949d58f0be2b5ba872"
Apr 12 18:35:58.096207 env[1311]: time="2024-04-12T18:35:58.096173647Z" level=info msg="CreateContainer within sandbox \"6a17a35b31216656b8116fa9d81c8ed067214fcb4310da17f898f0d47b58ea52\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 12 18:35:58.124679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170703677.mount: Deactivated successfully.
Apr 12 18:35:58.126553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3903743210.mount: Deactivated successfully.
Apr 12 18:35:58.142526 env[1311]: time="2024-04-12T18:35:58.142481949Z" level=info msg="CreateContainer within sandbox \"6a17a35b31216656b8116fa9d81c8ed067214fcb4310da17f898f0d47b58ea52\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1d82adc652f081dc038947fc4a96a270e7e505eb9c4e4574eb7457eda29338f1\""
Apr 12 18:35:58.143189 env[1311]: time="2024-04-12T18:35:58.143166144Z" level=info msg="StartContainer for \"1d82adc652f081dc038947fc4a96a270e7e505eb9c4e4574eb7457eda29338f1\""
Apr 12 18:35:58.160006 systemd[1]: Started cri-containerd-1d82adc652f081dc038947fc4a96a270e7e505eb9c4e4574eb7457eda29338f1.scope.
Apr 12 18:35:58.205745 env[1311]: time="2024-04-12T18:35:58.205654101Z" level=info msg="StartContainer for \"1d82adc652f081dc038947fc4a96a270e7e505eb9c4e4574eb7457eda29338f1\" returns successfully"
Apr 12 18:35:58.256235 kubelet[2402]: E0412 18:35:58.255954 2402 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.15:40746->10.200.20.20:2379: read: connection timed out"
Apr 12 18:35:58.258546 systemd[1]: cri-containerd-ff323ed27b79d4a6a330635d551034000ac4396f47e846366e5d529cb27e31e6.scope: Deactivated successfully.
Apr 12 18:35:58.258836 systemd[1]: cri-containerd-ff323ed27b79d4a6a330635d551034000ac4396f47e846366e5d529cb27e31e6.scope: Consumed 2.010s CPU time.
Apr 12 18:35:58.300202 env[1311]: time="2024-04-12T18:35:58.300019853Z" level=info msg="shim disconnected" id=ff323ed27b79d4a6a330635d551034000ac4396f47e846366e5d529cb27e31e6
Apr 12 18:35:58.300546 env[1311]: time="2024-04-12T18:35:58.300524250Z" level=warning msg="cleaning up after shim disconnected" id=ff323ed27b79d4a6a330635d551034000ac4396f47e846366e5d529cb27e31e6 namespace=k8s.io
Apr 12 18:35:58.300632 env[1311]: time="2024-04-12T18:35:58.300618849Z" level=info msg="cleaning up dead shim"
Apr 12 18:35:58.307510 env[1311]: time="2024-04-12T18:35:58.307474405Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:35:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5193 runtime=io.containerd.runc.v2\n"
Apr 12 18:35:58.819425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff323ed27b79d4a6a330635d551034000ac4396f47e846366e5d529cb27e31e6-rootfs.mount: Deactivated successfully.
Apr 12 18:35:59.106496 kubelet[2402]: I0412 18:35:59.106464 2402 scope.go:117] "RemoveContainer" containerID="ff323ed27b79d4a6a330635d551034000ac4396f47e846366e5d529cb27e31e6"
Apr 12 18:35:59.108254 env[1311]: time="2024-04-12T18:35:59.108202844Z" level=info msg="CreateContainer within sandbox \"66349284332c8ab88b8003a4e960e02d0405fa34da8adc6b53d031f744b3f00a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 12 18:35:59.201430 env[1311]: time="2024-04-12T18:35:59.201377645Z" level=info msg="CreateContainer within sandbox \"66349284332c8ab88b8003a4e960e02d0405fa34da8adc6b53d031f744b3f00a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4b4e4c202ce867d3f006bd60b971b861401ee1d06024a0d22fadb6208f6389da\""
Apr 12 18:35:59.202182 env[1311]: time="2024-04-12T18:35:59.202143320Z" level=info msg="StartContainer for \"4b4e4c202ce867d3f006bd60b971b861401ee1d06024a0d22fadb6208f6389da\""
Apr 12 18:35:59.218652 systemd[1]: Started cri-containerd-4b4e4c202ce867d3f006bd60b971b861401ee1d06024a0d22fadb6208f6389da.scope.
Apr 12 18:35:59.270764 env[1311]: time="2024-04-12T18:35:59.270704358Z" level=info msg="StartContainer for \"4b4e4c202ce867d3f006bd60b971b861401ee1d06024a0d22fadb6208f6389da\" returns successfully"
Apr 12 18:36:01.871952 kubelet[2402]: E0412 18:36:01.871609 2402 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.3-a-58e6b5da18.17c59c2d592d5ed6", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.3-a-58e6b5da18", UID:"b7e5aa51624dd658e12a88a954a0cfe1", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.3-a-58e6b5da18"}, FirstTimestamp:time.Date(2024, time.April, 12, 18, 35, 51, 867948758, time.Local), LastTimestamp:time.Date(2024, time.April, 12, 18, 35, 51, 867948758, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.3-a-58e6b5da18"}': 'rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout' (will not retry!)
Apr 12 18:36:04.895926 update_engine[1303]: I0412 18:36:04.895879 1303 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 12 18:36:04.896267 update_engine[1303]: I0412 18:36:04.896086 1303 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 12 18:36:04.896383 update_engine[1303]: I0412 18:36:04.896359 1303 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 12 18:36:04.930925 update_engine[1303]: E0412 18:36:04.930891 1303 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 12 18:36:04.931109 update_engine[1303]: I0412 18:36:04.930990 1303 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 12 18:36:04.931109 update_engine[1303]: I0412 18:36:04.930996 1303 omaha_request_action.cc:621] Omaha request response:
Apr 12 18:36:04.931109 update_engine[1303]: E0412 18:36:04.931082 1303 omaha_request_action.cc:640] Omaha request network transfer failed.
Apr 12 18:36:04.931109 update_engine[1303]: I0412 18:36:04.931095 1303 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 12 18:36:04.931109 update_engine[1303]: I0412 18:36:04.931098 1303 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 12 18:36:04.931109 update_engine[1303]: I0412 18:36:04.931102 1303 update_attempter.cc:306] Processing Done.
Apr 12 18:36:04.931270 update_engine[1303]: E0412 18:36:04.931118 1303 update_attempter.cc:619] Update failed.
Apr 12 18:36:04.931270 update_engine[1303]: I0412 18:36:04.931120 1303 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 12 18:36:04.931270 update_engine[1303]: I0412 18:36:04.931123 1303 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 12 18:36:04.931270 update_engine[1303]: I0412 18:36:04.931128 1303 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 12 18:36:04.931270 update_engine[1303]: I0412 18:36:04.931188 1303 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 12 18:36:04.931270 update_engine[1303]: I0412 18:36:04.931205 1303 omaha_request_action.cc:270] Posting an Omaha request to disabled
Apr 12 18:36:04.931270 update_engine[1303]: I0412 18:36:04.931208 1303 omaha_request_action.cc:271] Request:
Apr 12 18:36:04.931270 update_engine[1303]:
Apr 12 18:36:04.931270 update_engine[1303]:
Apr 12 18:36:04.931270 update_engine[1303]:
Apr 12 18:36:04.931270 update_engine[1303]:
Apr 12 18:36:04.931270 update_engine[1303]:
Apr 12 18:36:04.931270 update_engine[1303]:
Apr 12 18:36:04.931270 update_engine[1303]: I0412 18:36:04.931212 1303 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 12 18:36:04.931590 update_engine[1303]: I0412 18:36:04.931316 1303 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 12 18:36:04.931590 update_engine[1303]: I0412 18:36:04.931473 1303 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 12 18:36:04.931826 locksmithd[1392]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 12 18:36:04.942046 update_engine[1303]: E0412 18:36:04.942011 1303 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 12 18:36:04.942196 update_engine[1303]: I0412 18:36:04.942173 1303 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 12 18:36:04.942196 update_engine[1303]: I0412 18:36:04.942193 1303 omaha_request_action.cc:621] Omaha request response:
Apr 12 18:36:04.942267 update_engine[1303]: I0412 18:36:04.942199 1303 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 12 18:36:04.942267 update_engine[1303]: I0412 18:36:04.942202 1303 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 12 18:36:04.942267 update_engine[1303]: I0412 18:36:04.942205 1303 update_attempter.cc:306] Processing Done.
Apr 12 18:36:04.942267 update_engine[1303]: I0412 18:36:04.942209 1303 update_attempter.cc:310] Error event sent.
Apr 12 18:36:04.942267 update_engine[1303]: I0412 18:36:04.942216 1303 update_check_scheduler.cc:74] Next update check in 41m9s
Apr 12 18:36:04.942526 locksmithd[1392]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 12 18:36:08.257168 kubelet[2402]: E0412 18:36:08.257135 2402 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-58e6b5da18?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 12 18:36:08.557264 kubelet[2402]: I0412 18:36:08.557152 2402 status_manager.go:853] "Failed to get status for pod" podUID="b369eb2026983d89ec0c8393237a0a6e" pod="kube-system/kube-controller-manager-ci-3510.3.3-a-58e6b5da18" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.15:40636->10.200.20.20:2379: read: connection timed out"
Apr 12 18:36:18.258485 kubelet[2402]: E0412 18:36:18.258446 2402 request.go:1116] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
Apr 12 18:36:18.258968 kubelet[2402]: E0412 18:36:18.258506 2402 controller.go:193] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)"
Apr 12 18:36:28.258995 kubelet[2402]: E0412 18:36:28.258924 2402 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-58e6b5da18?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 12 18:36:35.874202 kubelet[2402]: E0412 18:36:35.874095 2402 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.3-a-58e6b5da18.17c59c2d592d5ed6", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.3-a-58e6b5da18", UID:"b7e5aa51624dd658e12a88a954a0cfe1", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.3-a-58e6b5da18"}, FirstTimestamp:time.Date(2024, time.April, 12, 18, 35, 51, 867948758, time.Local), LastTimestamp:time.Date(2024, time.April, 12, 18, 35, 55, 876256114, time.Local), Count:2, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.3-a-58e6b5da18"}': 'Timeout: request did not complete within requested timeout - context deadline exceeded' (will not retry!)
Apr 12 18:36:38.260080 kubelet[2402]: E0412 18:36:38.260031 2402 request.go:1116] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
Apr 12 18:36:38.260492 kubelet[2402]: E0412 18:36:38.260474 2402 controller.go:193] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)"
Apr 12 18:36:38.260569 kubelet[2402]: I0412 18:36:38.260559 2402 controller.go:116] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Apr 12 18:36:48.261084 kubelet[2402]: E0412 18:36:48.261040 2402 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-58e6b5da18?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Apr 12 18:36:50.695725 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.696031 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.719267 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.719509 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.734963 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.735229 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.757657 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.757880 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.779341 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.779593 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.801497 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.801749 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.816328 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.816551 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.830827 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.831039 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.854205 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.854445 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.854560 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.869971 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.870198 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.885707 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.893445 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.893618 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.909754 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.910009 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.934430 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.934720 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.934837 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.950854 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.951094 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.958870 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.974372 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.974602 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.990591 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:50.990830 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:51.006551 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:51.006796 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:51.014134 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:51.029455 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:51.029663 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:51.044961 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:51.045223 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:51.062127 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:51.062402 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:51.078344 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:51.078584 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:51.086682 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:51.102482 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:51.102741 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:51.118864 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12
18:36:51.119116 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.135050 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.135332 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.151203 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.151404 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.166816 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.167001 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.191083 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.191376 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.191669 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.199412 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.214929 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.215213 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.233217 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.233465 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 
0x4 hv 0xc0000001 Apr 12 18:36:51.253310 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.253579 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.253694 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.261267 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.276537 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.276786 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.291769 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.292077 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.307823 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.308097 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.325163 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.325423 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.334209 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.342009 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.357197 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 
0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.357432 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.380805 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.381124 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.381249 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.388465 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.411576 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.411849 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.411961 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.419341 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.435586 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.435799 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.451657 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.451933 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.467247 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.467466 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.475147 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.490786 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.491047 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.507399 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.507686 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.515527 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.531222 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.531471 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.547140 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.547396 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.563219 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.563480 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.573083 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.587095 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 
18:36:51.587333 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.602867 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.603185 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.619260 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.627367 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.627559 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.643140 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.643389 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.657684 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.657904 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.674261 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.674495 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.691134 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.691382 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.707695 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 
0x4 hv 0xc0000001 Apr 12 18:36:51.707960 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.724139 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.724390 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.741327 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.741568 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.757562 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.757831 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.774200 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.774483 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.790916 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.791203 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.806287 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.806523 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.830999 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.831325 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 
0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.831438 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.839141 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.859955 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.860248 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.876295 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.876547 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.900293 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.900588 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.900703 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.908322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.925288 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.925559 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.941104 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.941377 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.956780 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.957005 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.972465 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.972712 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.988174 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:51.988468 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.011974 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.012267 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.028227 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.028474 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.044484 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.044591 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.058084 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.058330 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.068093 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 
18:36:52.068322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.084828 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.085079 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.101912 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.102113 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.119303 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.119561 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.135614 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.135851 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.153718 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.153927 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.170694 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.170898 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.186517 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.186773 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 
0x4 hv 0xc0000001 Apr 12 18:36:52.208096 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.208379 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.217985 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.218221 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.234412 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.234658 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.249294 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.249484 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.264529 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.264731 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.288352 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.288626 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.304988 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.337704 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.362291 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 
0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.362409 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.362501 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.362594 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.362685 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.362771 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.362859 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.372860 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.373146 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.380614 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.388326 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.408136 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.408392 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.416051 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.423548 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.442713 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.442977 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.450052 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.457681 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.473521 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.473779 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.480964 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.505165 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.505452 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.505589 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.522993 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.523262 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.530578 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.549870 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.550084 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 
18:36:52.554290 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.570807 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.571045 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.578669 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.603124 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.603352 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.603444 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.620161 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.620443 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.629034 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.654354 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.654643 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.654749 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.671203 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:36:52.671414 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 
0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.679220 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.695623 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.695840 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.711937 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.712189 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.727750 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.728007 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.743973 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.744216 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.751796 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.771222 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.771467 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.776344 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.792694 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.792915 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.800380 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.820081 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.820334 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.824076 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.840766 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.841019 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.848828 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.858594 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.866521 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.874598 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.890692 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.890914 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.898501 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.915099 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.915383 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.923026 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.939571 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.939817 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.947654 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.964419 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.964681 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.972649 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.997920 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.998176 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:52.998295 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.014782 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.015039 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.022744 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.047887 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.048148 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.048282 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.071834 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.072126 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.078553 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.095435 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.095707 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.103856 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.120742 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.121109 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.128602 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.144301 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.144530 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.152252 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.175630 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#72 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.175870 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#71 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Apr 12 18:36:53.175966 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001