Mar 13 11:38:59.076658 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Mar 13 11:38:59.076677 kernel: Linux version 6.12.76-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Mar 13 08:28:58 -00 2026
Mar 13 11:38:59.076684 kernel: KASLR enabled
Mar 13 11:38:59.076688 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 13 11:38:59.076691 kernel: printk: legacy bootconsole [pl11] enabled
Mar 13 11:38:59.076697 kernel: efi: EFI v2.7 by EDK II
Mar 13 11:38:59.076702 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Mar 13 11:38:59.076706 kernel: random: crng init done
Mar 13 11:38:59.076710 kernel: secureboot: Secure boot disabled
Mar 13 11:38:59.076714 kernel: ACPI: Early table checksum verification disabled
Mar 13 11:38:59.076718 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Mar 13 11:38:59.076722 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 11:38:59.076726 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 11:38:59.076730 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 13 11:38:59.076736 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 11:38:59.076740 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 11:38:59.076744 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 11:38:59.076749 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 11:38:59.076753 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 11:38:59.076758 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 11:38:59.076762 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 13 11:38:59.076767 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 11:38:59.076771 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 13 11:38:59.076775 kernel: ACPI: Use ACPI SPCR as default console: Yes
Mar 13 11:38:59.076779 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Mar 13 11:38:59.076784 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Mar 13 11:38:59.076788 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Mar 13 11:38:59.076792 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Mar 13 11:38:59.076796 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Mar 13 11:38:59.076801 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Mar 13 11:38:59.076806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Mar 13 11:38:59.076810 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Mar 13 11:38:59.076814 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Mar 13 11:38:59.076818 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Mar 13 11:38:59.076822 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Mar 13 11:38:59.076826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Mar 13 11:38:59.076831 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Mar 13 11:38:59.076835 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Mar 13 11:38:59.076839 kernel: Zone ranges:
Mar 13 11:38:59.076843 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 13 11:38:59.076850 kernel: DMA32 empty
Mar 13 11:38:59.076855 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 13 11:38:59.076859 kernel: Device empty
Mar 13 11:38:59.076864 kernel: Movable zone start for each node
Mar 13 11:38:59.076868 kernel: Early memory node ranges
Mar 13 11:38:59.076872 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 13 11:38:59.076878 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Mar 13 11:38:59.076882 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Mar 13 11:38:59.076886 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Mar 13 11:38:59.076891 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Mar 13 11:38:59.076895 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Mar 13 11:38:59.076899 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 13 11:38:59.076904 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 13 11:38:59.076908 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 13 11:38:59.076913 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Mar 13 11:38:59.076917 kernel: psci: probing for conduit method from ACPI.
Mar 13 11:38:59.076922 kernel: psci: PSCIv1.3 detected in firmware.
Mar 13 11:38:59.076926 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 13 11:38:59.076931 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 13 11:38:59.076936 kernel: psci: SMC Calling Convention v1.4
Mar 13 11:38:59.076940 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 13 11:38:59.076944 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 13 11:38:59.076949 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Mar 13 11:38:59.076953 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Mar 13 11:38:59.076958 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 13 11:38:59.076962 kernel: Detected PIPT I-cache on CPU0
Mar 13 11:38:59.076967 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Mar 13 11:38:59.076971 kernel: CPU features: detected: GIC system register CPU interface
Mar 13 11:38:59.076975 kernel: CPU features: detected: Spectre-v4
Mar 13 11:38:59.076980 kernel: CPU features: detected: Spectre-BHB
Mar 13 11:38:59.076985 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 13 11:38:59.076989 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 13 11:38:59.076994 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Mar 13 11:38:59.076998 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 13 11:38:59.077003 kernel: alternatives: applying boot alternatives
Mar 13 11:38:59.077008 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f9e16a8c138b52ec7ccfdd04775db8b28ecb5c24f5f82b153afd0ea86ce994e7
Mar 13 11:38:59.077013 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 13 11:38:59.077017 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 13 11:38:59.077021 kernel: Fallback order for Node 0: 0
Mar 13 11:38:59.077026 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Mar 13 11:38:59.077031 kernel: Policy zone: Normal
Mar 13 11:38:59.077035 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 11:38:59.077040 kernel: software IO TLB: area num 2.
Mar 13 11:38:59.077044 kernel: software IO TLB: mapped [mem 0x00000000358f0000-0x00000000398f0000] (64MB)
Mar 13 11:38:59.077048 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 13 11:38:59.077053 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 11:38:59.077058 kernel: rcu: RCU event tracing is enabled.
Mar 13 11:38:59.077062 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 13 11:38:59.077067 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 11:38:59.077071 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 11:38:59.077076 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 11:38:59.077080 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 13 11:38:59.077085 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 11:38:59.077090 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 11:38:59.077094 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 13 11:38:59.077099 kernel: GICv3: 960 SPIs implemented
Mar 13 11:38:59.077103 kernel: GICv3: 0 Extended SPIs implemented
Mar 13 11:38:59.077107 kernel: Root IRQ handler: gic_handle_irq
Mar 13 11:38:59.077112 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 13 11:38:59.077116 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Mar 13 11:38:59.077121 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 13 11:38:59.077125 kernel: ITS: No ITS available, not enabling LPIs
Mar 13 11:38:59.077130 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 11:38:59.077135 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Mar 13 11:38:59.077139 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 13 11:38:59.077144 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Mar 13 11:38:59.077148 kernel: Console: colour dummy device 80x25
Mar 13 11:38:59.077153 kernel: printk: legacy console [tty1] enabled
Mar 13 11:38:59.077158 kernel: ACPI: Core revision 20240827
Mar 13 11:38:59.077162 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Mar 13 11:38:59.077167 kernel: pid_max: default: 32768 minimum: 301
Mar 13 11:38:59.077171 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 13 11:38:59.077176 kernel: landlock: Up and running.
Mar 13 11:38:59.077182 kernel: SELinux: Initializing.
Mar 13 11:38:59.077186 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 11:38:59.077191 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 11:38:59.077196 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Mar 13 11:38:59.077200 kernel: Hyper-V: Host Build 10.0.26102.1212-1-0
Mar 13 11:38:59.077208 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 13 11:38:59.077214 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 11:38:59.077219 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 11:38:59.077224 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 13 11:38:59.077228 kernel: Remapping and enabling EFI services.
Mar 13 11:38:59.077233 kernel: smp: Bringing up secondary CPUs ...
Mar 13 11:38:59.077238 kernel: Detected PIPT I-cache on CPU1
Mar 13 11:38:59.077244 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 13 11:38:59.077249 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Mar 13 11:38:59.077253 kernel: smp: Brought up 1 node, 2 CPUs
Mar 13 11:38:59.077258 kernel: SMP: Total of 2 processors activated.
Mar 13 11:38:59.077263 kernel: CPU: All CPU(s) started at EL1
Mar 13 11:38:59.077268 kernel: CPU features: detected: 32-bit EL0 Support
Mar 13 11:38:59.077273 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 13 11:38:59.077278 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 13 11:38:59.077283 kernel: CPU features: detected: Common not Private translations
Mar 13 11:38:59.077288 kernel: CPU features: detected: CRC32 instructions
Mar 13 11:38:59.077292 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Mar 13 11:38:59.077297 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 13 11:38:59.077302 kernel: CPU features: detected: LSE atomic instructions
Mar 13 11:38:59.077307 kernel: CPU features: detected: Privileged Access Never
Mar 13 11:38:59.077312 kernel: CPU features: detected: Speculation barrier (SB)
Mar 13 11:38:59.077317 kernel: CPU features: detected: TLB range maintenance instructions
Mar 13 11:38:59.077322 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 13 11:38:59.077327 kernel: CPU features: detected: Scalable Vector Extension
Mar 13 11:38:59.077331 kernel: alternatives: applying system-wide alternatives
Mar 13 11:38:59.077336 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Mar 13 11:38:59.077341 kernel: SVE: maximum available vector length 16 bytes per vector
Mar 13 11:38:59.077346 kernel: SVE: default vector length 16 bytes per vector
Mar 13 11:38:59.077351 kernel: Memory: 3952764K/4194160K available (11200K kernel code, 2458K rwdata, 9092K rodata, 39552K init, 1038K bss, 220208K reserved, 16384K cma-reserved)
Mar 13 11:38:59.077356 kernel: devtmpfs: initialized
Mar 13 11:38:59.077361 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 11:38:59.077366 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 13 11:38:59.077371 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 13 11:38:59.077375 kernel: 0 pages in range for non-PLT usage
Mar 13 11:38:59.077380 kernel: 508384 pages in range for PLT usage
Mar 13 11:38:59.077385 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 11:38:59.077390 kernel: SMBIOS 3.1.0 present.
Mar 13 11:38:59.077395 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Mar 13 11:38:59.077400 kernel: DMI: Memory slots populated: 2/2
Mar 13 11:38:59.077405 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 11:38:59.077410 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 13 11:38:59.077415 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 13 11:38:59.077419 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 13 11:38:59.077424 kernel: audit: initializing netlink subsys (disabled)
Mar 13 11:38:59.077429 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Mar 13 11:38:59.077434 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 11:38:59.077439 kernel: cpuidle: using governor menu
Mar 13 11:38:59.077444 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 13 11:38:59.077449 kernel: ASID allocator initialised with 32768 entries
Mar 13 11:38:59.077453 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 11:38:59.077458 kernel: Serial: AMBA PL011 UART driver
Mar 13 11:38:59.077463 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 11:38:59.077468 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 11:38:59.077473 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 13 11:38:59.077477 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 13 11:38:59.077483 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 11:38:59.077488 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 11:38:59.077493 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 13 11:38:59.077497 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 13 11:38:59.077502 kernel: ACPI: Added _OSI(Module Device)
Mar 13 11:38:59.077507 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 11:38:59.077512 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 11:38:59.077516 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 13 11:38:59.077521 kernel: ACPI: Interpreter enabled
Mar 13 11:38:59.077527 kernel: ACPI: Using GIC for interrupt routing
Mar 13 11:38:59.077531 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 13 11:38:59.077536 kernel: printk: legacy console [ttyAMA0] enabled
Mar 13 11:38:59.077541 kernel: printk: legacy bootconsole [pl11] disabled
Mar 13 11:38:59.077546 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 13 11:38:59.077550 kernel: ACPI: CPU0 has been hot-added
Mar 13 11:38:59.077555 kernel: ACPI: CPU1 has been hot-added
Mar 13 11:38:59.077560 kernel: iommu: Default domain type: Translated
Mar 13 11:38:59.077565 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 13 11:38:59.077570 kernel: efivars: Registered efivars operations
Mar 13 11:38:59.077575 kernel: vgaarb: loaded
Mar 13 11:38:59.077580 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 13 11:38:59.077584 kernel: VFS: Disk quotas dquot_6.6.0
Mar 13 11:38:59.077589 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 13 11:38:59.077594 kernel: pnp: PnP ACPI init
Mar 13 11:38:59.077598 kernel: pnp: PnP ACPI: found 0 devices
Mar 13 11:38:59.077603 kernel: NET: Registered PF_INET protocol family
Mar 13 11:38:59.077608 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 13 11:38:59.077613 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 13 11:38:59.077663 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 13 11:38:59.077668 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 13 11:38:59.077673 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 13 11:38:59.077678 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 13 11:38:59.077683 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 11:38:59.077688 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 11:38:59.077692 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 13 11:38:59.077697 kernel: PCI: CLS 0 bytes, default 64
Mar 13 11:38:59.077702 kernel: kvm [1]: HYP mode not available
Mar 13 11:38:59.077708 kernel: Initialise system trusted keyrings
Mar 13 11:38:59.077713 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 13 11:38:59.077717 kernel: Key type asymmetric registered
Mar 13 11:38:59.077722 kernel: Asymmetric key parser 'x509' registered
Mar 13 11:38:59.077727 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 13 11:38:59.077732 kernel: io scheduler mq-deadline registered
Mar 13 11:38:59.077736 kernel: io scheduler kyber registered
Mar 13 11:38:59.077741 kernel: io scheduler bfq registered
Mar 13 11:38:59.077746 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 13 11:38:59.077752 kernel: thunder_xcv, ver 1.0
Mar 13 11:38:59.077756 kernel: thunder_bgx, ver 1.0
Mar 13 11:38:59.077761 kernel: nicpf, ver 1.0
Mar 13 11:38:59.077766 kernel: nicvf, ver 1.0
Mar 13 11:38:59.077884 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 13 11:38:59.077933 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-13T11:38:58 UTC (1773401938)
Mar 13 11:38:59.077940 kernel: efifb: probing for efifb
Mar 13 11:38:59.077947 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 13 11:38:59.077951 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 13 11:38:59.077956 kernel: efifb: scrolling: redraw
Mar 13 11:38:59.077961 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 13 11:38:59.077966 kernel: Console: switching to colour frame buffer device 128x48
Mar 13 11:38:59.077971 kernel: fb0: EFI VGA frame buffer device
Mar 13 11:38:59.077975 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 13 11:38:59.077980 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 13 11:38:59.077985 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Mar 13 11:38:59.077991 kernel: watchdog: NMI not fully supported
Mar 13 11:38:59.077996 kernel: watchdog: Hard watchdog permanently disabled
Mar 13 11:38:59.078000 kernel: NET: Registered PF_INET6 protocol family
Mar 13 11:38:59.078005 kernel: Segment Routing with IPv6
Mar 13 11:38:59.078010 kernel: In-situ OAM (IOAM) with IPv6
Mar 13 11:38:59.078014 kernel: NET: Registered PF_PACKET protocol family
Mar 13 11:38:59.078019 kernel: Key type dns_resolver registered
Mar 13 11:38:59.078024 kernel: registered taskstats version 1
Mar 13 11:38:59.078028 kernel: Loading compiled-in X.509 certificates
Mar 13 11:38:59.078033 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.76-flatcar: 44352bd7534bb4d91b45bb008ca12587e75556f4'
Mar 13 11:38:59.078039 kernel: Demotion targets for Node 0: null
Mar 13 11:38:59.078044 kernel: Key type .fscrypt registered
Mar 13 11:38:59.078048 kernel: Key type fscrypt-provisioning registered
Mar 13 11:38:59.078053 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 13 11:38:59.078058 kernel: ima: Allocated hash algorithm: sha1
Mar 13 11:38:59.078063 kernel: ima: No architecture policies found
Mar 13 11:38:59.078067 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 13 11:38:59.078072 kernel: clk: Disabling unused clocks
Mar 13 11:38:59.078077 kernel: PM: genpd: Disabling unused power domains
Mar 13 11:38:59.078083 kernel: Warning: unable to open an initial console.
Mar 13 11:38:59.078088 kernel: Freeing unused kernel memory: 39552K
Mar 13 11:38:59.078092 kernel: Run /init as init process
Mar 13 11:38:59.078097 kernel: with arguments:
Mar 13 11:38:59.078102 kernel: /init
Mar 13 11:38:59.078106 kernel: with environment:
Mar 13 11:38:59.078111 kernel: HOME=/
Mar 13 11:38:59.078115 kernel: TERM=linux
Mar 13 11:38:59.078121 systemd[1]: Successfully made /usr/ read-only.
Mar 13 11:38:59.078129 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 11:38:59.078134 systemd[1]: Detected virtualization microsoft.
Mar 13 11:38:59.078139 systemd[1]: Detected architecture arm64.
Mar 13 11:38:59.078144 systemd[1]: Running in initrd.
Mar 13 11:38:59.078149 systemd[1]: No hostname configured, using default hostname.
Mar 13 11:38:59.078155 systemd[1]: Hostname set to .
Mar 13 11:38:59.078160 systemd[1]: Initializing machine ID from random generator.
Mar 13 11:38:59.078166 systemd[1]: Queued start job for default target initrd.target.
Mar 13 11:38:59.078171 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 11:38:59.078176 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 11:38:59.078182 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 13 11:38:59.078187 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 11:38:59.078192 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 13 11:38:59.078198 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 13 11:38:59.078205 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 13 11:38:59.078210 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 13 11:38:59.078215 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 11:38:59.078221 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 11:38:59.078226 systemd[1]: Reached target paths.target - Path Units.
Mar 13 11:38:59.078231 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 11:38:59.078236 systemd[1]: Reached target swap.target - Swaps.
Mar 13 11:38:59.078241 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 11:38:59.078247 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 11:38:59.078253 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 11:38:59.078258 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 13 11:38:59.078263 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 13 11:38:59.078268 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 11:38:59.078274 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 11:38:59.078279 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 11:38:59.078284 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 11:38:59.078289 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 13 11:38:59.078296 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 11:38:59.078301 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 13 11:38:59.078306 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 13 11:38:59.078312 systemd[1]: Starting systemd-fsck-usr.service...
Mar 13 11:38:59.078317 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 11:38:59.078322 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 11:38:59.078339 systemd-journald[225]: Collecting audit messages is disabled.
Mar 13 11:38:59.078354 systemd-journald[225]: Journal started
Mar 13 11:38:59.078368 systemd-journald[225]: Runtime Journal (/run/log/journal/e0429fc3674a483ca850d2e0bafa2d03) is 8M, max 78.3M, 70.3M free.
Mar 13 11:38:59.089790 systemd-modules-load[227]: Inserted module 'overlay'
Mar 13 11:38:59.094863 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 11:38:59.111844 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 13 11:38:59.111897 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 11:38:59.118795 systemd-modules-load[227]: Inserted module 'br_netfilter'
Mar 13 11:38:59.130070 kernel: Bridge firewalling registered
Mar 13 11:38:59.127766 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 13 11:38:59.132874 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 11:38:59.144646 systemd[1]: Finished systemd-fsck-usr.service.
Mar 13 11:38:59.151358 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 11:38:59.158907 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 11:38:59.170517 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 11:38:59.180711 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 11:38:59.198384 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 11:38:59.212870 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 11:38:59.228269 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 11:38:59.231347 systemd-tmpfiles[247]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 13 11:38:59.240260 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 11:38:59.248713 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 11:38:59.260280 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 11:38:59.272271 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 11:38:59.302771 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 11:38:59.315925 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f9e16a8c138b52ec7ccfdd04775db8b28ecb5c24f5f82b153afd0ea86ce994e7
Mar 13 11:38:59.347710 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 11:38:59.366819 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 11:38:59.389202 systemd-resolved[262]: Positive Trust Anchors:
Mar 13 11:38:59.389224 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 11:38:59.389244 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 11:38:59.390928 systemd-resolved[262]: Defaulting to hostname 'linux'.
Mar 13 11:38:59.444729 kernel: SCSI subsystem initialized
Mar 13 11:38:59.392564 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 11:38:59.452521 kernel: Loading iSCSI transport class v2.0-870.
Mar 13 11:38:59.404489 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 11:38:59.461635 kernel: iscsi: registered transport (tcp)
Mar 13 11:38:59.475436 kernel: iscsi: registered transport (qla4xxx)
Mar 13 11:38:59.475496 kernel: QLogic iSCSI HBA Driver
Mar 13 11:38:59.489330 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 11:38:59.509582 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 11:38:59.522284 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 11:38:59.567278 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 13 11:38:59.574765 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 13 11:38:59.634641 kernel: raid6: neonx8 gen() 18541 MB/s
Mar 13 11:38:59.651626 kernel: raid6: neonx4 gen() 18546 MB/s
Mar 13 11:38:59.670628 kernel: raid6: neonx2 gen() 17099 MB/s
Mar 13 11:38:59.690627 kernel: raid6: neonx1 gen() 15016 MB/s
Mar 13 11:38:59.709643 kernel: raid6: int64x8 gen() 10555 MB/s
Mar 13 11:38:59.728629 kernel: raid6: int64x4 gen() 10612 MB/s
Mar 13 11:38:59.748626 kernel: raid6: int64x2 gen() 8991 MB/s
Mar 13 11:38:59.770528 kernel: raid6: int64x1 gen() 7015 MB/s
Mar 13 11:38:59.770537 kernel: raid6: using algorithm neonx4 gen() 18546 MB/s
Mar 13 11:38:59.793511 kernel: raid6: .... xor() 15144 MB/s, rmw enabled
Mar 13 11:38:59.793519 kernel: raid6: using neon recovery algorithm
Mar 13 11:38:59.803007 kernel: xor: measuring software checksum speed
Mar 13 11:38:59.803016 kernel: 8regs : 28619 MB/sec
Mar 13 11:38:59.805838 kernel: 32regs : 28791 MB/sec
Mar 13 11:38:59.808646 kernel: arm64_neon : 37559 MB/sec
Mar 13 11:38:59.812047 kernel: xor: using function: arm64_neon (37559 MB/sec)
Mar 13 11:38:59.851649 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 13 11:38:59.856868 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 11:38:59.867777 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 11:38:59.897228 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Mar 13 11:38:59.902344 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 11:38:59.915683 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 13 11:38:59.939179 dracut-pre-trigger[489]: rd.md=0: removing MD RAID activation
Mar 13 11:38:59.961054 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 11:38:59.967634 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 11:39:00.020199 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 11:39:00.032749 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 13 11:39:00.099642 kernel: hv_vmbus: Vmbus version:5.3 Mar 13 11:39:00.115309 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 11:39:00.115416 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 11:39:00.149220 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 13 11:39:00.149243 kernel: hv_vmbus: registering driver hid_hyperv Mar 13 11:39:00.149251 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 13 11:39:00.149270 kernel: hv_vmbus: registering driver hv_storvsc Mar 13 11:39:00.149276 kernel: hv_vmbus: registering driver hv_netvsc Mar 13 11:39:00.149284 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 13 11:39:00.139377 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 11:39:00.180905 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 13 11:39:00.180929 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 13 11:39:00.180936 kernel: scsi host0: storvsc_host_t Mar 13 11:39:00.181078 kernel: scsi host1: storvsc_host_t Mar 13 11:39:00.168852 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 11:39:00.212215 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 13 11:39:00.212381 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 13 11:39:00.212403 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 13 11:39:00.196175 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Mar 13 11:39:00.230943 kernel: PTP clock support registered Mar 13 11:39:00.230971 kernel: hv_utils: Registering HyperV Utility Driver Mar 13 11:39:00.205829 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 11:39:00.716371 kernel: hv_vmbus: registering driver hv_utils Mar 13 11:39:00.716392 kernel: hv_utils: Shutdown IC version 3.2 Mar 13 11:39:00.716399 kernel: hv_utils: TimeSync IC version 4.0 Mar 13 11:39:00.716405 kernel: hv_utils: Heartbeat IC version 3.0 Mar 13 11:39:00.716422 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 13 11:39:00.716582 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 13 11:39:00.716651 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 13 11:39:00.716712 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 13 11:39:00.716786 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 13 11:39:00.716848 kernel: hv_netvsc 000d3a6d-241f-000d-3a6d-241f000d3a6d eth0: VF slot 1 added Mar 13 11:39:00.720205 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 11:39:00.205932 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 11:39:00.222688 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 11:39:00.663937 systemd-resolved[262]: Clock change detected. Flushing caches. Mar 13 11:39:00.739157 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 13 11:39:00.744908 kernel: hv_vmbus: registering driver hv_pci Mar 13 11:39:00.744963 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 13 11:39:00.753430 kernel: hv_pci 106c4003-ef6b-438a-903e-68bc9f19251b: PCI VMBus probing: Using version 0x10004 Mar 13 11:39:00.753628 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 13 11:39:00.757933 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 13 11:39:00.760823 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 13 11:39:00.789521 kernel: hv_pci 106c4003-ef6b-438a-903e-68bc9f19251b: PCI host bridge to bus ef6b:00 Mar 13 11:39:00.790059 kernel: pci_bus ef6b:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 13 11:39:00.790156 kernel: pci_bus ef6b:00: No busn resource found for root bus, will use [bus 00-ff] Mar 13 11:39:00.790212 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#176 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 13 11:39:00.790276 kernel: pci ef6b:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Mar 13 11:39:00.801334 kernel: pci ef6b:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 13 11:39:00.805918 kernel: pci ef6b:00:02.0: enabling Extended Tags Mar 13 11:39:00.817883 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 13 11:39:00.818094 kernel: pci ef6b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ef6b:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Mar 13 11:39:00.844339 kernel: pci_bus ef6b:00: busn_res: [bus 00-ff] end is updated to 00 Mar 13 11:39:00.844527 kernel: pci ef6b:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Mar 13 11:39:00.908427 kernel: mlx5_core ef6b:00:02.0: enabling device (0000 -> 0002) Mar 13 11:39:00.917912 kernel: mlx5_core ef6b:00:02.0: PTM is not supported by PCIe Mar 13 11:39:00.918122 kernel: mlx5_core ef6b:00:02.0: firmware version: 16.30.5026 Mar 13 11:39:01.091428 kernel: hv_netvsc 000d3a6d-241f-000d-3a6d-241f000d3a6d eth0: VF registering: eth1 Mar 13 11:39:01.091659 kernel: mlx5_core ef6b:00:02.0 eth1: joined to eth0 Mar 13 11:39:01.098915 kernel: mlx5_core ef6b:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 13 11:39:01.107936 kernel: mlx5_core ef6b:00:02.0 enP61291s1: renamed from eth1 Mar 13 11:39:01.788642 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Mar 13 11:39:01.850920 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 13 11:39:01.860976 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 13 11:39:01.879267 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 13 11:39:01.896882 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 13 11:39:01.949840 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 13 11:39:02.096453 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 13 11:39:02.102232 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 11:39:02.112156 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 11:39:02.122774 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 13 11:39:02.133183 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 13 11:39:02.166454 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 13 11:39:02.951928 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 11:39:02.953911 disk-uuid[651]: The operation has completed successfully. Mar 13 11:39:03.031826 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 13 11:39:03.031947 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 13 11:39:03.056081 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 13 11:39:03.078571 sh[828]: Success Mar 13 11:39:03.114977 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 13 11:39:03.115054 kernel: device-mapper: uevent: version 1.0.3 Mar 13 11:39:03.120412 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 13 11:39:03.129892 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Mar 13 11:39:03.437490 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 13 11:39:03.448364 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 13 11:39:03.461539 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 13 11:39:03.485913 kernel: BTRFS: device fsid 94058961-6d55-4a24-a8e2-9e6dd21e4051 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (846) Mar 13 11:39:03.496296 kernel: BTRFS info (device dm-0): first mount of filesystem 94058961-6d55-4a24-a8e2-9e6dd21e4051 Mar 13 11:39:03.496350 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 13 11:39:03.719220 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 13 11:39:03.719308 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 13 11:39:03.760531 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 13 11:39:03.765297 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 13 11:39:03.774816 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 13 11:39:03.775655 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 13 11:39:03.802993 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 13 11:39:03.837960 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (875) Mar 13 11:39:03.849755 kernel: BTRFS info (device sda6): first mount of filesystem 7ab6d3ba-be2c-4427-b549-2d4113cd9928 Mar 13 11:39:03.849823 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 13 11:39:03.877016 kernel: BTRFS info (device sda6): turning on async discard Mar 13 11:39:03.877096 kernel: BTRFS info (device sda6): enabling free space tree Mar 13 11:39:03.887949 kernel: BTRFS info (device sda6): last unmount of filesystem 7ab6d3ba-be2c-4427-b549-2d4113cd9928 Mar 13 11:39:03.888672 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 13 11:39:03.899475 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 13 11:39:03.937415 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 13 11:39:03.949252 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 13 11:39:03.984430 systemd-networkd[1015]: lo: Link UP Mar 13 11:39:03.984441 systemd-networkd[1015]: lo: Gained carrier Mar 13 11:39:03.985206 systemd-networkd[1015]: Enumeration completed Mar 13 11:39:03.985647 systemd-networkd[1015]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 11:39:03.985650 systemd-networkd[1015]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 11:39:03.987550 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 13 11:39:03.992696 systemd[1]: Reached target network.target - Network. 
Mar 13 11:39:04.062887 kernel: mlx5_core ef6b:00:02.0 enP61291s1: Link up Mar 13 11:39:04.095255 kernel: hv_netvsc 000d3a6d-241f-000d-3a6d-241f000d3a6d eth0: Data path switched to VF: enP61291s1 Mar 13 11:39:04.095010 systemd-networkd[1015]: enP61291s1: Link UP Mar 13 11:39:04.095066 systemd-networkd[1015]: eth0: Link UP Mar 13 11:39:04.095147 systemd-networkd[1015]: eth0: Gained carrier Mar 13 11:39:04.095162 systemd-networkd[1015]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 11:39:04.115136 systemd-networkd[1015]: enP61291s1: Gained carrier Mar 13 11:39:04.123945 systemd-networkd[1015]: eth0: DHCPv4 address 10.200.20.31/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 13 11:39:05.000110 ignition[968]: Ignition 2.22.0 Mar 13 11:39:05.000124 ignition[968]: Stage: fetch-offline Mar 13 11:39:05.000235 ignition[968]: no configs at "/usr/lib/ignition/base.d" Mar 13 11:39:05.007374 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 13 11:39:05.000240 ignition[968]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 11:39:05.016591 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 13 11:39:05.003106 ignition[968]: parsed url from cmdline: "" Mar 13 11:39:05.003110 ignition[968]: no config URL provided Mar 13 11:39:05.003116 ignition[968]: reading system config file "/usr/lib/ignition/user.ign" Mar 13 11:39:05.003133 ignition[968]: no config at "/usr/lib/ignition/user.ign" Mar 13 11:39:05.003137 ignition[968]: failed to fetch config: resource requires networking Mar 13 11:39:05.003440 ignition[968]: Ignition finished successfully Mar 13 11:39:05.053695 ignition[1025]: Ignition 2.22.0 Mar 13 11:39:05.053701 ignition[1025]: Stage: fetch Mar 13 11:39:05.053908 ignition[1025]: no configs at "/usr/lib/ignition/base.d" Mar 13 11:39:05.053915 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 11:39:05.053984 ignition[1025]: parsed url from cmdline: "" Mar 13 11:39:05.053987 ignition[1025]: no config URL provided Mar 13 11:39:05.053990 ignition[1025]: reading system config file "/usr/lib/ignition/user.ign" Mar 13 11:39:05.053995 ignition[1025]: no config at "/usr/lib/ignition/user.ign" Mar 13 11:39:05.054011 ignition[1025]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 13 11:39:05.132242 ignition[1025]: GET result: OK Mar 13 11:39:05.132340 ignition[1025]: config has been read from IMDS userdata Mar 13 11:39:05.132361 ignition[1025]: parsing config with SHA512: 543d9a1c1be8a900224665d75a88cfa8fa5944e1fa2cbf9784036d279c2b009f190920adc49a9f54e9b42ed9611c71991b528bb0a356257e6cdfa5e675592506 Mar 13 11:39:05.136235 unknown[1025]: fetched base config from "system" Mar 13 11:39:05.136576 ignition[1025]: fetch: fetch complete Mar 13 11:39:05.136258 unknown[1025]: fetched base config from "system" Mar 13 11:39:05.136580 ignition[1025]: fetch: fetch passed Mar 13 11:39:05.136261 unknown[1025]: fetched user config from "azure" Mar 13 11:39:05.136625 ignition[1025]: Ignition finished successfully Mar 13 11:39:05.140346 systemd[1]: Finished ignition-fetch.service 
- Ignition (fetch). Mar 13 11:39:05.148980 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 13 11:39:05.191446 ignition[1032]: Ignition 2.22.0 Mar 13 11:39:05.191464 ignition[1032]: Stage: kargs Mar 13 11:39:05.197432 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 13 11:39:05.191659 ignition[1032]: no configs at "/usr/lib/ignition/base.d" Mar 13 11:39:05.203021 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 13 11:39:05.191667 ignition[1032]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 11:39:05.192320 ignition[1032]: kargs: kargs passed Mar 13 11:39:05.192366 ignition[1032]: Ignition finished successfully Mar 13 11:39:05.234675 ignition[1038]: Ignition 2.22.0 Mar 13 11:39:05.234691 ignition[1038]: Stage: disks Mar 13 11:39:05.238997 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 13 11:39:05.234868 ignition[1038]: no configs at "/usr/lib/ignition/base.d" Mar 13 11:39:05.245823 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 13 11:39:05.234931 ignition[1038]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 11:39:05.254531 systemd-networkd[1015]: eth0: Gained IPv6LL Mar 13 11:39:05.235537 ignition[1038]: disks: disks passed Mar 13 11:39:05.254543 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 13 11:39:05.235583 ignition[1038]: Ignition finished successfully Mar 13 11:39:05.259856 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 13 11:39:05.267944 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 11:39:05.277532 systemd[1]: Reached target basic.target - Basic System. Mar 13 11:39:05.287087 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 13 11:39:05.389810 systemd-fsck[1046]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Mar 13 11:39:05.397261 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 13 11:39:05.404020 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 13 11:39:05.667892 kernel: EXT4-fs (sda9): mounted filesystem 8545df3e-a741-4e4a-85c2-3ae426ec1726 r/w with ordered data mode. Quota mode: none. Mar 13 11:39:05.668331 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 13 11:39:05.672744 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 13 11:39:05.699396 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 11:39:05.715495 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 13 11:39:05.736888 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1060) Mar 13 11:39:05.737268 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 13 11:39:05.756982 kernel: BTRFS info (device sda6): first mount of filesystem 7ab6d3ba-be2c-4427-b549-2d4113cd9928 Mar 13 11:39:05.757007 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 13 11:39:05.760008 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 13 11:39:05.760054 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 11:39:05.778106 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 13 11:39:05.786804 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 13 11:39:05.806821 kernel: BTRFS info (device sda6): turning on async discard Mar 13 11:39:05.806848 kernel: BTRFS info (device sda6): enabling free space tree Mar 13 11:39:05.803540 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 13 11:39:06.445693 coreos-metadata[1062]: Mar 13 11:39:06.445 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 13 11:39:06.454005 coreos-metadata[1062]: Mar 13 11:39:06.453 INFO Fetch successful Mar 13 11:39:06.454005 coreos-metadata[1062]: Mar 13 11:39:06.454 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 13 11:39:06.467021 coreos-metadata[1062]: Mar 13 11:39:06.466 INFO Fetch successful Mar 13 11:39:06.489081 coreos-metadata[1062]: Mar 13 11:39:06.488 INFO wrote hostname ci-4459.2.101-83511db97f to /sysroot/etc/hostname Mar 13 11:39:06.496008 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 13 11:39:06.822753 initrd-setup-root[1090]: cut: /sysroot/etc/passwd: No such file or directory Mar 13 11:39:06.891899 initrd-setup-root[1097]: cut: /sysroot/etc/group: No such file or directory Mar 13 11:39:06.901436 initrd-setup-root[1104]: cut: /sysroot/etc/shadow: No such file or directory Mar 13 11:39:06.924893 initrd-setup-root[1111]: cut: /sysroot/etc/gshadow: No such file or directory Mar 13 11:39:08.185696 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 13 11:39:08.191546 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 13 11:39:08.213685 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 13 11:39:08.226179 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 13 11:39:08.235379 kernel: BTRFS info (device sda6): last unmount of filesystem 7ab6d3ba-be2c-4427-b549-2d4113cd9928 Mar 13 11:39:08.256363 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 13 11:39:08.267309 ignition[1181]: INFO : Ignition 2.22.0 Mar 13 11:39:08.267309 ignition[1181]: INFO : Stage: mount Mar 13 11:39:08.274346 ignition[1181]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 11:39:08.274346 ignition[1181]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 11:39:08.274346 ignition[1181]: INFO : mount: mount passed Mar 13 11:39:08.274346 ignition[1181]: INFO : Ignition finished successfully Mar 13 11:39:08.272102 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 13 11:39:08.279523 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 13 11:39:08.302995 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 11:39:08.330904 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1194) Mar 13 11:39:08.341238 kernel: BTRFS info (device sda6): first mount of filesystem 7ab6d3ba-be2c-4427-b549-2d4113cd9928 Mar 13 11:39:08.341284 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 13 11:39:08.350474 kernel: BTRFS info (device sda6): turning on async discard Mar 13 11:39:08.350534 kernel: BTRFS info (device sda6): enabling free space tree Mar 13 11:39:08.352038 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 13 11:39:08.380097 ignition[1212]: INFO : Ignition 2.22.0 Mar 13 11:39:08.380097 ignition[1212]: INFO : Stage: files Mar 13 11:39:08.386184 ignition[1212]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 11:39:08.386184 ignition[1212]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 11:39:08.386184 ignition[1212]: DEBUG : files: compiled without relabeling support, skipping Mar 13 11:39:08.401444 ignition[1212]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 13 11:39:08.401444 ignition[1212]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 13 11:39:08.435544 ignition[1212]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 13 11:39:08.441579 ignition[1212]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 13 11:39:08.441579 ignition[1212]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 13 11:39:08.436027 unknown[1212]: wrote ssh authorized keys file for user: core Mar 13 11:39:08.462998 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 13 11:39:08.471058 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 13 11:39:08.498766 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 13 11:39:08.634473 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 13 11:39:08.643442 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 13 11:39:08.643442 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 13 11:39:08.979272 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 13 11:39:09.191513 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 13 11:39:09.191513 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 13 11:39:09.207538 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 13 11:39:09.207538 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 13 11:39:09.207538 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 13 11:39:09.207538 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 11:39:09.207538 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 11:39:09.207538 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 11:39:09.207538 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 11:39:09.207538 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 11:39:09.207538 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 11:39:09.207538 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 13 11:39:09.207538 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 13 11:39:09.207538 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 13 11:39:09.207538 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1 Mar 13 11:39:09.779854 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 13 11:39:09.960818 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 13 11:39:09.960818 ignition[1212]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 13 11:39:09.978645 ignition[1212]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 11:39:09.988528 ignition[1212]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 11:39:09.988528 ignition[1212]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 13 11:39:09.988528 ignition[1212]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Mar 13 11:39:09.988528 ignition[1212]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Mar 13 11:39:09.988528 ignition[1212]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 13 11:39:09.988528 
ignition[1212]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 13 11:39:09.988528 ignition[1212]: INFO : files: files passed Mar 13 11:39:09.988528 ignition[1212]: INFO : Ignition finished successfully Mar 13 11:39:09.988898 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 13 11:39:10.003254 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 13 11:39:10.029853 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 13 11:39:10.046200 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 13 11:39:10.054544 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 13 11:39:10.087638 initrd-setup-root-after-ignition[1241]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 11:39:10.087638 initrd-setup-root-after-ignition[1241]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 13 11:39:10.083946 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 11:39:10.120971 initrd-setup-root-after-ignition[1245]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 11:39:10.093196 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 13 11:39:10.104623 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 13 11:39:10.155853 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 13 11:39:10.156015 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 13 11:39:10.165539 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 13 11:39:10.175008 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Mar 13 11:39:10.183153 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 13 11:39:10.185013 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 13 11:39:10.216643 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 11:39:10.223817 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 13 11:39:10.245591 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 13 11:39:10.250502 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 11:39:10.259916 systemd[1]: Stopped target timers.target - Timer Units. Mar 13 11:39:10.268158 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 13 11:39:10.268277 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 11:39:10.280384 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 13 11:39:10.285061 systemd[1]: Stopped target basic.target - Basic System. Mar 13 11:39:10.293517 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 13 11:39:10.302229 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 11:39:10.310724 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 13 11:39:10.319765 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Mar 13 11:39:10.328954 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 13 11:39:10.337666 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 11:39:10.346900 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 13 11:39:10.355195 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 13 11:39:10.364184 systemd[1]: Stopped target swap.target - Swaps. 
Mar 13 11:39:10.371715 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 13 11:39:10.371832 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 13 11:39:10.383146 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 13 11:39:10.387768 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 11:39:10.396994 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 13 11:39:10.401043 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 11:39:10.406556 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 13 11:39:10.406673 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 13 11:39:10.419888 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 13 11:39:10.420049 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 11:39:10.429013 systemd[1]: ignition-files.service: Deactivated successfully. Mar 13 11:39:10.429151 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 13 11:39:10.438693 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 13 11:39:10.497419 ignition[1265]: INFO : Ignition 2.22.0 Mar 13 11:39:10.497419 ignition[1265]: INFO : Stage: umount Mar 13 11:39:10.497419 ignition[1265]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 11:39:10.497419 ignition[1265]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 11:39:10.497419 ignition[1265]: INFO : umount: umount passed Mar 13 11:39:10.497419 ignition[1265]: INFO : Ignition finished successfully Mar 13 11:39:10.438809 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 13 11:39:10.449988 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Mar 13 11:39:10.457314 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 13 11:39:10.457577 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 11:39:10.476482 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 13 11:39:10.487776 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 13 11:39:10.487950 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 11:39:10.493284 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 13 11:39:10.493373 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 11:39:10.511208 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 13 11:39:10.511314 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 13 11:39:10.521279 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 13 11:39:10.521372 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 13 11:39:10.533528 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 13 11:39:10.533587 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 13 11:39:10.541944 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 13 11:39:10.541980 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 13 11:39:10.552149 systemd[1]: Stopped target network.target - Network.
Mar 13 11:39:10.561924 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 13 11:39:10.561974 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 11:39:10.577993 systemd[1]: Stopped target paths.target - Path Units.
Mar 13 11:39:10.587010 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 13 11:39:10.590888 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 11:39:10.596573 systemd[1]: Stopped target slices.target - Slice Units.
Mar 13 11:39:10.606166 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 13 11:39:10.613892 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 13 11:39:10.613938 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 11:39:10.623708 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 13 11:39:10.623734 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 11:39:10.632520 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 13 11:39:10.632574 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 13 11:39:10.640523 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 13 11:39:10.640552 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 13 11:39:10.649126 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 13 11:39:10.657038 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 13 11:39:10.667303 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 13 11:39:10.667811 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 13 11:39:10.667919 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 13 11:39:10.680243 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 13 11:39:10.680339 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 13 11:39:10.692027 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 13 11:39:10.878406 kernel: hv_netvsc 000d3a6d-241f-000d-3a6d-241f000d3a6d eth0: Data path switched from VF: enP61291s1
Mar 13 11:39:10.692265 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 13 11:39:10.692361 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 13 11:39:10.707818 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 13 11:39:10.708500 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 13 11:39:10.717630 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 13 11:39:10.717672 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 11:39:10.738043 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 13 11:39:10.750120 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 13 11:39:10.750194 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 11:39:10.759143 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 13 11:39:10.759190 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 13 11:39:10.771384 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 13 11:39:10.771434 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 13 11:39:10.777499 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 13 11:39:10.777553 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 11:39:10.791014 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 11:39:10.799207 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 13 11:39:10.799271 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 13 11:39:10.818643 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 13 11:39:10.818851 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 11:39:10.828548 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 13 11:39:10.828595 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 13 11:39:10.837094 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 13 11:39:10.837122 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 11:39:10.841694 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 13 11:39:10.841743 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 11:39:10.855267 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 13 11:39:10.855312 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 13 11:39:10.876940 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 13 11:39:10.877009 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 11:39:10.893114 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 13 11:39:10.899540 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 13 11:39:10.899615 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 11:39:10.910324 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 13 11:39:10.910374 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 11:39:10.925075 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 11:39:10.925146 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 11:39:10.940220 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 13 11:39:10.940274 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 13 11:39:10.940303 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 11:39:10.940619 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 13 11:39:10.941898 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 13 11:39:10.948028 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 13 11:39:10.948116 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 13 11:39:10.956182 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 13 11:39:10.956257 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 13 11:39:10.967569 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 13 11:39:10.975520 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 13 11:39:10.975618 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 13 11:39:10.985621 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 13 11:39:11.124011 systemd[1]: Switching root.
Mar 13 11:39:11.230283 systemd-journald[225]: Journal stopped
Mar 13 11:39:16.339334 systemd-journald[225]: Received SIGTERM from PID 1 (systemd).
Mar 13 11:39:16.339355 kernel: SELinux: policy capability network_peer_controls=1
Mar 13 11:39:16.339364 kernel: SELinux: policy capability open_perms=1
Mar 13 11:39:16.339369 kernel: SELinux: policy capability extended_socket_class=1
Mar 13 11:39:16.339375 kernel: SELinux: policy capability always_check_network=0
Mar 13 11:39:16.339381 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 13 11:39:16.339387 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 13 11:39:16.339393 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 13 11:39:16.339398 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 13 11:39:16.339403 kernel: SELinux: policy capability userspace_initial_context=0
Mar 13 11:39:16.339409 kernel: audit: type=1403 audit(1773401952.274:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 13 11:39:16.339415 systemd[1]: Successfully loaded SELinux policy in 175.619ms.
Mar 13 11:39:16.339422 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.510ms.
Mar 13 11:39:16.339429 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 11:39:16.339435 systemd[1]: Detected virtualization microsoft.
Mar 13 11:39:16.339442 systemd[1]: Detected architecture arm64.
Mar 13 11:39:16.339448 systemd[1]: Detected first boot.
Mar 13 11:39:16.339454 systemd[1]: Hostname set to .
Mar 13 11:39:16.339460 systemd[1]: Initializing machine ID from random generator.
Mar 13 11:39:16.339466 zram_generator::config[1309]: No configuration found.
Mar 13 11:39:16.339472 kernel: NET: Registered PF_VSOCK protocol family
Mar 13 11:39:16.339479 systemd[1]: Populated /etc with preset unit settings.
Mar 13 11:39:16.339485 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 13 11:39:16.339492 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 13 11:39:16.339498 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 13 11:39:16.339504 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 13 11:39:16.339510 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 13 11:39:16.339516 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 13 11:39:16.339522 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 13 11:39:16.339528 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 13 11:39:16.339535 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 13 11:39:16.339542 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 13 11:39:16.339548 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 13 11:39:16.339554 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 13 11:39:16.339560 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 11:39:16.339566 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 11:39:16.339573 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 13 11:39:16.339579 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 13 11:39:16.339586 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 13 11:39:16.339592 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 11:39:16.339600 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 13 11:39:16.339606 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 11:39:16.339613 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 11:39:16.339619 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 13 11:39:16.339625 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 13 11:39:16.339632 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 13 11:39:16.339639 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 13 11:39:16.339645 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 11:39:16.339651 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 11:39:16.339657 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 11:39:16.339663 systemd[1]: Reached target swap.target - Swaps.
Mar 13 11:39:16.339669 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 13 11:39:16.339675 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 13 11:39:16.339682 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 13 11:39:16.339688 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 11:39:16.339695 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 11:39:16.339701 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 11:39:16.339707 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 13 11:39:16.339713 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 13 11:39:16.339720 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 13 11:39:16.339726 systemd[1]: Mounting media.mount - External Media Directory...
Mar 13 11:39:16.339733 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 13 11:39:16.339739 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 13 11:39:16.339746 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 13 11:39:16.339752 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 13 11:39:16.339759 systemd[1]: Reached target machines.target - Containers.
Mar 13 11:39:16.339765 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 13 11:39:16.339772 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 11:39:16.339778 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 11:39:16.339785 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 13 11:39:16.339791 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 11:39:16.339797 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 11:39:16.339803 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 11:39:16.339809 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 13 11:39:16.339816 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 11:39:16.339823 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 13 11:39:16.339829 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 13 11:39:16.339835 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 13 11:39:16.339842 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 13 11:39:16.339848 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 13 11:39:16.339854 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 11:39:16.339861 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 11:39:16.339866 kernel: loop: module loaded
Mar 13 11:39:16.340991 kernel: ACPI: bus type drm_connector registered
Mar 13 11:39:16.341008 kernel: fuse: init (API version 7.41)
Mar 13 11:39:16.341016 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 11:39:16.341024 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 11:39:16.341031 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 13 11:39:16.341037 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 13 11:39:16.341074 systemd-journald[1413]: Collecting audit messages is disabled.
Mar 13 11:39:16.341093 systemd-journald[1413]: Journal started
Mar 13 11:39:16.341109 systemd-journald[1413]: Runtime Journal (/run/log/journal/b6673a960a9d4d08bd3494deb7ff6dc9) is 8M, max 78.3M, 70.3M free.
Mar 13 11:39:15.478765 systemd[1]: Queued start job for default target multi-user.target.
Mar 13 11:39:15.496577 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 13 11:39:15.497070 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 13 11:39:15.497368 systemd[1]: systemd-journald.service: Consumed 2.549s CPU time.
Mar 13 11:39:16.360111 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 11:39:16.369337 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 13 11:39:16.369410 systemd[1]: Stopped verity-setup.service.
Mar 13 11:39:16.385773 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 11:39:16.386487 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 13 11:39:16.391084 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 13 11:39:16.396004 systemd[1]: Mounted media.mount - External Media Directory.
Mar 13 11:39:16.400045 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 13 11:39:16.405031 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 13 11:39:16.410178 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 13 11:39:16.415908 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 13 11:39:16.421481 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 11:39:16.427918 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 13 11:39:16.428070 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 13 11:39:16.434083 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 11:39:16.434213 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 11:39:16.440375 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 11:39:16.440517 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 11:39:16.445709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 11:39:16.445840 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 11:39:16.451359 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 13 11:39:16.451494 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 13 11:39:16.456448 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 11:39:16.456581 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 11:39:16.462198 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 11:39:16.467813 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 11:39:16.474354 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 13 11:39:16.480144 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 13 11:39:16.485945 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 11:39:16.501573 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 11:39:16.508545 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 13 11:39:16.520548 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 13 11:39:16.527679 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 13 11:39:16.527720 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 11:39:16.533204 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 13 11:39:16.541096 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 13 11:39:16.547134 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 11:39:16.550029 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 13 11:39:16.562156 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 13 11:39:16.568112 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 11:39:16.569139 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 13 11:39:16.574239 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 11:39:16.577059 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 11:39:16.588663 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 13 11:39:16.596564 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 13 11:39:16.603094 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 13 11:39:16.610537 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 13 11:39:16.619145 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 13 11:39:16.623044 systemd-journald[1413]: Time spent on flushing to /var/log/journal/b6673a960a9d4d08bd3494deb7ff6dc9 is 10.693ms for 934 entries.
Mar 13 11:39:16.623044 systemd-journald[1413]: System Journal (/var/log/journal/b6673a960a9d4d08bd3494deb7ff6dc9) is 8M, max 2.6G, 2.6G free.
Mar 13 11:39:16.682244 systemd-journald[1413]: Received client request to flush runtime journal.
Mar 13 11:39:16.682308 kernel: loop0: detected capacity change from 0 to 27936
Mar 13 11:39:16.632404 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 13 11:39:16.640060 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 13 11:39:16.684437 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 13 11:39:16.715349 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 11:39:16.722617 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 13 11:39:16.731195 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 11:39:16.752681 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 13 11:39:16.753784 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 13 11:39:16.878449 systemd-tmpfiles[1464]: ACLs are not supported, ignoring.
Mar 13 11:39:16.878462 systemd-tmpfiles[1464]: ACLs are not supported, ignoring.
Mar 13 11:39:16.882039 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 11:39:17.131951 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 13 11:39:17.229900 kernel: loop1: detected capacity change from 0 to 119840
Mar 13 11:39:17.315919 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 13 11:39:17.322981 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 11:39:17.351294 systemd-udevd[1473]: Using default interface naming scheme 'v255'.
Mar 13 11:39:17.557717 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 11:39:17.568760 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 11:39:17.625773 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 13 11:39:17.631690 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 13 11:39:17.664922 kernel: loop2: detected capacity change from 0 to 100632
Mar 13 11:39:17.718941 kernel: mousedev: PS/2 mouse device common for all mice
Mar 13 11:39:17.750803 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 13 11:39:17.767936 kernel: hv_vmbus: registering driver hv_balloon
Mar 13 11:39:17.768057 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#22 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 13 11:39:17.774823 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 13 11:39:17.778124 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 13 11:39:17.781965 kernel: hv_vmbus: registering driver hyperv_fb
Mar 13 11:39:17.787492 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 13 11:39:17.795681 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 13 11:39:17.803534 kernel: Console: switching to colour dummy device 80x25
Mar 13 11:39:17.812625 kernel: Console: switching to colour frame buffer device 128x48
Mar 13 11:39:17.893078 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 11:39:17.905775 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 11:39:17.905975 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 11:39:17.915607 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 11:39:17.926602 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 11:39:17.926775 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 11:39:17.939061 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 11:39:17.955520 systemd-networkd[1489]: lo: Link UP
Mar 13 11:39:17.955528 systemd-networkd[1489]: lo: Gained carrier
Mar 13 11:39:17.956553 systemd-networkd[1489]: Enumeration completed
Mar 13 11:39:17.956674 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 11:39:17.957249 systemd-networkd[1489]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 11:39:17.957319 systemd-networkd[1489]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 11:39:17.967637 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 13 11:39:17.979498 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 13 11:39:18.023926 kernel: mlx5_core ef6b:00:02.0 enP61291s1: Link up
Mar 13 11:39:18.038969 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 13 11:39:18.050462 kernel: hv_netvsc 000d3a6d-241f-000d-3a6d-241f000d3a6d eth0: Data path switched to VF: enP61291s1
Mar 13 11:39:18.051279 systemd-networkd[1489]: enP61291s1: Link UP
Mar 13 11:39:18.051425 systemd-networkd[1489]: eth0: Link UP
Mar 13 11:39:18.051428 systemd-networkd[1489]: eth0: Gained carrier
Mar 13 11:39:18.051451 systemd-networkd[1489]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 11:39:18.052106 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 13 11:39:18.059811 systemd-networkd[1489]: enP61291s1: Gained carrier
Mar 13 11:39:18.064934 systemd-networkd[1489]: eth0: DHCPv4 address 10.200.20.31/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 13 11:39:18.065455 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 13 11:39:18.079895 kernel: MACsec IEEE 802.1AE
Mar 13 11:39:18.121710 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 13 11:39:18.131957 kernel: loop3: detected capacity change from 0 to 200864
Mar 13 11:39:18.181897 kernel: loop4: detected capacity change from 0 to 27936
Mar 13 11:39:18.198894 kernel: loop5: detected capacity change from 0 to 119840
Mar 13 11:39:18.215887 kernel: loop6: detected capacity change from 0 to 100632
Mar 13 11:39:18.228887 kernel: loop7: detected capacity change from 0 to 200864
Mar 13 11:39:18.245959 (sd-merge)[1618]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 13 11:39:18.246392 (sd-merge)[1618]: Merged extensions into '/usr'.
Mar 13 11:39:18.250068 systemd[1]: Reload requested from client PID 1448 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 13 11:39:18.250087 systemd[1]: Reloading...
Mar 13 11:39:18.310937 zram_generator::config[1648]: No configuration found.
Mar 13 11:39:18.490313 systemd[1]: Reloading finished in 239 ms.
Mar 13 11:39:18.509113 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 11:39:18.514418 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 13 11:39:18.526151 systemd[1]: Starting ensure-sysext.service...
Mar 13 11:39:18.533056 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 11:39:18.550213 systemd[1]: Reload requested from client PID 1706 ('systemctl') (unit ensure-sysext.service)...
Mar 13 11:39:18.550363 systemd[1]: Reloading...
Mar 13 11:39:18.554046 systemd-tmpfiles[1707]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 13 11:39:18.554069 systemd-tmpfiles[1707]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 13 11:39:18.554229 systemd-tmpfiles[1707]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 13 11:39:18.554362 systemd-tmpfiles[1707]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 13 11:39:18.554777 systemd-tmpfiles[1707]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 13 11:39:18.554936 systemd-tmpfiles[1707]: ACLs are not supported, ignoring.
Mar 13 11:39:18.554964 systemd-tmpfiles[1707]: ACLs are not supported, ignoring.
Mar 13 11:39:18.590652 systemd-tmpfiles[1707]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 11:39:18.590666 systemd-tmpfiles[1707]: Skipping /boot
Mar 13 11:39:18.597470 systemd-tmpfiles[1707]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 11:39:18.597486 systemd-tmpfiles[1707]: Skipping /boot
Mar 13 11:39:18.632899 zram_generator::config[1750]: No configuration found.
Mar 13 11:39:18.773606 systemd[1]: Reloading finished in 222 ms.
Mar 13 11:39:18.784528 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 11:39:18.802995 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 11:39:18.825020 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 13 11:39:18.841071 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 13 11:39:18.855084 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 11:39:18.861811 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 13 11:39:18.869863 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 11:39:18.878138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 11:39:18.887921 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 11:39:18.902261 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 11:39:18.910711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 11:39:18.910842 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 11:39:18.914413 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 11:39:18.915976 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 11:39:18.922936 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 11:39:18.925182 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 11:39:18.931374 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 11:39:18.931551 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 11:39:18.945516 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 13 11:39:18.953056 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 11:39:18.954306 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 11:39:18.965205 systemd-resolved[1804]: Positive Trust Anchors:
Mar 13 11:39:18.965222 systemd-resolved[1804]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 11:39:18.965242 systemd-resolved[1804]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 11:39:18.966991 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 11:39:18.975074 systemd-resolved[1804]: Using system hostname 'ci-4459.2.101-83511db97f'.
Mar 13 11:39:18.977045 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 11:39:18.983911 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 11:39:18.984056 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 11:39:18.985302 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 11:39:18.992709 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 11:39:18.993911 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 11:39:19.000590 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 13 11:39:19.007797 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 11:39:19.008113 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 11:39:19.014725 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 11:39:19.016916 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 11:39:19.027167 systemd[1]: Reached target network.target - Network.
Mar 13 11:39:19.031119 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 11:39:19.036445 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 11:39:19.037801 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 11:39:19.046101 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 11:39:19.053818 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 11:39:19.061269 augenrules[1837]: No rules
Mar 13 11:39:19.066142 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 11:39:19.072945 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 11:39:19.073073 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 11:39:19.073183 systemd[1]: Reached target time-set.target - System Time Set.
Mar 13 11:39:19.078858 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 11:39:19.081058 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 11:39:19.086710 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 11:39:19.086897 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 11:39:19.092252 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 11:39:19.092415 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 11:39:19.097712 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 11:39:19.097866 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 11:39:19.103701 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 11:39:19.103865 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 11:39:19.113950 systemd[1]: Finished ensure-sysext.service.
Mar 13 11:39:19.120051 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 11:39:19.120129 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 11:39:19.647235 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 13 11:39:19.653279 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 13 11:39:19.973011 systemd-networkd[1489]: eth0: Gained IPv6LL
Mar 13 11:39:19.979377 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 13 11:39:19.986384 systemd[1]: Reached target network-online.target - Network is Online.
Mar 13 11:39:22.129627 ldconfig[1443]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 13 11:39:22.142114 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 13 11:39:22.148746 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 13 11:39:22.161921 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 13 11:39:22.167019 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 11:39:22.171700 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 13 11:39:22.176910 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 13 11:39:22.182700 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 13 11:39:22.187208 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 13 11:39:22.192480 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 13 11:39:22.198250 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 13 11:39:22.198285 systemd[1]: Reached target paths.target - Path Units.
Mar 13 11:39:22.202471 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 11:39:22.223344 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 13 11:39:22.229564 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 13 11:39:22.235446 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 13 11:39:22.241133 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 13 11:39:22.246694 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 13 11:39:22.252963 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 13 11:39:22.257457 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 13 11:39:22.263205 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 13 11:39:22.267826 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 11:39:22.271671 systemd[1]: Reached target basic.target - Basic System.
Mar 13 11:39:22.275570 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 13 11:39:22.275595 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 13 11:39:22.293782 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 13 11:39:22.303597 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 13 11:39:22.313116 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 13 11:39:22.321772 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 13 11:39:22.329298 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 13 11:39:22.339292 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 13 11:39:22.345484 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 13 11:39:22.349831 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 13 11:39:22.352025 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 13 11:39:22.358289 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 13 11:39:22.359606 chronyd[1856]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Mar 13 11:39:22.362427 KVP[1866]: KVP starting; pid is:1866
Mar 13 11:39:22.363758 jq[1864]: false
Mar 13 11:39:22.364191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 11:39:22.375696 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 13 11:39:22.383027 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 13 11:39:22.389906 kernel: hv_utils: KVP IC version 4.0
Mar 13 11:39:22.389183 KVP[1866]: KVP LIC Version: 3.1
Mar 13 11:39:22.392230 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 13 11:39:22.397123 extend-filesystems[1865]: Found /dev/sda6
Mar 13 11:39:22.402902 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 13 11:39:22.410142 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 13 11:39:22.418313 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 13 11:39:22.425404 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 13 11:39:22.427087 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 13 11:39:22.427944 systemd[1]: Starting update-engine.service - Update Engine...
Mar 13 11:39:22.435527 extend-filesystems[1865]: Found /dev/sda9
Mar 13 11:39:22.440516 extend-filesystems[1865]: Checking size of /dev/sda9
Mar 13 11:39:22.440657 chronyd[1856]: Timezone right/UTC failed leap second check, ignoring
Mar 13 11:39:22.444967 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 13 11:39:22.440852 chronyd[1856]: Loaded seccomp filter (level 2)
Mar 13 11:39:22.455897 jq[1890]: true
Mar 13 11:39:22.454180 systemd[1]: Started chronyd.service - NTP client/server.
Mar 13 11:39:22.462049 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 13 11:39:22.468976 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 13 11:39:22.473908 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 13 11:39:22.476380 systemd[1]: motdgen.service: Deactivated successfully.
Mar 13 11:39:22.476581 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 13 11:39:22.491913 extend-filesystems[1865]: Old size kept for /dev/sda9
Mar 13 11:39:22.488034 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 13 11:39:22.518671 update_engine[1884]: I20260313 11:39:22.512202 1884 main.cc:92] Flatcar Update Engine starting
Mar 13 11:39:22.488237 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 13 11:39:22.502647 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 13 11:39:22.504856 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 13 11:39:22.520333 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 13 11:39:22.528981 jq[1902]: true
Mar 13 11:39:22.538953 (ntainerd)[1906]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 13 11:39:22.549633 systemd-logind[1879]: New seat seat0.
Mar 13 11:39:22.553063 systemd-logind[1879]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 13 11:39:22.553263 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 13 11:39:22.606563 tar[1897]: linux-arm64/LICENSE
Mar 13 11:39:22.606563 tar[1897]: linux-arm64/helm
Mar 13 11:39:22.679969 dbus-daemon[1859]: [system] SELinux support is enabled
Mar 13 11:39:22.680179 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 13 11:39:22.685513 update_engine[1884]: I20260313 11:39:22.685245 1884 update_check_scheduler.cc:74] Next update check in 7m51s
Mar 13 11:39:22.690698 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 13 11:39:22.691218 dbus-daemon[1859]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 13 11:39:22.690726 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 13 11:39:22.700307 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 13 11:39:22.700336 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 13 11:39:22.709777 systemd[1]: Started update-engine.service - Update Engine.
Mar 13 11:39:22.721553 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 13 11:39:22.737342 bash[1953]: Updated "/home/core/.ssh/authorized_keys"
Mar 13 11:39:22.739942 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 13 11:39:22.753607 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 13 11:39:22.780918 coreos-metadata[1858]: Mar 13 11:39:22.780 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 13 11:39:22.783917 coreos-metadata[1858]: Mar 13 11:39:22.783 INFO Fetch successful
Mar 13 11:39:22.783917 coreos-metadata[1858]: Mar 13 11:39:22.783 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 13 11:39:22.788943 coreos-metadata[1858]: Mar 13 11:39:22.788 INFO Fetch successful
Mar 13 11:39:22.788943 coreos-metadata[1858]: Mar 13 11:39:22.788 INFO Fetching http://168.63.129.16/machine/d7b6ad66-87cd-44bb-a2d2-a75b021f3b3a/23390020%2D93ec%2D40f1%2D85a4%2D1cd2ca6ffd73.%5Fci%2D4459.2.101%2D83511db97f?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 13 11:39:22.790589 coreos-metadata[1858]: Mar 13 11:39:22.790 INFO Fetch successful
Mar 13 11:39:22.790589 coreos-metadata[1858]: Mar 13 11:39:22.790 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 13 11:39:22.800769 coreos-metadata[1858]: Mar 13 11:39:22.800 INFO Fetch successful
Mar 13 11:39:22.875888 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 13 11:39:22.883612 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 13 11:39:23.010915 tar[1897]: linux-arm64/README.md
Mar 13 11:39:23.028307 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 13 11:39:23.054460 locksmithd[1975]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 13 11:39:23.198145 sshd_keygen[1891]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 13 11:39:23.219393 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 13 11:39:23.227092 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 13 11:39:23.240215 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Mar 13 11:39:23.248650 systemd[1]: issuegen.service: Deactivated successfully. Mar 13 11:39:23.250067 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 13 11:39:23.263132 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 13 11:39:23.282696 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 13 11:39:23.288696 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 13 11:39:23.298265 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 13 11:39:23.307180 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 13 11:39:23.312796 systemd[1]: Reached target getty.target - Login Prompts. Mar 13 11:39:23.363767 containerd[1906]: time="2026-03-13T11:39:23Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 13 11:39:23.365898 containerd[1906]: time="2026-03-13T11:39:23.364940688Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 13 11:39:23.371233 containerd[1906]: time="2026-03-13T11:39:23.371182008Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.792µs" Mar 13 11:39:23.371423 containerd[1906]: time="2026-03-13T11:39:23.371402768Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 13 11:39:23.371475 containerd[1906]: time="2026-03-13T11:39:23.371464376Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 13 11:39:23.371681 containerd[1906]: time="2026-03-13T11:39:23.371663384Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 13 11:39:23.371747 containerd[1906]: time="2026-03-13T11:39:23.371737464Z" level=info 
msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 13 11:39:23.371799 containerd[1906]: time="2026-03-13T11:39:23.371790504Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 11:39:23.371934 containerd[1906]: time="2026-03-13T11:39:23.371917392Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 11:39:23.371986 containerd[1906]: time="2026-03-13T11:39:23.371975456Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 11:39:23.372301 containerd[1906]: time="2026-03-13T11:39:23.372274416Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 11:39:23.372358 containerd[1906]: time="2026-03-13T11:39:23.372348432Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 11:39:23.372402 containerd[1906]: time="2026-03-13T11:39:23.372392392Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 11:39:23.372443 containerd[1906]: time="2026-03-13T11:39:23.372433568Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 13 11:39:23.372578 containerd[1906]: time="2026-03-13T11:39:23.372562384Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 13 11:39:23.372842 containerd[1906]: time="2026-03-13T11:39:23.372821888Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Mar 13 11:39:23.372970 containerd[1906]: time="2026-03-13T11:39:23.372955088Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 11:39:23.373039 containerd[1906]: time="2026-03-13T11:39:23.373025712Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 13 11:39:23.373113 containerd[1906]: time="2026-03-13T11:39:23.373102696Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 13 11:39:23.373407 containerd[1906]: time="2026-03-13T11:39:23.373363104Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 13 11:39:23.373504 containerd[1906]: time="2026-03-13T11:39:23.373486008Z" level=info msg="metadata content store policy set" policy=shared Mar 13 11:39:23.395605 containerd[1906]: time="2026-03-13T11:39:23.392047416Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 13 11:39:23.395605 containerd[1906]: time="2026-03-13T11:39:23.392135272Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 13 11:39:23.395605 containerd[1906]: time="2026-03-13T11:39:23.392146288Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 13 11:39:23.395605 containerd[1906]: time="2026-03-13T11:39:23.392154888Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 13 11:39:23.395605 containerd[1906]: time="2026-03-13T11:39:23.392163384Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 13 11:39:23.395605 containerd[1906]: time="2026-03-13T11:39:23.392170288Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 13 11:39:23.395605 containerd[1906]: time="2026-03-13T11:39:23.392184544Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 13 11:39:23.395605 containerd[1906]: time="2026-03-13T11:39:23.392192592Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 13 11:39:23.395605 containerd[1906]: time="2026-03-13T11:39:23.392200456Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 13 11:39:23.395605 containerd[1906]: time="2026-03-13T11:39:23.392206976Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 13 11:39:23.395605 containerd[1906]: time="2026-03-13T11:39:23.392212376Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 13 11:39:23.395605 containerd[1906]: time="2026-03-13T11:39:23.392221864Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 13 11:39:23.395605 containerd[1906]: time="2026-03-13T11:39:23.392377168Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 13 11:39:23.395605 containerd[1906]: time="2026-03-13T11:39:23.392392544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 13 11:39:23.395923 containerd[1906]: time="2026-03-13T11:39:23.392404032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 13 11:39:23.395923 containerd[1906]: time="2026-03-13T11:39:23.392412168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 13 11:39:23.395923 containerd[1906]: time="2026-03-13T11:39:23.392419016Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 13 11:39:23.395923 containerd[1906]: time="2026-03-13T11:39:23.392427368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 13 11:39:23.395923 containerd[1906]: time="2026-03-13T11:39:23.392434800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 13 11:39:23.395923 containerd[1906]: time="2026-03-13T11:39:23.392440864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 13 11:39:23.395923 containerd[1906]: time="2026-03-13T11:39:23.392454904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 13 11:39:23.395923 containerd[1906]: time="2026-03-13T11:39:23.392461808Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 13 11:39:23.395923 containerd[1906]: time="2026-03-13T11:39:23.392468096Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 13 11:39:23.395923 containerd[1906]: time="2026-03-13T11:39:23.392515960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 13 11:39:23.395923 containerd[1906]: time="2026-03-13T11:39:23.392527568Z" level=info msg="Start snapshots syncer" Mar 13 11:39:23.395923 containerd[1906]: time="2026-03-13T11:39:23.392545160Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 13 11:39:23.396099 containerd[1906]: time="2026-03-13T11:39:23.392741392Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 13 11:39:23.396099 containerd[1906]: time="2026-03-13T11:39:23.392786640Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 13 11:39:23.396173 containerd[1906]: time="2026-03-13T11:39:23.392823648Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 13 11:39:23.396173 containerd[1906]: time="2026-03-13T11:39:23.392942280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 13 11:39:23.396173 containerd[1906]: time="2026-03-13T11:39:23.392959096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 13 11:39:23.396173 containerd[1906]: time="2026-03-13T11:39:23.392965968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 13 11:39:23.396173 containerd[1906]: time="2026-03-13T11:39:23.392972488Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 13 11:39:23.396173 containerd[1906]: time="2026-03-13T11:39:23.392982472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 13 11:39:23.396173 containerd[1906]: time="2026-03-13T11:39:23.392989512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 13 11:39:23.396173 containerd[1906]: time="2026-03-13T11:39:23.392996808Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 13 11:39:23.396173 containerd[1906]: time="2026-03-13T11:39:23.393017824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 13 11:39:23.396173 containerd[1906]: time="2026-03-13T11:39:23.393025568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 13 11:39:23.396173 containerd[1906]: time="2026-03-13T11:39:23.393032872Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 13 11:39:23.396173 containerd[1906]: time="2026-03-13T11:39:23.393055472Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 11:39:23.396173 containerd[1906]: time="2026-03-13T11:39:23.393067272Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 11:39:23.396173 containerd[1906]: time="2026-03-13T11:39:23.393073032Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 11:39:23.396334 containerd[1906]: time="2026-03-13T11:39:23.393078872Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 11:39:23.396334 containerd[1906]: time="2026-03-13T11:39:23.393083192Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 13 11:39:23.396334 containerd[1906]: time="2026-03-13T11:39:23.393088440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 13 11:39:23.396334 containerd[1906]: time="2026-03-13T11:39:23.393094560Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 13 11:39:23.396334 containerd[1906]: time="2026-03-13T11:39:23.393108728Z" level=info msg="runtime interface created" Mar 13 11:39:23.396334 containerd[1906]: time="2026-03-13T11:39:23.393113256Z" level=info msg="created NRI interface" Mar 13 11:39:23.396334 containerd[1906]: time="2026-03-13T11:39:23.393123152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 13 11:39:23.396334 containerd[1906]: time="2026-03-13T11:39:23.393131920Z" level=info msg="Connect containerd service" Mar 13 11:39:23.396334 containerd[1906]: time="2026-03-13T11:39:23.393146176Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 13 11:39:23.396334 
containerd[1906]: time="2026-03-13T11:39:23.393759520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 11:39:23.405349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 11:39:23.420457 (kubelet)[2051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 11:39:23.693149 containerd[1906]: time="2026-03-13T11:39:23.693101328Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 13 11:39:23.693149 containerd[1906]: time="2026-03-13T11:39:23.693162608Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 13 11:39:23.693524 containerd[1906]: time="2026-03-13T11:39:23.693129824Z" level=info msg="Start subscribing containerd event" Mar 13 11:39:23.693524 containerd[1906]: time="2026-03-13T11:39:23.693444120Z" level=info msg="Start recovering state" Mar 13 11:39:23.695935 containerd[1906]: time="2026-03-13T11:39:23.694700072Z" level=info msg="Start event monitor" Mar 13 11:39:23.695935 containerd[1906]: time="2026-03-13T11:39:23.694741784Z" level=info msg="Start cni network conf syncer for default" Mar 13 11:39:23.695935 containerd[1906]: time="2026-03-13T11:39:23.694750448Z" level=info msg="Start streaming server" Mar 13 11:39:23.695935 containerd[1906]: time="2026-03-13T11:39:23.694757200Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 13 11:39:23.695935 containerd[1906]: time="2026-03-13T11:39:23.694765528Z" level=info msg="runtime interface starting up..." Mar 13 11:39:23.695935 containerd[1906]: time="2026-03-13T11:39:23.694770320Z" level=info msg="starting plugins..." 
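The `failed to load cni during init` error above is expected on a first boot: the CRI plugin looks for a network configuration in `/etc/cni/net.d`, which is normally written later by whatever CNI plugin the cluster deploys. For illustration only, a minimal conflist of the general shape containerd looks for in that directory might be (the name, bridge device, and subnet here are hypothetical, not taken from this host):

```json
{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.244.0.0/24" }]]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Once such a file exists, the CNI conf syncer started later in this log ("Start cni network conf syncer for default") picks it up without a containerd restart.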
Mar 13 11:39:23.695935 containerd[1906]: time="2026-03-13T11:39:23.694787984Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 13 11:39:23.695051 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 11:39:23.700264 containerd[1906]: time="2026-03-13T11:39:23.700225848Z" level=info msg="containerd successfully booted in 0.336808s" Mar 13 11:39:23.701700 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 13 11:39:23.708936 systemd[1]: Startup finished in 1.644s (kernel) + 13.043s (initrd) + 11.607s (userspace) = 26.295s. Mar 13 11:39:23.727057 kubelet[2051]: E0313 11:39:23.727003 2051 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 11:39:23.729299 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 11:39:23.729415 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 11:39:23.729711 systemd[1]: kubelet.service: Consumed 518ms CPU time, 247.6M memory peak. Mar 13 11:39:24.080186 login[2039]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 13 11:39:24.082148 login[2040]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 13 11:39:24.088490 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 11:39:24.089444 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 11:39:24.095758 systemd-logind[1879]: New session 2 of user core. Mar 13 11:39:24.129749 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 11:39:24.132068 systemd[1]: Starting user@500.service - User Manager for UID 500... 
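A small aside on the `Startup finished` line above: the three rounded components do not quite sum to the printed total (1.644 + 13.043 + 11.607 = 26.294, while the log reports 26.295s), presumably because systemd keeps microsecond-precision timestamps internally and rounds each phase independently for display. A quick sketch of the arithmetic:

```python
# Rounded per-phase startup times from the log, in milliseconds.
kernel_ms, initrd_ms, userspace_ms = 1644, 13043, 11607

total_ms = kernel_ms + initrd_ms + userspace_ms
# 26294 ms: one millisecond short of the reported 26.295s total,
# consistent with each phase being rounded independently from
# higher-precision internal timestamps.
print(total_ms)
```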
Mar 13 11:39:24.156770 (systemd)[2074]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 11:39:24.159153 systemd-logind[1879]: New session c1 of user core. Mar 13 11:39:24.286662 systemd[2074]: Queued start job for default target default.target. Mar 13 11:39:24.293765 systemd[2074]: Created slice app.slice - User Application Slice. Mar 13 11:39:24.293794 systemd[2074]: Reached target paths.target - Paths. Mar 13 11:39:24.293827 systemd[2074]: Reached target timers.target - Timers. Mar 13 11:39:24.295039 systemd[2074]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 11:39:24.305154 systemd[2074]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 11:39:24.305260 systemd[2074]: Reached target sockets.target - Sockets. Mar 13 11:39:24.305304 systemd[2074]: Reached target basic.target - Basic System. Mar 13 11:39:24.305327 systemd[2074]: Reached target default.target - Main User Target. Mar 13 11:39:24.305348 systemd[2074]: Startup finished in 140ms. Mar 13 11:39:24.305407 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 11:39:24.306469 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 13 11:39:25.080578 login[2039]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 13 11:39:25.084613 systemd-logind[1879]: New session 1 of user core. Mar 13 11:39:25.095010 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 13 11:39:25.202686 waagent[2037]: 2026-03-13T11:39:25.202601Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Mar 13 11:39:25.207585 waagent[2037]: 2026-03-13T11:39:25.207533Z INFO Daemon Daemon OS: flatcar 4459.2.101 Mar 13 11:39:25.211349 waagent[2037]: 2026-03-13T11:39:25.211309Z INFO Daemon Daemon Python: 3.11.13 Mar 13 11:39:25.215031 waagent[2037]: 2026-03-13T11:39:25.214979Z INFO Daemon Daemon Run daemon Mar 13 11:39:25.218217 waagent[2037]: 2026-03-13T11:39:25.218177Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.101' Mar 13 11:39:25.225233 waagent[2037]: 2026-03-13T11:39:25.225186Z INFO Daemon Daemon Using waagent for provisioning Mar 13 11:39:25.229619 waagent[2037]: 2026-03-13T11:39:25.229571Z INFO Daemon Daemon Activate resource disk Mar 13 11:39:25.233114 waagent[2037]: 2026-03-13T11:39:25.233076Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 13 11:39:25.241415 waagent[2037]: 2026-03-13T11:39:25.241362Z INFO Daemon Daemon Found device: None Mar 13 11:39:25.244847 waagent[2037]: 2026-03-13T11:39:25.244805Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 13 11:39:25.250884 waagent[2037]: 2026-03-13T11:39:25.250845Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 13 11:39:25.259359 waagent[2037]: 2026-03-13T11:39:25.259315Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 13 11:39:25.263788 waagent[2037]: 2026-03-13T11:39:25.263753Z INFO Daemon Daemon Running default provisioning handler Mar 13 11:39:25.273419 waagent[2037]: 2026-03-13T11:39:25.273362Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Mar 13 11:39:25.284044 waagent[2037]: 2026-03-13T11:39:25.283994Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 13 11:39:25.291483 waagent[2037]: 2026-03-13T11:39:25.291443Z INFO Daemon Daemon cloud-init is enabled: False Mar 13 11:39:25.295507 waagent[2037]: 2026-03-13T11:39:25.295477Z INFO Daemon Daemon Copying ovf-env.xml Mar 13 11:39:25.367142 waagent[2037]: 2026-03-13T11:39:25.366936Z INFO Daemon Daemon Successfully mounted dvd Mar 13 11:39:25.395661 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 13 11:39:25.397665 waagent[2037]: 2026-03-13T11:39:25.397601Z INFO Daemon Daemon Detect protocol endpoint Mar 13 11:39:25.401432 waagent[2037]: 2026-03-13T11:39:25.401388Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 13 11:39:25.405780 waagent[2037]: 2026-03-13T11:39:25.405744Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Mar 13 11:39:25.410854 waagent[2037]: 2026-03-13T11:39:25.410817Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 13 11:39:25.414927 waagent[2037]: 2026-03-13T11:39:25.414891Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 13 11:39:25.418949 waagent[2037]: 2026-03-13T11:39:25.418919Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 13 11:39:25.466528 waagent[2037]: 2026-03-13T11:39:25.466484Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 13 11:39:25.471734 waagent[2037]: 2026-03-13T11:39:25.471704Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 13 11:39:25.475815 waagent[2037]: 2026-03-13T11:39:25.475778Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 13 11:39:25.613398 waagent[2037]: 2026-03-13T11:39:25.613307Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 13 11:39:25.618768 waagent[2037]: 2026-03-13T11:39:25.618683Z INFO Daemon Daemon Forcing an update of the goal state. 
Mar 13 11:39:25.626545 waagent[2037]: 2026-03-13T11:39:25.626499Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 13 11:39:25.647401 waagent[2037]: 2026-03-13T11:39:25.647361Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 13 11:39:25.651940 waagent[2037]: 2026-03-13T11:39:25.651902Z INFO Daemon Mar 13 11:39:25.654276 waagent[2037]: 2026-03-13T11:39:25.654245Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 27d8efd9-01a1-4442-987b-080057afbc0b eTag: 8208944281166479673 source: Fabric] Mar 13 11:39:25.663287 waagent[2037]: 2026-03-13T11:39:25.663251Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 13 11:39:25.668343 waagent[2037]: 2026-03-13T11:39:25.668310Z INFO Daemon Mar 13 11:39:25.670458 waagent[2037]: 2026-03-13T11:39:25.670430Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 13 11:39:25.679629 waagent[2037]: 2026-03-13T11:39:25.679598Z INFO Daemon Daemon Downloading artifacts profile blob Mar 13 11:39:25.742578 waagent[2037]: 2026-03-13T11:39:25.742495Z INFO Daemon Downloaded certificate {'thumbprint': '230C23EDC80BBDBB3739240938588F971A5E26D4', 'hasPrivateKey': True} Mar 13 11:39:25.750108 waagent[2037]: 2026-03-13T11:39:25.750062Z INFO Daemon Fetch goal state completed Mar 13 11:39:25.760178 waagent[2037]: 2026-03-13T11:39:25.760142Z INFO Daemon Daemon Starting provisioning Mar 13 11:39:25.764297 waagent[2037]: 2026-03-13T11:39:25.764254Z INFO Daemon Daemon Handle ovf-env.xml. 
Mar 13 11:39:25.768135 waagent[2037]: 2026-03-13T11:39:25.768098Z INFO Daemon Daemon Set hostname [ci-4459.2.101-83511db97f] Mar 13 11:39:25.774358 waagent[2037]: 2026-03-13T11:39:25.774303Z INFO Daemon Daemon Publish hostname [ci-4459.2.101-83511db97f] Mar 13 11:39:25.779236 waagent[2037]: 2026-03-13T11:39:25.779194Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 13 11:39:25.784237 waagent[2037]: 2026-03-13T11:39:25.784202Z INFO Daemon Daemon Primary interface is [eth0] Mar 13 11:39:25.794505 systemd-networkd[1489]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 11:39:25.794511 systemd-networkd[1489]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 11:39:25.794542 systemd-networkd[1489]: eth0: DHCP lease lost Mar 13 11:39:25.796149 waagent[2037]: 2026-03-13T11:39:25.796077Z INFO Daemon Daemon Create user account if not exists Mar 13 11:39:25.800739 waagent[2037]: 2026-03-13T11:39:25.800683Z INFO Daemon Daemon User core already exists, skip useradd Mar 13 11:39:25.805333 waagent[2037]: 2026-03-13T11:39:25.805280Z INFO Daemon Daemon Configure sudoer Mar 13 11:39:25.813659 waagent[2037]: 2026-03-13T11:39:25.813603Z INFO Daemon Daemon Configure sshd Mar 13 11:39:25.821816 waagent[2037]: 2026-03-13T11:39:25.821764Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 13 11:39:25.821948 systemd-networkd[1489]: eth0: DHCPv4 address 10.200.20.31/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 13 11:39:25.831589 waagent[2037]: 2026-03-13T11:39:25.831530Z INFO Daemon Daemon Deploy ssh public key. 
Mar 13 11:39:26.926972 waagent[2037]: 2026-03-13T11:39:26.926900Z INFO Daemon Daemon Provisioning complete Mar 13 11:39:26.942847 waagent[2037]: 2026-03-13T11:39:26.942798Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 13 11:39:26.948391 waagent[2037]: 2026-03-13T11:39:26.948345Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 13 11:39:26.956790 waagent[2037]: 2026-03-13T11:39:26.956752Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Mar 13 11:39:27.062929 waagent[2124]: 2026-03-13T11:39:27.061908Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Mar 13 11:39:27.062929 waagent[2124]: 2026-03-13T11:39:27.062067Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.101 Mar 13 11:39:27.062929 waagent[2124]: 2026-03-13T11:39:27.062107Z INFO ExtHandler ExtHandler Python: 3.11.13 Mar 13 11:39:27.062929 waagent[2124]: 2026-03-13T11:39:27.062144Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Mar 13 11:39:27.102268 waagent[2124]: 2026-03-13T11:39:27.102178Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.101; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Mar 13 11:39:27.102428 waagent[2124]: 2026-03-13T11:39:27.102400Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 13 11:39:27.102474 waagent[2124]: 2026-03-13T11:39:27.102452Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 13 11:39:27.108833 waagent[2124]: 2026-03-13T11:39:27.108777Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 13 11:39:27.114716 waagent[2124]: 2026-03-13T11:39:27.114675Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 13 11:39:27.115184 waagent[2124]: 2026-03-13T11:39:27.115150Z INFO ExtHandler Mar 13 11:39:27.115240 waagent[2124]: 
2026-03-13T11:39:27.115221Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 454f6276-7bc7-4e71-8252-57f7499cc2f2 eTag: 8208944281166479673 source: Fabric] Mar 13 11:39:27.115462 waagent[2124]: 2026-03-13T11:39:27.115435Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 13 11:39:27.115910 waagent[2124]: 2026-03-13T11:39:27.115859Z INFO ExtHandler Mar 13 11:39:27.115956 waagent[2124]: 2026-03-13T11:39:27.115937Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 13 11:39:27.119768 waagent[2124]: 2026-03-13T11:39:27.119736Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 13 11:39:27.179703 waagent[2124]: 2026-03-13T11:39:27.179559Z INFO ExtHandler Downloaded certificate {'thumbprint': '230C23EDC80BBDBB3739240938588F971A5E26D4', 'hasPrivateKey': True} Mar 13 11:39:27.180114 waagent[2124]: 2026-03-13T11:39:27.180075Z INFO ExtHandler Fetch goal state completed Mar 13 11:39:27.194121 waagent[2124]: 2026-03-13T11:39:27.194046Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.4 27 Jan 2026 (Library: OpenSSL 3.4.4 27 Jan 2026) Mar 13 11:39:27.198111 waagent[2124]: 2026-03-13T11:39:27.198048Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2124 Mar 13 11:39:27.198226 waagent[2124]: 2026-03-13T11:39:27.198200Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 13 11:39:27.198516 waagent[2124]: 2026-03-13T11:39:27.198480Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Mar 13 11:39:27.199698 waagent[2124]: 2026-03-13T11:39:27.199654Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.101', '', 'Flatcar Container Linux by Kinvolk'] Mar 13 11:39:27.200073 waagent[2124]: 2026-03-13T11:39:27.200040Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: 
['flatcar', '4459.2.101', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Mar 13 11:39:27.200208 waagent[2124]: 2026-03-13T11:39:27.200185Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Mar 13 11:39:27.200649 waagent[2124]: 2026-03-13T11:39:27.200617Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 13 11:39:27.247342 waagent[2124]: 2026-03-13T11:39:27.247299Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 13 11:39:27.247537 waagent[2124]: 2026-03-13T11:39:27.247509Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 13 11:39:27.252374 waagent[2124]: 2026-03-13T11:39:27.252326Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 13 11:39:27.257601 systemd[1]: Reload requested from client PID 2139 ('systemctl') (unit waagent.service)... Mar 13 11:39:27.257623 systemd[1]: Reloading... Mar 13 11:39:27.338674 zram_generator::config[2196]: No configuration found. Mar 13 11:39:27.482321 systemd[1]: Reloading finished in 224 ms. Mar 13 11:39:27.509984 waagent[2124]: 2026-03-13T11:39:27.509289Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 13 11:39:27.509984 waagent[2124]: 2026-03-13T11:39:27.509433Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 13 11:39:28.272711 waagent[2124]: 2026-03-13T11:39:28.271843Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 13 11:39:28.272711 waagent[2124]: 2026-03-13T11:39:28.272218Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Mar 13 11:39:28.273057 waagent[2124]: 2026-03-13T11:39:28.272935Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 13 11:39:28.273057 waagent[2124]: 2026-03-13T11:39:28.273010Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 13 11:39:28.273204 waagent[2124]: 2026-03-13T11:39:28.273168Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 13 11:39:28.273308 waagent[2124]: 2026-03-13T11:39:28.273261Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 13 11:39:28.273410 waagent[2124]: 2026-03-13T11:39:28.273380Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 13 11:39:28.273410 waagent[2124]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 13 11:39:28.273410 waagent[2124]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 13 11:39:28.273410 waagent[2124]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 13 11:39:28.273410 waagent[2124]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 13 11:39:28.273410 waagent[2124]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 13 11:39:28.273410 waagent[2124]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 13 11:39:28.273938 waagent[2124]: 2026-03-13T11:39:28.273903Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
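The `/proc/net/route` dump above prints IPv4 addresses as little-endian hex, which is the byte order the Linux kernel uses on a little-endian machine such as this aarch64 guest. A short helper recovers the dotted-quad form and confirms the entries match the gateway seen earlier in this log and the two Azure well-known endpoints:

```python
import socket
import struct

def decode_route_addr(hexaddr: str) -> str:
    """Decode a little-endian hex IPv4 address as found in /proc/net/route."""
    return socket.inet_ntoa(struct.pack("<L", int(hexaddr, 16)))

# Hex values taken from the routing table in the log above.
print(decode_route_addr("0114C80A"))  # 10.200.20.1     (default gateway)
print(decode_route_addr("10813FA8"))  # 168.63.129.16   (Azure wireserver)
print(decode_route_addr("FEA9FEA9"))  # 169.254.169.254 (instance metadata)
```

The same decoding applies to the `Mask` column: `00FFFFFF` is 255.255.255.0, i.e. the 10.200.20.0/24 on-link route for eth0.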
Mar 13 11:39:28.274107 waagent[2124]: 2026-03-13T11:39:28.274081Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 13 11:39:28.274436 waagent[2124]: 2026-03-13T11:39:28.274392Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 13 11:39:28.274584 waagent[2124]: 2026-03-13T11:39:28.274546Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 13 11:39:28.274661 waagent[2124]: 2026-03-13T11:39:28.274633Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 13 11:39:28.274971 waagent[2124]: 2026-03-13T11:39:28.274934Z INFO EnvHandler ExtHandler Configure routes Mar 13 11:39:28.275303 waagent[2124]: 2026-03-13T11:39:28.275261Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 13 11:39:28.275432 waagent[2124]: 2026-03-13T11:39:28.275396Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 13 11:39:28.275503 waagent[2124]: 2026-03-13T11:39:28.275476Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 13 11:39:28.276183 waagent[2124]: 2026-03-13T11:39:28.276147Z INFO EnvHandler ExtHandler Gateway:None Mar 13 11:39:28.276392 waagent[2124]: 2026-03-13T11:39:28.276361Z INFO EnvHandler ExtHandler Routes:None Mar 13 11:39:28.282174 waagent[2124]: 2026-03-13T11:39:28.282129Z INFO ExtHandler ExtHandler Mar 13 11:39:28.282348 waagent[2124]: 2026-03-13T11:39:28.282317Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 4de89f11-329c-419a-840d-ad32c86ab556 correlation 6d91c654-211a-4aca-bd66-d087061fce86 created: 2026-03-13T11:38:24.901276Z] Mar 13 11:39:28.282802 waagent[2124]: 2026-03-13T11:39:28.282755Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 13 11:39:28.283368 waagent[2124]: 2026-03-13T11:39:28.283328Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Mar 13 11:39:28.310541 waagent[2124]: 2026-03-13T11:39:28.310495Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Mar 13 11:39:28.310541 waagent[2124]: Try `iptables -h' or 'iptables --help' for more information.) Mar 13 11:39:28.311498 waagent[2124]: 2026-03-13T11:39:28.311402Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: E1D1A69F-E80B-4165-B207-D1091FC4211A;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Mar 13 11:39:28.378381 waagent[2124]: 2026-03-13T11:39:28.378301Z INFO MonitorHandler ExtHandler Network interfaces: Mar 13 11:39:28.378381 waagent[2124]: Executing ['ip', '-a', '-o', 'link']: Mar 13 11:39:28.378381 waagent[2124]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 13 11:39:28.378381 waagent[2124]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6d:24:1f brd ff:ff:ff:ff:ff:ff Mar 13 11:39:28.378381 waagent[2124]: 3: enP61291s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6d:24:1f brd ff:ff:ff:ff:ff:ff\ altname enP61291p0s2 Mar 13 11:39:28.378381 waagent[2124]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 13 11:39:28.378381 waagent[2124]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 13 11:39:28.378381 waagent[2124]: 2: eth0 inet 10.200.20.31/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 13 11:39:28.378381 waagent[2124]: Executing ['ip', '-6', '-a', 
'-o', 'address']: Mar 13 11:39:28.378381 waagent[2124]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 13 11:39:28.378381 waagent[2124]: 2: eth0 inet6 fe80::20d:3aff:fe6d:241f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 13 11:39:28.455434 waagent[2124]: 2026-03-13T11:39:28.455353Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Mar 13 11:39:28.455434 waagent[2124]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 13 11:39:28.455434 waagent[2124]: pkts bytes target prot opt in out source destination Mar 13 11:39:28.455434 waagent[2124]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 13 11:39:28.455434 waagent[2124]: pkts bytes target prot opt in out source destination Mar 13 11:39:28.455434 waagent[2124]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 13 11:39:28.455434 waagent[2124]: pkts bytes target prot opt in out source destination Mar 13 11:39:28.455434 waagent[2124]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 13 11:39:28.455434 waagent[2124]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 13 11:39:28.455434 waagent[2124]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 13 11:39:28.458505 waagent[2124]: 2026-03-13T11:39:28.458463Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 13 11:39:28.458505 waagent[2124]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 13 11:39:28.458505 waagent[2124]: pkts bytes target prot opt in out source destination Mar 13 11:39:28.458505 waagent[2124]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 13 11:39:28.458505 waagent[2124]: pkts bytes target prot opt in out source destination Mar 13 11:39:28.458505 waagent[2124]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes) Mar 13 11:39:28.458505 waagent[2124]: pkts bytes target prot opt in out source destination Mar 13 11:39:28.458505 waagent[2124]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp 
dpt:53 Mar 13 11:39:28.458505 waagent[2124]: 4 595 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 13 11:39:28.458505 waagent[2124]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 13 11:39:28.458975 waagent[2124]: 2026-03-13T11:39:28.458947Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 13 11:39:33.776712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 13 11:39:33.778010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 11:39:33.889678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 11:39:33.897395 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 11:39:33.980985 kubelet[2273]: E0313 11:39:33.980931 2273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 11:39:33.983888 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 11:39:33.984004 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 11:39:33.985967 systemd[1]: kubelet.service: Consumed 117ms CPU time, 105.4M memory peak. Mar 13 11:39:41.239531 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 13 11:39:41.240554 systemd[1]: Started sshd@0-10.200.20.31:22-10.200.16.10:41158.service - OpenSSH per-connection server daemon (10.200.16.10:41158). 
Mar 13 11:39:41.844054 sshd[2281]: Accepted publickey for core from 10.200.16.10 port 41158 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0 Mar 13 11:39:41.845249 sshd-session[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 11:39:41.849090 systemd-logind[1879]: New session 3 of user core. Mar 13 11:39:41.856016 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 13 11:39:42.165207 systemd[1]: Started sshd@1-10.200.20.31:22-10.200.16.10:41174.service - OpenSSH per-connection server daemon (10.200.16.10:41174). Mar 13 11:39:42.579346 sshd[2287]: Accepted publickey for core from 10.200.16.10 port 41174 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0 Mar 13 11:39:42.580498 sshd-session[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 11:39:42.584189 systemd-logind[1879]: New session 4 of user core. Mar 13 11:39:42.594035 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 13 11:39:42.812112 sshd[2290]: Connection closed by 10.200.16.10 port 41174 Mar 13 11:39:42.812705 sshd-session[2287]: pam_unix(sshd:session): session closed for user core Mar 13 11:39:42.816054 systemd[1]: sshd@1-10.200.20.31:22-10.200.16.10:41174.service: Deactivated successfully. Mar 13 11:39:42.817522 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 11:39:42.818131 systemd-logind[1879]: Session 4 logged out. Waiting for processes to exit. Mar 13 11:39:42.819603 systemd-logind[1879]: Removed session 4. Mar 13 11:39:42.901844 systemd[1]: Started sshd@2-10.200.20.31:22-10.200.16.10:41186.service - OpenSSH per-connection server daemon (10.200.16.10:41186). 
Mar 13 11:39:43.327909 sshd[2296]: Accepted publickey for core from 10.200.16.10 port 41186 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0 Mar 13 11:39:43.329017 sshd-session[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 11:39:43.332511 systemd-logind[1879]: New session 5 of user core. Mar 13 11:39:43.339995 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 13 11:39:43.561047 sshd[2299]: Connection closed by 10.200.16.10 port 41186 Mar 13 11:39:43.560949 sshd-session[2296]: pam_unix(sshd:session): session closed for user core Mar 13 11:39:43.565359 systemd[1]: sshd@2-10.200.20.31:22-10.200.16.10:41186.service: Deactivated successfully. Mar 13 11:39:43.567219 systemd[1]: session-5.scope: Deactivated successfully. Mar 13 11:39:43.568807 systemd-logind[1879]: Session 5 logged out. Waiting for processes to exit. Mar 13 11:39:43.569713 systemd-logind[1879]: Removed session 5. Mar 13 11:39:43.647691 systemd[1]: Started sshd@3-10.200.20.31:22-10.200.16.10:41196.service - OpenSSH per-connection server daemon (10.200.16.10:41196). Mar 13 11:39:44.025062 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 13 11:39:44.026548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 11:39:44.066915 sshd[2305]: Accepted publickey for core from 10.200.16.10 port 41196 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0 Mar 13 11:39:44.068059 sshd-session[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 11:39:44.074934 systemd-logind[1879]: New session 6 of user core. Mar 13 11:39:44.078013 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 13 11:39:44.147764 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 11:39:44.150672 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 11:39:44.281525 kubelet[2316]: E0313 11:39:44.281378 2316 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 11:39:44.284002 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 11:39:44.284216 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 11:39:44.284822 systemd[1]: kubelet.service: Consumed 110ms CPU time, 107.6M memory peak.
Mar 13 11:39:44.300249 sshd[2311]: Connection closed by 10.200.16.10 port 41196
Mar 13 11:39:44.301157 sshd-session[2305]: pam_unix(sshd:session): session closed for user core
Mar 13 11:39:44.304410 systemd[1]: sshd@3-10.200.20.31:22-10.200.16.10:41196.service: Deactivated successfully.
Mar 13 11:39:44.306374 systemd[1]: session-6.scope: Deactivated successfully.
Mar 13 11:39:44.307705 systemd-logind[1879]: Session 6 logged out. Waiting for processes to exit.
Mar 13 11:39:44.309847 systemd-logind[1879]: Removed session 6.
Mar 13 11:39:44.390342 systemd[1]: Started sshd@4-10.200.20.31:22-10.200.16.10:41204.service - OpenSSH per-connection server daemon (10.200.16.10:41204).
Mar 13 11:39:44.776909 sshd[2330]: Accepted publickey for core from 10.200.16.10 port 41204 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:39:44.778151 sshd-session[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:39:44.781842 systemd-logind[1879]: New session 7 of user core.
Mar 13 11:39:44.788028 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 13 11:39:45.109845 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 13 11:39:45.110131 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 11:39:45.140507 sudo[2334]: pam_unix(sudo:session): session closed for user root
Mar 13 11:39:45.211909 sshd[2333]: Connection closed by 10.200.16.10 port 41204
Mar 13 11:39:45.211817 sshd-session[2330]: pam_unix(sshd:session): session closed for user core
Mar 13 11:39:45.215972 systemd[1]: sshd@4-10.200.20.31:22-10.200.16.10:41204.service: Deactivated successfully.
Mar 13 11:39:45.217574 systemd[1]: session-7.scope: Deactivated successfully.
Mar 13 11:39:45.219417 systemd-logind[1879]: Session 7 logged out. Waiting for processes to exit.
Mar 13 11:39:45.220531 systemd-logind[1879]: Removed session 7.
Mar 13 11:39:45.302938 systemd[1]: Started sshd@5-10.200.20.31:22-10.200.16.10:41216.service - OpenSSH per-connection server daemon (10.200.16.10:41216).
Mar 13 11:39:45.723395 sshd[2340]: Accepted publickey for core from 10.200.16.10 port 41216 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:39:45.724207 sshd-session[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:39:45.727636 systemd-logind[1879]: New session 8 of user core.
Mar 13 11:39:45.738032 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 13 11:39:45.881176 sudo[2345]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 13 11:39:45.881388 sudo[2345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 11:39:45.889104 sudo[2345]: pam_unix(sudo:session): session closed for user root
Mar 13 11:39:45.893395 sudo[2344]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 13 11:39:45.893999 sudo[2344]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 11:39:45.901993 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 11:39:45.933973 augenrules[2367]: No rules
Mar 13 11:39:45.935370 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 11:39:45.935721 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 11:39:45.937153 sudo[2344]: pam_unix(sudo:session): session closed for user root
Mar 13 11:39:46.014963 sshd[2343]: Connection closed by 10.200.16.10 port 41216
Mar 13 11:39:46.015426 sshd-session[2340]: pam_unix(sshd:session): session closed for user core
Mar 13 11:39:46.019061 systemd-logind[1879]: Session 8 logged out. Waiting for processes to exit.
Mar 13 11:39:46.019964 systemd[1]: sshd@5-10.200.20.31:22-10.200.16.10:41216.service: Deactivated successfully.
Mar 13 11:39:46.021757 systemd[1]: session-8.scope: Deactivated successfully.
Mar 13 11:39:46.023703 systemd-logind[1879]: Removed session 8.
Mar 13 11:39:46.103704 systemd[1]: Started sshd@6-10.200.20.31:22-10.200.16.10:41230.service - OpenSSH per-connection server daemon (10.200.16.10:41230).
Mar 13 11:39:46.249978 chronyd[1856]: Selected source PHC0
Mar 13 11:39:46.521517 sshd[2376]: Accepted publickey for core from 10.200.16.10 port 41230 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:39:46.522256 sshd-session[2376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:39:46.525897 systemd-logind[1879]: New session 9 of user core.
Mar 13 11:39:46.533045 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 13 11:39:46.679635 sudo[2380]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 13 11:39:46.680272 sudo[2380]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 11:39:48.195716 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 13 11:39:48.206198 (dockerd)[2397]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 13 11:39:49.348882 dockerd[2397]: time="2026-03-13T11:39:49.346624543Z" level=info msg="Starting up"
Mar 13 11:39:49.350143 dockerd[2397]: time="2026-03-13T11:39:49.349741867Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 13 11:39:49.358955 dockerd[2397]: time="2026-03-13T11:39:49.358916368Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 13 11:39:49.413777 dockerd[2397]: time="2026-03-13T11:39:49.413728591Z" level=info msg="Loading containers: start."
Mar 13 11:39:49.442893 kernel: Initializing XFRM netlink socket
Mar 13 11:39:49.859388 systemd-networkd[1489]: docker0: Link UP
Mar 13 11:39:49.875713 dockerd[2397]: time="2026-03-13T11:39:49.875611696Z" level=info msg="Loading containers: done."
Mar 13 11:39:49.886511 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4027089676-merged.mount: Deactivated successfully.
Mar 13 11:39:49.895366 dockerd[2397]: time="2026-03-13T11:39:49.895321065Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 13 11:39:49.895471 dockerd[2397]: time="2026-03-13T11:39:49.895420949Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 13 11:39:49.895530 dockerd[2397]: time="2026-03-13T11:39:49.895513776Z" level=info msg="Initializing buildkit"
Mar 13 11:39:49.941548 dockerd[2397]: time="2026-03-13T11:39:49.941499942Z" level=info msg="Completed buildkit initialization"
Mar 13 11:39:49.947070 dockerd[2397]: time="2026-03-13T11:39:49.947013108Z" level=info msg="Daemon has completed initialization"
Mar 13 11:39:49.948819 dockerd[2397]: time="2026-03-13T11:39:49.947354232Z" level=info msg="API listen on /run/docker.sock"
Mar 13 11:39:49.947512 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 13 11:39:50.353275 containerd[1906]: time="2026-03-13T11:39:50.353235553Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 13 11:39:51.066796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount571466130.mount: Deactivated successfully.
Mar 13 11:39:52.451123 containerd[1906]: time="2026-03-13T11:39:52.451069663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:52.453996 containerd[1906]: time="2026-03-13T11:39:52.453959099Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=24583252"
Mar 13 11:39:52.457269 containerd[1906]: time="2026-03-13T11:39:52.457223836Z" level=info msg="ImageCreate event name:\"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:52.461689 containerd[1906]: time="2026-03-13T11:39:52.461643844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:52.462702 containerd[1906]: time="2026-03-13T11:39:52.462661560Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"24579851\" in 2.109386853s"
Mar 13 11:39:52.462842 containerd[1906]: time="2026-03-13T11:39:52.462773979Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\""
Mar 13 11:39:52.463558 containerd[1906]: time="2026-03-13T11:39:52.463480428Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 13 11:39:53.819309 containerd[1906]: time="2026-03-13T11:39:53.819246859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:53.822231 containerd[1906]: time="2026-03-13T11:39:53.822200762Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=19139641"
Mar 13 11:39:53.825185 containerd[1906]: time="2026-03-13T11:39:53.825153848Z" level=info msg="ImageCreate event name:\"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:53.830038 containerd[1906]: time="2026-03-13T11:39:53.829971974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:53.830503 containerd[1906]: time="2026-03-13T11:39:53.830348099Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"20724045\" in 1.36684211s"
Mar 13 11:39:53.830503 containerd[1906]: time="2026-03-13T11:39:53.830378124Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\""
Mar 13 11:39:53.830931 containerd[1906]: time="2026-03-13T11:39:53.830784682Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 13 11:39:54.525037 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 13 11:39:54.526694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 11:39:54.643184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 11:39:54.651205 (kubelet)[2675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 11:39:54.733582 kubelet[2675]: E0313 11:39:54.733524 2675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 11:39:54.735693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 11:39:54.735801 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 11:39:54.736363 systemd[1]: kubelet.service: Consumed 112ms CPU time, 106.3M memory peak.
Mar 13 11:39:55.321138 containerd[1906]: time="2026-03-13T11:39:55.321069969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:55.324284 containerd[1906]: time="2026-03-13T11:39:55.324082815Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=14195544"
Mar 13 11:39:55.327386 containerd[1906]: time="2026-03-13T11:39:55.327359752Z" level=info msg="ImageCreate event name:\"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:55.333571 containerd[1906]: time="2026-03-13T11:39:55.333523560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:55.334053 containerd[1906]: time="2026-03-13T11:39:55.333937691Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"15779966\" in 1.503123415s"
Mar 13 11:39:55.334053 containerd[1906]: time="2026-03-13T11:39:55.333969988Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\""
Mar 13 11:39:55.334544 containerd[1906]: time="2026-03-13T11:39:55.334516748Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 13 11:39:56.869567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924891266.mount: Deactivated successfully.
Mar 13 11:39:57.062778 containerd[1906]: time="2026-03-13T11:39:57.062713400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:57.066284 containerd[1906]: time="2026-03-13T11:39:57.066252228Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=22697088"
Mar 13 11:39:57.069248 containerd[1906]: time="2026-03-13T11:39:57.069219576Z" level=info msg="ImageCreate event name:\"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:57.073606 containerd[1906]: time="2026-03-13T11:39:57.073573216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:57.074144 containerd[1906]: time="2026-03-13T11:39:57.073940305Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"22696107\" in 1.739388371s"
Mar 13 11:39:57.074144 containerd[1906]: time="2026-03-13T11:39:57.073971602Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\""
Mar 13 11:39:57.074613 containerd[1906]: time="2026-03-13T11:39:57.074592805Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 13 11:39:57.685852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1812295824.mount: Deactivated successfully.
Mar 13 11:39:58.725718 containerd[1906]: time="2026-03-13T11:39:58.725648917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:58.730944 containerd[1906]: time="2026-03-13T11:39:58.730906486Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406"
Mar 13 11:39:58.734081 containerd[1906]: time="2026-03-13T11:39:58.734029504Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:58.740812 containerd[1906]: time="2026-03-13T11:39:58.740771914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:58.741475 containerd[1906]: time="2026-03-13T11:39:58.741446912Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.666830322s"
Mar 13 11:39:58.741504 containerd[1906]: time="2026-03-13T11:39:58.741479225Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Mar 13 11:39:58.742203 containerd[1906]: time="2026-03-13T11:39:58.742176960Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 13 11:39:59.385551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683720159.mount: Deactivated successfully.
Mar 13 11:39:59.409352 containerd[1906]: time="2026-03-13T11:39:59.408809181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:59.411830 containerd[1906]: time="2026-03-13T11:39:59.411802074Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Mar 13 11:39:59.414984 containerd[1906]: time="2026-03-13T11:39:59.414957165Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:59.419961 containerd[1906]: time="2026-03-13T11:39:59.419932033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:39:59.420277 containerd[1906]: time="2026-03-13T11:39:59.420246823Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 678.039182ms"
Mar 13 11:39:59.420277 containerd[1906]: time="2026-03-13T11:39:59.420277216Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Mar 13 11:39:59.420819 containerd[1906]: time="2026-03-13T11:39:59.420772030Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 13 11:40:00.112678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3582958643.mount: Deactivated successfully.
Mar 13 11:40:01.271095 containerd[1906]: time="2026-03-13T11:40:01.271044672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:40:01.274230 containerd[1906]: time="2026-03-13T11:40:01.274194780Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21125515"
Mar 13 11:40:01.277147 containerd[1906]: time="2026-03-13T11:40:01.277118237Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:40:01.282383 containerd[1906]: time="2026-03-13T11:40:01.282343860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 11:40:01.283067 containerd[1906]: time="2026-03-13T11:40:01.282936542Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.862137567s"
Mar 13 11:40:01.283067 containerd[1906]: time="2026-03-13T11:40:01.282973800Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
Mar 13 11:40:04.360723 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 11:40:04.361215 systemd[1]: kubelet.service: Consumed 112ms CPU time, 106.3M memory peak.
Mar 13 11:40:04.363515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 11:40:04.386317 systemd[1]: Reload requested from client PID 2837 ('systemctl') (unit session-9.scope)...
Mar 13 11:40:04.386338 systemd[1]: Reloading...
Mar 13 11:40:04.496925 zram_generator::config[2882]: No configuration found.
Mar 13 11:40:04.655696 systemd[1]: Reloading finished in 269 ms.
Mar 13 11:40:04.688313 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 13 11:40:04.688373 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 13 11:40:04.688590 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 11:40:04.688631 systemd[1]: kubelet.service: Consumed 69ms CPU time, 94.9M memory peak.
Mar 13 11:40:04.689784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 11:40:04.898267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 11:40:04.906119 (kubelet)[2949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 13 11:40:04.932539 kubelet[2949]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 13 11:40:04.932539 kubelet[2949]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 11:40:05.063241 kubelet[2949]: I0313 11:40:05.063157 2949 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 11:40:05.360575 kubelet[2949]: I0313 11:40:05.360532 2949 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 13 11:40:05.360575 kubelet[2949]: I0313 11:40:05.360566 2949 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 11:40:05.360575 kubelet[2949]: I0313 11:40:05.360586 2949 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 13 11:40:05.360575 kubelet[2949]: I0313 11:40:05.360591 2949 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 13 11:40:05.360894 kubelet[2949]: I0313 11:40:05.360866 2949 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 13 11:40:05.531778 kubelet[2949]: E0313 11:40:05.531740 2949 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 13 11:40:05.532548 kubelet[2949]: I0313 11:40:05.532494 2949 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 13 11:40:05.535904 kubelet[2949]: I0313 11:40:05.535833 2949 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 11:40:05.538380 kubelet[2949]: I0313 11:40:05.538360 2949 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 13 11:40:05.538540 kubelet[2949]: I0313 11:40:05.538518 2949 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 13 11:40:05.538653 kubelet[2949]: I0313 11:40:05.538538 2949 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.101-83511db97f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 11:40:05.538653 kubelet[2949]: I0313 11:40:05.538650 2949 topology_manager.go:138] "Creating topology manager with none policy"
Mar 13 11:40:05.538653 kubelet[2949]: I0313 11:40:05.538656 2949 container_manager_linux.go:306] "Creating device plugin manager"
Mar 13 11:40:05.538807 kubelet[2949]: I0313 11:40:05.538755 2949 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 13 11:40:05.543732 kubelet[2949]: I0313 11:40:05.543714 2949 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 11:40:05.544763 kubelet[2949]: I0313 11:40:05.544744 2949 kubelet.go:475] "Attempting to sync node with API server"
Mar 13 11:40:05.544763 kubelet[2949]: I0313 11:40:05.544762 2949 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 11:40:05.545794 kubelet[2949]: E0313 11:40:05.545283 2949 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.101-83511db97f&limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 13 11:40:05.545794 kubelet[2949]: I0313 11:40:05.545294 2949 kubelet.go:387] "Adding apiserver pod source"
Mar 13 11:40:05.545794 kubelet[2949]: I0313 11:40:05.545316 2949 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 11:40:05.546203 kubelet[2949]: I0313 11:40:05.546189 2949 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 13 11:40:05.546685 kubelet[2949]: I0313 11:40:05.546668 2949 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 13 11:40:05.546763 kubelet[2949]: I0313 11:40:05.546756 2949 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 13 11:40:05.546835 kubelet[2949]: W0313 11:40:05.546828 2949 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 13 11:40:05.549214 kubelet[2949]: I0313 11:40:05.549198 2949 server.go:1262] "Started kubelet"
Mar 13 11:40:05.549416 kubelet[2949]: E0313 11:40:05.549399 2949 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 13 11:40:05.550596 kubelet[2949]: I0313 11:40:05.550569 2949 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 11:40:05.551159 kubelet[2949]: I0313 11:40:05.551136 2949 server.go:310] "Adding debug handlers to kubelet server"
Mar 13 11:40:05.551984 kubelet[2949]: I0313 11:40:05.551744 2949 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 11:40:05.551984 kubelet[2949]: I0313 11:40:05.551801 2949 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 13 11:40:05.554985 kubelet[2949]: I0313 11:40:05.554945 2949 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 11:40:05.557616 kubelet[2949]: I0313 11:40:05.556319 2949 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 13 11:40:05.557616 kubelet[2949]: E0313 11:40:05.554612 2949 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.31:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.31:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.101-83511db97f.189c63ba68b92e24 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.101-83511db97f,UID:ci-4459.2.101-83511db97f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.101-83511db97f,},FirstTimestamp:2026-03-13 11:40:05.549166116 +0000 UTC m=+0.640421564,LastTimestamp:2026-03-13 11:40:05.549166116 +0000 UTC m=+0.640421564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.101-83511db97f,}"
Mar 13 11:40:05.557981 kubelet[2949]: I0313 11:40:05.557951 2949 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 13 11:40:05.560264 kubelet[2949]: E0313 11:40:05.560249 2949 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.101-83511db97f\" not found"
Mar 13 11:40:05.560402 kubelet[2949]: I0313 11:40:05.560393 2949 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 13 11:40:05.561720 kubelet[2949]: E0313 11:40:05.561695 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.101-83511db97f?timeout=10s\": dial tcp 10.200.20.31:6443: connect: connection refused" interval="200ms"
Mar 13 11:40:05.564023 kubelet[2949]: I0313 11:40:05.562157 2949 factory.go:223] Registration of the systemd container factory successfully
Mar 13 11:40:05.564375 kubelet[2949]: I0313 11:40:05.564334 2949 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 13 11:40:05.564536 kubelet[2949]: I0313 11:40:05.562490 2949 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 13 11:40:05.564595 kubelet[2949]: E0313 11:40:05.563645 2949 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 13 11:40:05.564638 kubelet[2949]: I0313 11:40:05.562571 2949 reconciler.go:29] "Reconciler: start to sync state"
Mar 13 11:40:05.565434 kubelet[2949]: E0313 11:40:05.565416 2949 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 13 11:40:05.565781 kubelet[2949]: I0313 11:40:05.565765 2949 factory.go:223] Registration of the containerd container factory successfully
Mar 13 11:40:05.588308 kubelet[2949]: I0313 11:40:05.588288 2949 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 13 11:40:05.588577 kubelet[2949]: I0313 11:40:05.588563 2949 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 13 11:40:05.588886 kubelet[2949]: I0313 11:40:05.588751 2949 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 11:40:05.593222 kubelet[2949]: I0313 11:40:05.593082 2949 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 13 11:40:05.594585 kubelet[2949]: I0313 11:40:05.594452 2949 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 13 11:40:05.594585 kubelet[2949]: I0313 11:40:05.594477 2949 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 13 11:40:05.594585 kubelet[2949]: I0313 11:40:05.594496 2949 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 13 11:40:05.594883 kubelet[2949]: E0313 11:40:05.594687 2949 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 13 11:40:05.595541 kubelet[2949]: E0313 11:40:05.595507 2949 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 13 11:40:05.596012 kubelet[2949]: I0313 11:40:05.596000 2949 policy_none.go:49] "None policy: Start"
Mar 13 11:40:05.596464 kubelet[2949]: I0313 11:40:05.596451 2949 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 13 11:40:05.596550 kubelet[2949]: I0313 11:40:05.596541 2949 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 13 11:40:05.601396 kubelet[2949]: I0313 11:40:05.601380 2949 policy_none.go:47] "Start"
Mar 13 11:40:05.605346 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 13 11:40:05.612810 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 13 11:40:05.625051 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 13 11:40:05.626235 kubelet[2949]: E0313 11:40:05.626214 2949 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 11:40:05.626414 kubelet[2949]: I0313 11:40:05.626395 2949 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 11:40:05.626457 kubelet[2949]: I0313 11:40:05.626409 2949 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 11:40:05.627627 kubelet[2949]: I0313 11:40:05.627299 2949 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 11:40:05.628385 kubelet[2949]: E0313 11:40:05.628358 2949 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 11:40:05.628437 kubelet[2949]: E0313 11:40:05.628409 2949 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.101-83511db97f\" not found" Mar 13 11:40:05.708022 systemd[1]: Created slice kubepods-burstable-pod475041211d03580cadac77a9c8b2b7eb.slice - libcontainer container kubepods-burstable-pod475041211d03580cadac77a9c8b2b7eb.slice. Mar 13 11:40:05.716268 kubelet[2949]: E0313 11:40:05.715681 2949 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.101-83511db97f\" not found" node="ci-4459.2.101-83511db97f" Mar 13 11:40:05.719824 systemd[1]: Created slice kubepods-burstable-pod39c194385d367edc58fa7e5134a64b26.slice - libcontainer container kubepods-burstable-pod39c194385d367edc58fa7e5134a64b26.slice. 
Mar 13 11:40:05.722427 kubelet[2949]: E0313 11:40:05.722390 2949 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.101-83511db97f\" not found" node="ci-4459.2.101-83511db97f" Mar 13 11:40:05.724225 systemd[1]: Created slice kubepods-burstable-pod10759efb822f31c4e21cf56eda0215e3.slice - libcontainer container kubepods-burstable-pod10759efb822f31c4e21cf56eda0215e3.slice. Mar 13 11:40:05.725571 kubelet[2949]: E0313 11:40:05.725547 2949 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.101-83511db97f\" not found" node="ci-4459.2.101-83511db97f" Mar 13 11:40:05.727521 kubelet[2949]: I0313 11:40:05.727495 2949 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.101-83511db97f" Mar 13 11:40:05.727849 kubelet[2949]: E0313 11:40:05.727828 2949 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.31:6443/api/v1/nodes\": dial tcp 10.200.20.31:6443: connect: connection refused" node="ci-4459.2.101-83511db97f" Mar 13 11:40:05.764606 kubelet[2949]: E0313 11:40:05.764572 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.101-83511db97f?timeout=10s\": dial tcp 10.200.20.31:6443: connect: connection refused" interval="400ms" Mar 13 11:40:05.866369 kubelet[2949]: I0313 11:40:05.866245 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/475041211d03580cadac77a9c8b2b7eb-ca-certs\") pod \"kube-apiserver-ci-4459.2.101-83511db97f\" (UID: \"475041211d03580cadac77a9c8b2b7eb\") " pod="kube-system/kube-apiserver-ci-4459.2.101-83511db97f" Mar 13 11:40:05.866369 kubelet[2949]: I0313 11:40:05.866287 2949 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/475041211d03580cadac77a9c8b2b7eb-k8s-certs\") pod \"kube-apiserver-ci-4459.2.101-83511db97f\" (UID: \"475041211d03580cadac77a9c8b2b7eb\") " pod="kube-system/kube-apiserver-ci-4459.2.101-83511db97f" Mar 13 11:40:05.866369 kubelet[2949]: I0313 11:40:05.866301 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/475041211d03580cadac77a9c8b2b7eb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.101-83511db97f\" (UID: \"475041211d03580cadac77a9c8b2b7eb\") " pod="kube-system/kube-apiserver-ci-4459.2.101-83511db97f" Mar 13 11:40:05.866369 kubelet[2949]: I0313 11:40:05.866319 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39c194385d367edc58fa7e5134a64b26-ca-certs\") pod \"kube-controller-manager-ci-4459.2.101-83511db97f\" (UID: \"39c194385d367edc58fa7e5134a64b26\") " pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:05.866369 kubelet[2949]: I0313 11:40:05.866329 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/39c194385d367edc58fa7e5134a64b26-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.101-83511db97f\" (UID: \"39c194385d367edc58fa7e5134a64b26\") " pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:05.866551 kubelet[2949]: I0313 11:40:05.866338 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39c194385d367edc58fa7e5134a64b26-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.101-83511db97f\" (UID: \"39c194385d367edc58fa7e5134a64b26\") " 
pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:05.866551 kubelet[2949]: I0313 11:40:05.866351 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39c194385d367edc58fa7e5134a64b26-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.101-83511db97f\" (UID: \"39c194385d367edc58fa7e5134a64b26\") " pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:05.866791 kubelet[2949]: I0313 11:40:05.866766 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10759efb822f31c4e21cf56eda0215e3-kubeconfig\") pod \"kube-scheduler-ci-4459.2.101-83511db97f\" (UID: \"10759efb822f31c4e21cf56eda0215e3\") " pod="kube-system/kube-scheduler-ci-4459.2.101-83511db97f" Mar 13 11:40:05.866791 kubelet[2949]: I0313 11:40:05.866786 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39c194385d367edc58fa7e5134a64b26-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.101-83511db97f\" (UID: \"39c194385d367edc58fa7e5134a64b26\") " pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:05.902506 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Mar 13 11:40:05.930357 kubelet[2949]: I0313 11:40:05.930329 2949 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.101-83511db97f" Mar 13 11:40:05.930676 kubelet[2949]: E0313 11:40:05.930650 2949 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.31:6443/api/v1/nodes\": dial tcp 10.200.20.31:6443: connect: connection refused" node="ci-4459.2.101-83511db97f" Mar 13 11:40:06.171161 kubelet[2949]: E0313 11:40:06.165256 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.101-83511db97f?timeout=10s\": dial tcp 10.200.20.31:6443: connect: connection refused" interval="800ms" Mar 13 11:40:06.173859 containerd[1906]: time="2026-03-13T11:40:06.173676880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.101-83511db97f,Uid:475041211d03580cadac77a9c8b2b7eb,Namespace:kube-system,Attempt:0,}" Mar 13 11:40:06.180855 containerd[1906]: time="2026-03-13T11:40:06.180678492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.101-83511db97f,Uid:39c194385d367edc58fa7e5134a64b26,Namespace:kube-system,Attempt:0,}" Mar 13 11:40:06.189022 containerd[1906]: time="2026-03-13T11:40:06.188851405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.101-83511db97f,Uid:10759efb822f31c4e21cf56eda0215e3,Namespace:kube-system,Attempt:0,}" Mar 13 11:40:06.333175 kubelet[2949]: I0313 11:40:06.333123 2949 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.101-83511db97f" Mar 13 11:40:06.333504 kubelet[2949]: E0313 11:40:06.333473 2949 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.31:6443/api/v1/nodes\": dial tcp 10.200.20.31:6443: connect: connection refused" node="ci-4459.2.101-83511db97f" Mar 13 
11:40:06.588233 kubelet[2949]: E0313 11:40:06.588189 2949 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 11:40:06.621240 kubelet[2949]: E0313 11:40:06.621200 2949 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.101-83511db97f&limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 11:40:06.751116 kubelet[2949]: E0313 11:40:06.751066 2949 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 11:40:06.780993 kubelet[2949]: E0313 11:40:06.780959 2949 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 11:40:06.965800 kubelet[2949]: E0313 11:40:06.965685 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.101-83511db97f?timeout=10s\": dial tcp 10.200.20.31:6443: connect: connection refused" interval="1.6s" Mar 13 11:40:07.135350 kubelet[2949]: I0313 11:40:07.135286 
2949 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.101-83511db97f" Mar 13 11:40:07.136097 kubelet[2949]: E0313 11:40:07.136073 2949 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.31:6443/api/v1/nodes\": dial tcp 10.200.20.31:6443: connect: connection refused" node="ci-4459.2.101-83511db97f" Mar 13 11:40:07.404185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2424639448.mount: Deactivated successfully. Mar 13 11:40:07.426918 containerd[1906]: time="2026-03-13T11:40:07.426613562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 11:40:07.436940 containerd[1906]: time="2026-03-13T11:40:07.436892918Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 13 11:40:07.439762 containerd[1906]: time="2026-03-13T11:40:07.439731686Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 11:40:07.443444 containerd[1906]: time="2026-03-13T11:40:07.442959901Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 11:40:07.448684 containerd[1906]: time="2026-03-13T11:40:07.448655916Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 11:40:07.451811 containerd[1906]: time="2026-03-13T11:40:07.451778567Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 11:40:07.455183 
containerd[1906]: time="2026-03-13T11:40:07.455155044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 11:40:07.455733 containerd[1906]: time="2026-03-13T11:40:07.455707106Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.275639303s" Mar 13 11:40:07.460079 containerd[1906]: time="2026-03-13T11:40:07.460051397Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 11:40:07.464761 containerd[1906]: time="2026-03-13T11:40:07.464716580Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.276412109s" Mar 13 11:40:07.506306 containerd[1906]: time="2026-03-13T11:40:07.506022756Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.310339819s" Mar 13 11:40:07.526251 containerd[1906]: time="2026-03-13T11:40:07.526201077Z" level=info msg="connecting to shim 22cd70c640e0df3997d6d3f75701eafda080b32b0dde4cac384e73376bd50231" 
address="unix:///run/containerd/s/9722a3c462dd2770ecf42ba9566dc47be163b8ee58c681af2cd08ed2361c0ec3" namespace=k8s.io protocol=ttrpc version=3 Mar 13 11:40:07.526623 containerd[1906]: time="2026-03-13T11:40:07.526451159Z" level=info msg="connecting to shim 2d65f14063f83d58307b1dc5170a413efd4873f6eab8ccfa368966c5c85a0f30" address="unix:///run/containerd/s/aad71656b395155b44a24aede5bb2f50e0aeb00a1c48392a0af8df06bb0c760d" namespace=k8s.io protocol=ttrpc version=3 Mar 13 11:40:07.548090 systemd[1]: Started cri-containerd-2d65f14063f83d58307b1dc5170a413efd4873f6eab8ccfa368966c5c85a0f30.scope - libcontainer container 2d65f14063f83d58307b1dc5170a413efd4873f6eab8ccfa368966c5c85a0f30. Mar 13 11:40:07.552285 systemd[1]: Started cri-containerd-22cd70c640e0df3997d6d3f75701eafda080b32b0dde4cac384e73376bd50231.scope - libcontainer container 22cd70c640e0df3997d6d3f75701eafda080b32b0dde4cac384e73376bd50231. Mar 13 11:40:07.588543 containerd[1906]: time="2026-03-13T11:40:07.588370010Z" level=info msg="connecting to shim 09c913c7eca1a2ed988241a4900b394e3406a5b83ab180331d2dc0cd904563a8" address="unix:///run/containerd/s/4b2ee6e01854af6f1e7bf80c8e7a6565480a42ddfe117bb825b378153b58ff77" namespace=k8s.io protocol=ttrpc version=3 Mar 13 11:40:07.597802 containerd[1906]: time="2026-03-13T11:40:07.597628574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.101-83511db97f,Uid:39c194385d367edc58fa7e5134a64b26,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d65f14063f83d58307b1dc5170a413efd4873f6eab8ccfa368966c5c85a0f30\"" Mar 13 11:40:07.601597 containerd[1906]: time="2026-03-13T11:40:07.601301030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.101-83511db97f,Uid:475041211d03580cadac77a9c8b2b7eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"22cd70c640e0df3997d6d3f75701eafda080b32b0dde4cac384e73376bd50231\"" Mar 13 11:40:07.607740 containerd[1906]: time="2026-03-13T11:40:07.607711242Z" 
level=info msg="CreateContainer within sandbox \"2d65f14063f83d58307b1dc5170a413efd4873f6eab8ccfa368966c5c85a0f30\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 13 11:40:07.614619 containerd[1906]: time="2026-03-13T11:40:07.614571648Z" level=info msg="CreateContainer within sandbox \"22cd70c640e0df3997d6d3f75701eafda080b32b0dde4cac384e73376bd50231\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 13 11:40:07.616090 systemd[1]: Started cri-containerd-09c913c7eca1a2ed988241a4900b394e3406a5b83ab180331d2dc0cd904563a8.scope - libcontainer container 09c913c7eca1a2ed988241a4900b394e3406a5b83ab180331d2dc0cd904563a8. Mar 13 11:40:07.637162 containerd[1906]: time="2026-03-13T11:40:07.637073181Z" level=info msg="Container ef2d8cd6bb6f4fc67be1f62cad80727eb535d7132ac78e31b2b498592a62c8a0: CDI devices from CRI Config.CDIDevices: []" Mar 13 11:40:07.643145 kubelet[2949]: E0313 11:40:07.643075 2949 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.31:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 11:40:07.643569 containerd[1906]: time="2026-03-13T11:40:07.643539963Z" level=info msg="Container a5fb6a9b2a425839bddb89b4e93af94990ff75d855779d60632e27e59148e2a9: CDI devices from CRI Config.CDIDevices: []" Mar 13 11:40:07.660995 containerd[1906]: time="2026-03-13T11:40:07.660671141Z" level=info msg="CreateContainer within sandbox \"2d65f14063f83d58307b1dc5170a413efd4873f6eab8ccfa368966c5c85a0f30\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ef2d8cd6bb6f4fc67be1f62cad80727eb535d7132ac78e31b2b498592a62c8a0\"" Mar 13 11:40:07.663127 containerd[1906]: time="2026-03-13T11:40:07.662119390Z" level=info msg="StartContainer for 
\"ef2d8cd6bb6f4fc67be1f62cad80727eb535d7132ac78e31b2b498592a62c8a0\"" Mar 13 11:40:07.663127 containerd[1906]: time="2026-03-13T11:40:07.663010345Z" level=info msg="connecting to shim ef2d8cd6bb6f4fc67be1f62cad80727eb535d7132ac78e31b2b498592a62c8a0" address="unix:///run/containerd/s/aad71656b395155b44a24aede5bb2f50e0aeb00a1c48392a0af8df06bb0c760d" protocol=ttrpc version=3 Mar 13 11:40:07.669633 containerd[1906]: time="2026-03-13T11:40:07.669563002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.101-83511db97f,Uid:10759efb822f31c4e21cf56eda0215e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"09c913c7eca1a2ed988241a4900b394e3406a5b83ab180331d2dc0cd904563a8\"" Mar 13 11:40:07.678997 containerd[1906]: time="2026-03-13T11:40:07.678959732Z" level=info msg="CreateContainer within sandbox \"09c913c7eca1a2ed988241a4900b394e3406a5b83ab180331d2dc0cd904563a8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 13 11:40:07.682376 containerd[1906]: time="2026-03-13T11:40:07.682303935Z" level=info msg="CreateContainer within sandbox \"22cd70c640e0df3997d6d3f75701eafda080b32b0dde4cac384e73376bd50231\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a5fb6a9b2a425839bddb89b4e93af94990ff75d855779d60632e27e59148e2a9\"" Mar 13 11:40:07.683142 containerd[1906]: time="2026-03-13T11:40:07.682845229Z" level=info msg="StartContainer for \"a5fb6a9b2a425839bddb89b4e93af94990ff75d855779d60632e27e59148e2a9\"" Mar 13 11:40:07.683023 systemd[1]: Started cri-containerd-ef2d8cd6bb6f4fc67be1f62cad80727eb535d7132ac78e31b2b498592a62c8a0.scope - libcontainer container ef2d8cd6bb6f4fc67be1f62cad80727eb535d7132ac78e31b2b498592a62c8a0. 
Mar 13 11:40:07.683637 containerd[1906]: time="2026-03-13T11:40:07.683598986Z" level=info msg="connecting to shim a5fb6a9b2a425839bddb89b4e93af94990ff75d855779d60632e27e59148e2a9" address="unix:///run/containerd/s/9722a3c462dd2770ecf42ba9566dc47be163b8ee58c681af2cd08ed2361c0ec3" protocol=ttrpc version=3 Mar 13 11:40:07.700487 containerd[1906]: time="2026-03-13T11:40:07.700441241Z" level=info msg="Container f13a6a1c7e281dc6ae875f08bdc456f919324bd634e88e82882df9c4555f28cc: CDI devices from CRI Config.CDIDevices: []" Mar 13 11:40:07.703148 systemd[1]: Started cri-containerd-a5fb6a9b2a425839bddb89b4e93af94990ff75d855779d60632e27e59148e2a9.scope - libcontainer container a5fb6a9b2a425839bddb89b4e93af94990ff75d855779d60632e27e59148e2a9. Mar 13 11:40:07.724449 containerd[1906]: time="2026-03-13T11:40:07.724361693Z" level=info msg="CreateContainer within sandbox \"09c913c7eca1a2ed988241a4900b394e3406a5b83ab180331d2dc0cd904563a8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f13a6a1c7e281dc6ae875f08bdc456f919324bd634e88e82882df9c4555f28cc\"" Mar 13 11:40:07.724903 containerd[1906]: time="2026-03-13T11:40:07.724840880Z" level=info msg="StartContainer for \"f13a6a1c7e281dc6ae875f08bdc456f919324bd634e88e82882df9c4555f28cc\"" Mar 13 11:40:07.727299 containerd[1906]: time="2026-03-13T11:40:07.727244671Z" level=info msg="connecting to shim f13a6a1c7e281dc6ae875f08bdc456f919324bd634e88e82882df9c4555f28cc" address="unix:///run/containerd/s/4b2ee6e01854af6f1e7bf80c8e7a6565480a42ddfe117bb825b378153b58ff77" protocol=ttrpc version=3 Mar 13 11:40:07.743953 containerd[1906]: time="2026-03-13T11:40:07.743222443Z" level=info msg="StartContainer for \"ef2d8cd6bb6f4fc67be1f62cad80727eb535d7132ac78e31b2b498592a62c8a0\" returns successfully" Mar 13 11:40:07.756155 systemd[1]: Started cri-containerd-f13a6a1c7e281dc6ae875f08bdc456f919324bd634e88e82882df9c4555f28cc.scope - libcontainer container f13a6a1c7e281dc6ae875f08bdc456f919324bd634e88e82882df9c4555f28cc. 
Mar 13 11:40:07.766254 containerd[1906]: time="2026-03-13T11:40:07.766142992Z" level=info msg="StartContainer for \"a5fb6a9b2a425839bddb89b4e93af94990ff75d855779d60632e27e59148e2a9\" returns successfully" Mar 13 11:40:07.816733 containerd[1906]: time="2026-03-13T11:40:07.816611816Z" level=info msg="StartContainer for \"f13a6a1c7e281dc6ae875f08bdc456f919324bd634e88e82882df9c4555f28cc\" returns successfully" Mar 13 11:40:08.414896 update_engine[1884]: I20260313 11:40:08.413923 1884 update_attempter.cc:509] Updating boot flags... Mar 13 11:40:08.618562 kubelet[2949]: E0313 11:40:08.618142 2949 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.101-83511db97f\" not found" node="ci-4459.2.101-83511db97f" Mar 13 11:40:08.625893 kubelet[2949]: E0313 11:40:08.625517 2949 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.101-83511db97f\" not found" node="ci-4459.2.101-83511db97f" Mar 13 11:40:08.633896 kubelet[2949]: E0313 11:40:08.633343 2949 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.101-83511db97f\" not found" node="ci-4459.2.101-83511db97f" Mar 13 11:40:08.739178 kubelet[2949]: I0313 11:40:08.738943 2949 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.101-83511db97f" Mar 13 11:40:09.183224 kubelet[2949]: E0313 11:40:09.183186 2949 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.101-83511db97f\" not found" node="ci-4459.2.101-83511db97f" Mar 13 11:40:09.291132 kubelet[2949]: I0313 11:40:09.290859 2949 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.101-83511db97f" Mar 13 11:40:09.361635 kubelet[2949]: I0313 11:40:09.361592 2949 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.101-83511db97f" Mar 13 
11:40:09.374930 kubelet[2949]: E0313 11:40:09.374893 2949 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.101-83511db97f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.101-83511db97f" Mar 13 11:40:09.374930 kubelet[2949]: I0313 11:40:09.374923 2949 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:09.379194 kubelet[2949]: E0313 11:40:09.379164 2949 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.101-83511db97f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:09.379194 kubelet[2949]: I0313 11:40:09.379189 2949 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.101-83511db97f" Mar 13 11:40:09.381091 kubelet[2949]: E0313 11:40:09.381070 2949 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.101-83511db97f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.101-83511db97f" Mar 13 11:40:09.547234 kubelet[2949]: I0313 11:40:09.547201 2949 apiserver.go:52] "Watching apiserver" Mar 13 11:40:09.564879 kubelet[2949]: I0313 11:40:09.564834 2949 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 11:40:09.632203 kubelet[2949]: I0313 11:40:09.632174 2949 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.101-83511db97f" Mar 13 11:40:09.632534 kubelet[2949]: I0313 11:40:09.632513 2949 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:09.634092 kubelet[2949]: I0313 11:40:09.634073 2949 kubelet.go:3220] "Creating a 
mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.101-83511db97f" Mar 13 11:40:09.635511 kubelet[2949]: E0313 11:40:09.635489 2949 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.101-83511db97f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.101-83511db97f" Mar 13 11:40:09.635820 kubelet[2949]: E0313 11:40:09.635799 2949 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.101-83511db97f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.101-83511db97f" Mar 13 11:40:09.637261 kubelet[2949]: E0313 11:40:09.637240 2949 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.101-83511db97f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:11.413044 kubelet[2949]: I0313 11:40:11.412970 2949 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:11.422382 kubelet[2949]: I0313 11:40:11.422346 2949 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 11:40:11.515790 systemd[1]: Reload requested from client PID 3293 ('systemctl') (unit session-9.scope)... Mar 13 11:40:11.515810 systemd[1]: Reloading... Mar 13 11:40:11.602086 zram_generator::config[3338]: No configuration found. Mar 13 11:40:11.780264 systemd[1]: Reloading finished in 264 ms. Mar 13 11:40:11.801981 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 11:40:11.814826 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 11:40:11.815294 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 11:40:11.815437 systemd[1]: kubelet.service: Consumed 634ms CPU time, 120.7M memory peak. Mar 13 11:40:11.817642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 11:40:11.926159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 11:40:11.935176 (kubelet)[3404]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 11:40:12.237565 kubelet[3404]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 11:40:12.237565 kubelet[3404]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 11:40:12.238131 kubelet[3404]: I0313 11:40:12.237292 3404 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 11:40:12.242695 kubelet[3404]: I0313 11:40:12.242663 3404 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 13 11:40:12.242695 kubelet[3404]: I0313 11:40:12.242689 3404 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 11:40:12.242795 kubelet[3404]: I0313 11:40:12.242711 3404 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 11:40:12.242795 kubelet[3404]: I0313 11:40:12.242716 3404 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 13 11:40:12.242938 kubelet[3404]: I0313 11:40:12.242921 3404 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 11:40:12.243983 kubelet[3404]: I0313 11:40:12.243885 3404 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 13 11:40:12.248394 kubelet[3404]: I0313 11:40:12.248350 3404 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 11:40:12.251482 kubelet[3404]: I0313 11:40:12.251462 3404 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 11:40:12.254316 kubelet[3404]: I0313 11:40:12.254040 3404 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 13 11:40:12.255755 kubelet[3404]: I0313 11:40:12.255724 3404 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 11:40:12.256237 kubelet[3404]: I0313 11:40:12.255839 3404 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4459.2.101-83511db97f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 11:40:12.256237 kubelet[3404]: I0313 11:40:12.256003 3404 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 11:40:12.256237 kubelet[3404]: I0313 11:40:12.256010 3404 container_manager_linux.go:306] "Creating device plugin manager" Mar 13 11:40:12.256237 kubelet[3404]: I0313 11:40:12.256032 3404 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 11:40:12.256237 kubelet[3404]: I0313 11:40:12.256196 3404 
state_mem.go:36] "Initialized new in-memory state store" Mar 13 11:40:12.256539 kubelet[3404]: I0313 11:40:12.256526 3404 kubelet.go:475] "Attempting to sync node with API server" Mar 13 11:40:12.256601 kubelet[3404]: I0313 11:40:12.256594 3404 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 11:40:12.256668 kubelet[3404]: I0313 11:40:12.256661 3404 kubelet.go:387] "Adding apiserver pod source" Mar 13 11:40:12.256725 kubelet[3404]: I0313 11:40:12.256716 3404 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 11:40:12.258986 kubelet[3404]: I0313 11:40:12.258965 3404 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 11:40:12.260727 kubelet[3404]: I0313 11:40:12.260241 3404 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 11:40:12.260727 kubelet[3404]: I0313 11:40:12.260271 3404 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 11:40:12.262616 kubelet[3404]: I0313 11:40:12.262596 3404 server.go:1262] "Started kubelet" Mar 13 11:40:12.262824 kubelet[3404]: I0313 11:40:12.262801 3404 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 11:40:12.263909 kubelet[3404]: I0313 11:40:12.263575 3404 server.go:310] "Adding debug handlers to kubelet server" Mar 13 11:40:12.266936 kubelet[3404]: I0313 11:40:12.262892 3404 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 11:40:12.266936 kubelet[3404]: I0313 11:40:12.265241 3404 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 13 11:40:12.267083 kubelet[3404]: I0313 11:40:12.267067 3404 server.go:249] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 11:40:12.267741 kubelet[3404]: I0313 11:40:12.267706 3404 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 11:40:12.275695 kubelet[3404]: I0313 11:40:12.275672 3404 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 11:40:12.276687 kubelet[3404]: I0313 11:40:12.276671 3404 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 13 11:40:12.277890 kubelet[3404]: E0313 11:40:12.277594 3404 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.101-83511db97f\" not found" Mar 13 11:40:12.279280 kubelet[3404]: I0313 11:40:12.279253 3404 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 11:40:12.279616 kubelet[3404]: I0313 11:40:12.279591 3404 reconciler.go:29] "Reconciler: start to sync state" Mar 13 11:40:12.285115 kubelet[3404]: I0313 11:40:12.285087 3404 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 13 11:40:12.292161 kubelet[3404]: I0313 11:40:12.292131 3404 factory.go:223] Registration of the containerd container factory successfully Mar 13 11:40:12.292281 kubelet[3404]: I0313 11:40:12.292273 3404 factory.go:223] Registration of the systemd container factory successfully Mar 13 11:40:12.292450 kubelet[3404]: I0313 11:40:12.292432 3404 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 11:40:12.297225 kubelet[3404]: I0313 11:40:12.297189 3404 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 13 11:40:12.297225 kubelet[3404]: I0313 11:40:12.297212 3404 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 13 11:40:12.297225 kubelet[3404]: I0313 11:40:12.297232 3404 kubelet.go:2428] "Starting kubelet main sync loop" Mar 13 11:40:12.297337 kubelet[3404]: E0313 11:40:12.297265 3404 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 11:40:12.308182 kubelet[3404]: E0313 11:40:12.306760 3404 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 11:40:12.332925 kubelet[3404]: I0313 11:40:12.332073 3404 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 11:40:12.332925 kubelet[3404]: I0313 11:40:12.332924 3404 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 11:40:12.333062 kubelet[3404]: I0313 11:40:12.332947 3404 state_mem.go:36] "Initialized new in-memory state store" Mar 13 11:40:12.333062 kubelet[3404]: I0313 11:40:12.333058 3404 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 11:40:12.333380 kubelet[3404]: I0313 11:40:12.333065 3404 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 11:40:12.333380 kubelet[3404]: I0313 11:40:12.333316 3404 policy_none.go:49] "None policy: Start" Mar 13 11:40:12.333380 kubelet[3404]: I0313 11:40:12.333326 3404 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 13 11:40:12.333380 kubelet[3404]: I0313 11:40:12.333337 3404 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 11:40:12.333484 kubelet[3404]: I0313 11:40:12.333439 3404 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 13 11:40:12.333484 kubelet[3404]: I0313 11:40:12.333444 3404 policy_none.go:47] "Start" Mar 13 11:40:12.339812 kubelet[3404]: 
E0313 11:40:12.339792 3404 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 11:40:12.341149 kubelet[3404]: I0313 11:40:12.341121 3404 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 11:40:12.341215 kubelet[3404]: I0313 11:40:12.341151 3404 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 11:40:12.341401 kubelet[3404]: I0313 11:40:12.341380 3404 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 11:40:12.344574 kubelet[3404]: E0313 11:40:12.344553 3404 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 11:40:12.399276 kubelet[3404]: I0313 11:40:12.398586 3404 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:12.399276 kubelet[3404]: I0313 11:40:12.398686 3404 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.101-83511db97f" Mar 13 11:40:12.399276 kubelet[3404]: I0313 11:40:12.398804 3404 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.101-83511db97f" Mar 13 11:40:12.412319 kubelet[3404]: I0313 11:40:12.412164 3404 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 11:40:12.412319 kubelet[3404]: E0313 11:40:12.412222 3404 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.101-83511db97f\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:12.413650 kubelet[3404]: I0313 11:40:12.413331 3404 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can 
result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 11:40:12.413830 kubelet[3404]: I0313 11:40:12.413722 3404 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 11:40:12.446121 kubelet[3404]: I0313 11:40:12.446093 3404 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.101-83511db97f" Mar 13 11:40:12.461699 kubelet[3404]: I0313 11:40:12.461670 3404 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.101-83511db97f" Mar 13 11:40:12.461805 kubelet[3404]: I0313 11:40:12.461750 3404 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.101-83511db97f" Mar 13 11:40:12.580733 kubelet[3404]: I0313 11:40:12.580538 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39c194385d367edc58fa7e5134a64b26-ca-certs\") pod \"kube-controller-manager-ci-4459.2.101-83511db97f\" (UID: \"39c194385d367edc58fa7e5134a64b26\") " pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:12.580733 kubelet[3404]: I0313 11:40:12.580570 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/39c194385d367edc58fa7e5134a64b26-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.101-83511db97f\" (UID: \"39c194385d367edc58fa7e5134a64b26\") " pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:12.580733 kubelet[3404]: I0313 11:40:12.580581 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39c194385d367edc58fa7e5134a64b26-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.101-83511db97f\" (UID: 
\"39c194385d367edc58fa7e5134a64b26\") " pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:12.580733 kubelet[3404]: I0313 11:40:12.580592 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39c194385d367edc58fa7e5134a64b26-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.101-83511db97f\" (UID: \"39c194385d367edc58fa7e5134a64b26\") " pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:12.580733 kubelet[3404]: I0313 11:40:12.580605 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10759efb822f31c4e21cf56eda0215e3-kubeconfig\") pod \"kube-scheduler-ci-4459.2.101-83511db97f\" (UID: \"10759efb822f31c4e21cf56eda0215e3\") " pod="kube-system/kube-scheduler-ci-4459.2.101-83511db97f" Mar 13 11:40:12.580961 kubelet[3404]: I0313 11:40:12.580629 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/475041211d03580cadac77a9c8b2b7eb-ca-certs\") pod \"kube-apiserver-ci-4459.2.101-83511db97f\" (UID: \"475041211d03580cadac77a9c8b2b7eb\") " pod="kube-system/kube-apiserver-ci-4459.2.101-83511db97f" Mar 13 11:40:12.580961 kubelet[3404]: I0313 11:40:12.580638 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/475041211d03580cadac77a9c8b2b7eb-k8s-certs\") pod \"kube-apiserver-ci-4459.2.101-83511db97f\" (UID: \"475041211d03580cadac77a9c8b2b7eb\") " pod="kube-system/kube-apiserver-ci-4459.2.101-83511db97f" Mar 13 11:40:12.580961 kubelet[3404]: I0313 11:40:12.580659 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/475041211d03580cadac77a9c8b2b7eb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.101-83511db97f\" (UID: \"475041211d03580cadac77a9c8b2b7eb\") " pod="kube-system/kube-apiserver-ci-4459.2.101-83511db97f" Mar 13 11:40:12.580961 kubelet[3404]: I0313 11:40:12.580674 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39c194385d367edc58fa7e5134a64b26-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.101-83511db97f\" (UID: \"39c194385d367edc58fa7e5134a64b26\") " pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:12.639616 sudo[3440]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 13 11:40:12.641143 sudo[3440]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 13 11:40:12.880474 sudo[3440]: pam_unix(sudo:session): session closed for user root Mar 13 11:40:13.258083 kubelet[3404]: I0313 11:40:13.257882 3404 apiserver.go:52] "Watching apiserver" Mar 13 11:40:13.280486 kubelet[3404]: I0313 11:40:13.280432 3404 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 11:40:13.322913 kubelet[3404]: I0313 11:40:13.322436 3404 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:13.322913 kubelet[3404]: I0313 11:40:13.322600 3404 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.101-83511db97f" Mar 13 11:40:13.322913 kubelet[3404]: I0313 11:40:13.322716 3404 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.101-83511db97f" Mar 13 11:40:13.336415 kubelet[3404]: I0313 11:40:13.336379 3404 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result 
in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 11:40:13.336549 kubelet[3404]: E0313 11:40:13.336435 3404 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.101-83511db97f\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" Mar 13 11:40:13.341926 kubelet[3404]: I0313 11:40:13.341723 3404 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 11:40:13.341926 kubelet[3404]: E0313 11:40:13.341767 3404 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.101-83511db97f\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.101-83511db97f" Mar 13 11:40:13.343253 kubelet[3404]: I0313 11:40:13.343187 3404 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 11:40:13.343492 kubelet[3404]: E0313 11:40:13.343442 3404 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.101-83511db97f\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.101-83511db97f" Mar 13 11:40:13.354647 kubelet[3404]: I0313 11:40:13.354304 3404 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.101-83511db97f" podStartSLOduration=1.354291871 podStartE2EDuration="1.354291871s" podCreationTimestamp="2026-03-13 11:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:40:13.342600207 +0000 UTC m=+1.403610304" watchObservedRunningTime="2026-03-13 11:40:13.354291871 +0000 UTC m=+1.415301968" Mar 13 11:40:13.364604 kubelet[3404]: I0313 11:40:13.364547 3404 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4459.2.101-83511db97f" podStartSLOduration=2.364532681 podStartE2EDuration="2.364532681s" podCreationTimestamp="2026-03-13 11:40:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:40:13.354704182 +0000 UTC m=+1.415714279" watchObservedRunningTime="2026-03-13 11:40:13.364532681 +0000 UTC m=+1.425542818" Mar 13 11:40:13.375890 kubelet[3404]: I0313 11:40:13.375724 3404 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.101-83511db97f" podStartSLOduration=1.375616883 podStartE2EDuration="1.375616883s" podCreationTimestamp="2026-03-13 11:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:40:13.364751225 +0000 UTC m=+1.425761370" watchObservedRunningTime="2026-03-13 11:40:13.375616883 +0000 UTC m=+1.436626988" Mar 13 11:40:13.961904 sudo[2380]: pam_unix(sudo:session): session closed for user root Mar 13 11:40:14.040342 sshd[2379]: Connection closed by 10.200.16.10 port 41230 Mar 13 11:40:14.041748 sshd-session[2376]: pam_unix(sshd:session): session closed for user core Mar 13 11:40:14.045795 systemd-logind[1879]: Session 9 logged out. Waiting for processes to exit. Mar 13 11:40:14.046164 systemd[1]: sshd@6-10.200.20.31:22-10.200.16.10:41230.service: Deactivated successfully. Mar 13 11:40:14.048615 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 11:40:14.049104 systemd[1]: session-9.scope: Consumed 4.175s CPU time, 262.7M memory peak. Mar 13 11:40:14.050501 systemd-logind[1879]: Removed session 9. 
Mar 13 11:40:17.412215 kubelet[3404]: I0313 11:40:17.412177 3404 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 11:40:17.412991 containerd[1906]: time="2026-03-13T11:40:17.412450119Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 13 11:40:17.413575 kubelet[3404]: I0313 11:40:17.412973 3404 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 11:40:18.141330 systemd[1]: Created slice kubepods-besteffort-pod05b7f5ad_9e9e_41ae_ba4b_8bf9c5326305.slice - libcontainer container kubepods-besteffort-pod05b7f5ad_9e9e_41ae_ba4b_8bf9c5326305.slice. Mar 13 11:40:18.151078 systemd[1]: Created slice kubepods-burstable-pod2af6b45d_7c43_4baa_8b96_02659e1a7ff6.slice - libcontainer container kubepods-burstable-pod2af6b45d_7c43_4baa_8b96_02659e1a7ff6.slice. Mar 13 11:40:18.214210 kubelet[3404]: I0313 11:40:18.214166 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-cilium-cgroup\") pod \"cilium-hfnc2\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") " pod="kube-system/cilium-hfnc2" Mar 13 11:40:18.214210 kubelet[3404]: I0313 11:40:18.214198 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-etc-cni-netd\") pod \"cilium-hfnc2\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") " pod="kube-system/cilium-hfnc2" Mar 13 11:40:18.214210 kubelet[3404]: I0313 11:40:18.214211 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-cilium-config-path\") pod \"cilium-hfnc2\" (UID: 
\"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") " pod="kube-system/cilium-hfnc2" Mar 13 11:40:18.214210 kubelet[3404]: I0313 11:40:18.214220 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-hubble-tls\") pod \"cilium-hfnc2\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") " pod="kube-system/cilium-hfnc2" Mar 13 11:40:18.214428 kubelet[3404]: I0313 11:40:18.214245 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfzrq\" (UniqueName: \"kubernetes.io/projected/05b7f5ad-9e9e-41ae-ba4b-8bf9c5326305-kube-api-access-bfzrq\") pod \"kube-proxy-dmn64\" (UID: \"05b7f5ad-9e9e-41ae-ba4b-8bf9c5326305\") " pod="kube-system/kube-proxy-dmn64" Mar 13 11:40:18.214428 kubelet[3404]: I0313 11:40:18.214256 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-cilium-run\") pod \"cilium-hfnc2\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") " pod="kube-system/cilium-hfnc2" Mar 13 11:40:18.214428 kubelet[3404]: I0313 11:40:18.214264 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-bpf-maps\") pod \"cilium-hfnc2\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") " pod="kube-system/cilium-hfnc2" Mar 13 11:40:18.214428 kubelet[3404]: I0313 11:40:18.214271 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-cni-path\") pod \"cilium-hfnc2\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") " pod="kube-system/cilium-hfnc2" Mar 13 11:40:18.214428 kubelet[3404]: I0313 11:40:18.214283 3404 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-lib-modules\") pod \"cilium-hfnc2\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") " pod="kube-system/cilium-hfnc2" Mar 13 11:40:18.214428 kubelet[3404]: I0313 11:40:18.214291 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-clustermesh-secrets\") pod \"cilium-hfnc2\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") " pod="kube-system/cilium-hfnc2" Mar 13 11:40:18.214526 kubelet[3404]: I0313 11:40:18.214309 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/05b7f5ad-9e9e-41ae-ba4b-8bf9c5326305-kube-proxy\") pod \"kube-proxy-dmn64\" (UID: \"05b7f5ad-9e9e-41ae-ba4b-8bf9c5326305\") " pod="kube-system/kube-proxy-dmn64" Mar 13 11:40:18.214526 kubelet[3404]: I0313 11:40:18.214319 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05b7f5ad-9e9e-41ae-ba4b-8bf9c5326305-lib-modules\") pod \"kube-proxy-dmn64\" (UID: \"05b7f5ad-9e9e-41ae-ba4b-8bf9c5326305\") " pod="kube-system/kube-proxy-dmn64" Mar 13 11:40:18.214526 kubelet[3404]: I0313 11:40:18.214327 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-hostproc\") pod \"cilium-hfnc2\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") " pod="kube-system/cilium-hfnc2" Mar 13 11:40:18.214526 kubelet[3404]: I0313 11:40:18.214335 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-host-proc-sys-net\") pod \"cilium-hfnc2\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") " pod="kube-system/cilium-hfnc2" Mar 13 11:40:18.214526 kubelet[3404]: I0313 11:40:18.214344 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-host-proc-sys-kernel\") pod \"cilium-hfnc2\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") " pod="kube-system/cilium-hfnc2" Mar 13 11:40:18.214526 kubelet[3404]: I0313 11:40:18.214354 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05b7f5ad-9e9e-41ae-ba4b-8bf9c5326305-xtables-lock\") pod \"kube-proxy-dmn64\" (UID: \"05b7f5ad-9e9e-41ae-ba4b-8bf9c5326305\") " pod="kube-system/kube-proxy-dmn64" Mar 13 11:40:18.214612 kubelet[3404]: I0313 11:40:18.214363 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-xtables-lock\") pod \"cilium-hfnc2\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") " pod="kube-system/cilium-hfnc2" Mar 13 11:40:18.214612 kubelet[3404]: I0313 11:40:18.214372 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxf8q\" (UniqueName: \"kubernetes.io/projected/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-kube-api-access-bxf8q\") pod \"cilium-hfnc2\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") " pod="kube-system/cilium-hfnc2" Mar 13 11:40:18.456976 containerd[1906]: time="2026-03-13T11:40:18.456826279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dmn64,Uid:05b7f5ad-9e9e-41ae-ba4b-8bf9c5326305,Namespace:kube-system,Attempt:0,}" Mar 13 11:40:18.462381 containerd[1906]: 
time="2026-03-13T11:40:18.462216704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hfnc2,Uid:2af6b45d-7c43-4baa-8b96-02659e1a7ff6,Namespace:kube-system,Attempt:0,}" Mar 13 11:40:18.539149 containerd[1906]: time="2026-03-13T11:40:18.539102478Z" level=info msg="connecting to shim 4af64822351b16151102bea38dae36d6000e7016fe02266eb7f0f8c178353025" address="unix:///run/containerd/s/4c1a00c56b9039db2dc537be57066645dda4a4cd8eb5b7982f1bd6ad2e509bf6" namespace=k8s.io protocol=ttrpc version=3 Mar 13 11:40:18.539350 containerd[1906]: time="2026-03-13T11:40:18.539311334Z" level=info msg="connecting to shim ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96" address="unix:///run/containerd/s/53f2d37307cb50f3946d61dd3e5f59647a86b86b6f7c2684e26dc7c0a08fa655" namespace=k8s.io protocol=ttrpc version=3 Mar 13 11:40:18.566054 systemd[1]: Started cri-containerd-4af64822351b16151102bea38dae36d6000e7016fe02266eb7f0f8c178353025.scope - libcontainer container 4af64822351b16151102bea38dae36d6000e7016fe02266eb7f0f8c178353025. Mar 13 11:40:18.573160 systemd[1]: Started cri-containerd-ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96.scope - libcontainer container ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96. Mar 13 11:40:18.604614 systemd[1]: Created slice kubepods-besteffort-pod42534d47_bb9a_459f_a63c_86a4f0de8782.slice - libcontainer container kubepods-besteffort-pod42534d47_bb9a_459f_a63c_86a4f0de8782.slice. 
Mar 13 11:40:18.617678 kubelet[3404]: I0313 11:40:18.617637 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42534d47-bb9a-459f-a63c-86a4f0de8782-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-6qt4m\" (UID: \"42534d47-bb9a-459f-a63c-86a4f0de8782\") " pod="kube-system/cilium-operator-6f9c7c5859-6qt4m" Mar 13 11:40:18.618155 kubelet[3404]: I0313 11:40:18.617881 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c6jq\" (UniqueName: \"kubernetes.io/projected/42534d47-bb9a-459f-a63c-86a4f0de8782-kube-api-access-5c6jq\") pod \"cilium-operator-6f9c7c5859-6qt4m\" (UID: \"42534d47-bb9a-459f-a63c-86a4f0de8782\") " pod="kube-system/cilium-operator-6f9c7c5859-6qt4m" Mar 13 11:40:18.661262 containerd[1906]: time="2026-03-13T11:40:18.661071814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hfnc2,Uid:2af6b45d-7c43-4baa-8b96-02659e1a7ff6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96\"" Mar 13 11:40:18.664412 containerd[1906]: time="2026-03-13T11:40:18.664113788Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 13 11:40:18.676756 containerd[1906]: time="2026-03-13T11:40:18.676726831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dmn64,Uid:05b7f5ad-9e9e-41ae-ba4b-8bf9c5326305,Namespace:kube-system,Attempt:0,} returns sandbox id \"4af64822351b16151102bea38dae36d6000e7016fe02266eb7f0f8c178353025\"" Mar 13 11:40:18.690373 containerd[1906]: time="2026-03-13T11:40:18.690337320Z" level=info msg="CreateContainer within sandbox \"4af64822351b16151102bea38dae36d6000e7016fe02266eb7f0f8c178353025\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 11:40:18.711923 containerd[1906]: 
time="2026-03-13T11:40:18.711802011Z" level=info msg="Container 135971e663aaaecf9fb98c59fc58e45a83b443eb12c616561439e1b5f79a2d83: CDI devices from CRI Config.CDIDevices: []" Mar 13 11:40:18.731903 containerd[1906]: time="2026-03-13T11:40:18.731813061Z" level=info msg="CreateContainer within sandbox \"4af64822351b16151102bea38dae36d6000e7016fe02266eb7f0f8c178353025\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"135971e663aaaecf9fb98c59fc58e45a83b443eb12c616561439e1b5f79a2d83\"" Mar 13 11:40:18.734236 containerd[1906]: time="2026-03-13T11:40:18.733072086Z" level=info msg="StartContainer for \"135971e663aaaecf9fb98c59fc58e45a83b443eb12c616561439e1b5f79a2d83\"" Mar 13 11:40:18.734621 containerd[1906]: time="2026-03-13T11:40:18.734594449Z" level=info msg="connecting to shim 135971e663aaaecf9fb98c59fc58e45a83b443eb12c616561439e1b5f79a2d83" address="unix:///run/containerd/s/4c1a00c56b9039db2dc537be57066645dda4a4cd8eb5b7982f1bd6ad2e509bf6" protocol=ttrpc version=3 Mar 13 11:40:18.750031 systemd[1]: Started cri-containerd-135971e663aaaecf9fb98c59fc58e45a83b443eb12c616561439e1b5f79a2d83.scope - libcontainer container 135971e663aaaecf9fb98c59fc58e45a83b443eb12c616561439e1b5f79a2d83. 
Mar 13 11:40:18.818613 containerd[1906]: time="2026-03-13T11:40:18.818543842Z" level=info msg="StartContainer for \"135971e663aaaecf9fb98c59fc58e45a83b443eb12c616561439e1b5f79a2d83\" returns successfully" Mar 13 11:40:18.915079 containerd[1906]: time="2026-03-13T11:40:18.915032514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-6qt4m,Uid:42534d47-bb9a-459f-a63c-86a4f0de8782,Namespace:kube-system,Attempt:0,}" Mar 13 11:40:18.949026 containerd[1906]: time="2026-03-13T11:40:18.948982923Z" level=info msg="connecting to shim ed1595e009ea2450c307f246ea8a00195c3e725ecd877d5cdc0ee324e4043fdf" address="unix:///run/containerd/s/22e031d04d6b595431f9089bbba28183bd8d24b26438466ec42398277e293b68" namespace=k8s.io protocol=ttrpc version=3 Mar 13 11:40:18.969041 systemd[1]: Started cri-containerd-ed1595e009ea2450c307f246ea8a00195c3e725ecd877d5cdc0ee324e4043fdf.scope - libcontainer container ed1595e009ea2450c307f246ea8a00195c3e725ecd877d5cdc0ee324e4043fdf. Mar 13 11:40:19.008397 containerd[1906]: time="2026-03-13T11:40:19.008355000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-6qt4m,Uid:42534d47-bb9a-459f-a63c-86a4f0de8782,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed1595e009ea2450c307f246ea8a00195c3e725ecd877d5cdc0ee324e4043fdf\"" Mar 13 11:40:20.872827 kubelet[3404]: I0313 11:40:20.872638 3404 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dmn64" podStartSLOduration=2.8726219349999997 podStartE2EDuration="2.872621935s" podCreationTimestamp="2026-03-13 11:40:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:40:19.350865504 +0000 UTC m=+7.411875601" watchObservedRunningTime="2026-03-13 11:40:20.872621935 +0000 UTC m=+8.933632048" Mar 13 11:40:24.244077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1706424163.mount: Deactivated 
successfully. Mar 13 11:40:25.591517 containerd[1906]: time="2026-03-13T11:40:25.590996450Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 11:40:25.593434 containerd[1906]: time="2026-03-13T11:40:25.593409920Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 13 11:40:25.610159 containerd[1906]: time="2026-03-13T11:40:25.610129682Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 11:40:25.611663 containerd[1906]: time="2026-03-13T11:40:25.611625932Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.946705225s" Mar 13 11:40:25.611663 containerd[1906]: time="2026-03-13T11:40:25.611663598Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 13 11:40:25.613341 containerd[1906]: time="2026-03-13T11:40:25.613148567Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 13 11:40:25.626256 containerd[1906]: time="2026-03-13T11:40:25.626222036Z" level=info msg="CreateContainer within sandbox \"ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96\" for 
container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 13 11:40:25.673861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380148681.mount: Deactivated successfully. Mar 13 11:40:25.679910 containerd[1906]: time="2026-03-13T11:40:25.679623793Z" level=info msg="Container 786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d: CDI devices from CRI Config.CDIDevices: []" Mar 13 11:40:25.984677 containerd[1906]: time="2026-03-13T11:40:25.984552731Z" level=info msg="CreateContainer within sandbox \"ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d\"" Mar 13 11:40:25.985397 containerd[1906]: time="2026-03-13T11:40:25.985309401Z" level=info msg="StartContainer for \"786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d\"" Mar 13 11:40:25.987710 containerd[1906]: time="2026-03-13T11:40:25.987680661Z" level=info msg="connecting to shim 786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d" address="unix:///run/containerd/s/53f2d37307cb50f3946d61dd3e5f59647a86b86b6f7c2684e26dc7c0a08fa655" protocol=ttrpc version=3 Mar 13 11:40:26.011999 systemd[1]: Started cri-containerd-786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d.scope - libcontainer container 786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d. Mar 13 11:40:26.041560 containerd[1906]: time="2026-03-13T11:40:26.041105211Z" level=info msg="StartContainer for \"786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d\" returns successfully" Mar 13 11:40:26.044140 systemd[1]: cri-containerd-786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d.scope: Deactivated successfully. 
Mar 13 11:40:26.047643 containerd[1906]: time="2026-03-13T11:40:26.047568430Z" level=info msg="received container exit event container_id:\"786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d\" id:\"786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d\" pid:3820 exited_at:{seconds:1773402026 nanos:46267052}" Mar 13 11:40:26.671975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d-rootfs.mount: Deactivated successfully. Mar 13 11:40:28.367185 containerd[1906]: time="2026-03-13T11:40:28.367138593Z" level=info msg="CreateContainer within sandbox \"ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 13 11:40:28.390491 containerd[1906]: time="2026-03-13T11:40:28.390016305Z" level=info msg="Container 20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722: CDI devices from CRI Config.CDIDevices: []" Mar 13 11:40:28.394315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2351259188.mount: Deactivated successfully. 
Mar 13 11:40:28.418326 containerd[1906]: time="2026-03-13T11:40:28.418230381Z" level=info msg="CreateContainer within sandbox \"ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722\"" Mar 13 11:40:28.419745 containerd[1906]: time="2026-03-13T11:40:28.419687172Z" level=info msg="StartContainer for \"20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722\"" Mar 13 11:40:28.420503 containerd[1906]: time="2026-03-13T11:40:28.420477147Z" level=info msg="connecting to shim 20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722" address="unix:///run/containerd/s/53f2d37307cb50f3946d61dd3e5f59647a86b86b6f7c2684e26dc7c0a08fa655" protocol=ttrpc version=3 Mar 13 11:40:28.442826 systemd[1]: Started cri-containerd-20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722.scope - libcontainer container 20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722. Mar 13 11:40:28.472919 containerd[1906]: time="2026-03-13T11:40:28.472840655Z" level=info msg="StartContainer for \"20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722\" returns successfully" Mar 13 11:40:28.484669 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 13 11:40:28.485249 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 13 11:40:28.485777 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 13 11:40:28.487540 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 11:40:28.487986 systemd[1]: cri-containerd-20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722.scope: Deactivated successfully. 
Mar 13 11:40:28.491207 containerd[1906]: time="2026-03-13T11:40:28.491133625Z" level=info msg="received container exit event container_id:\"20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722\" id:\"20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722\" pid:3863 exited_at:{seconds:1773402028 nanos:490973899}" Mar 13 11:40:28.507913 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 13 11:40:28.841922 containerd[1906]: time="2026-03-13T11:40:28.841425175Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 11:40:28.844137 containerd[1906]: time="2026-03-13T11:40:28.844112877Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 13 11:40:28.847197 containerd[1906]: time="2026-03-13T11:40:28.847171634Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 11:40:28.849524 containerd[1906]: time="2026-03-13T11:40:28.849497986Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.236323986s" Mar 13 11:40:28.849630 containerd[1906]: time="2026-03-13T11:40:28.849615927Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 13 11:40:28.856753 containerd[1906]: time="2026-03-13T11:40:28.856723870Z" level=info msg="CreateContainer within sandbox \"ed1595e009ea2450c307f246ea8a00195c3e725ecd877d5cdc0ee324e4043fdf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 13 11:40:28.876537 containerd[1906]: time="2026-03-13T11:40:28.876082640Z" level=info msg="Container 5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692: CDI devices from CRI Config.CDIDevices: []" Mar 13 11:40:28.893049 containerd[1906]: time="2026-03-13T11:40:28.893011478Z" level=info msg="CreateContainer within sandbox \"ed1595e009ea2450c307f246ea8a00195c3e725ecd877d5cdc0ee324e4043fdf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692\"" Mar 13 11:40:28.894024 containerd[1906]: time="2026-03-13T11:40:28.893998187Z" level=info msg="StartContainer for \"5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692\"" Mar 13 11:40:28.894657 containerd[1906]: time="2026-03-13T11:40:28.894626179Z" level=info msg="connecting to shim 5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692" address="unix:///run/containerd/s/22e031d04d6b595431f9089bbba28183bd8d24b26438466ec42398277e293b68" protocol=ttrpc version=3 Mar 13 11:40:28.914025 systemd[1]: Started cri-containerd-5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692.scope - libcontainer container 5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692. 
Mar 13 11:40:28.944259 containerd[1906]: time="2026-03-13T11:40:28.944224231Z" level=info msg="StartContainer for \"5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692\" returns successfully" Mar 13 11:40:29.376566 containerd[1906]: time="2026-03-13T11:40:29.376525492Z" level=info msg="CreateContainer within sandbox \"ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 13 11:40:29.388747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722-rootfs.mount: Deactivated successfully. Mar 13 11:40:29.406214 containerd[1906]: time="2026-03-13T11:40:29.405388376Z" level=info msg="Container 3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2: CDI devices from CRI Config.CDIDevices: []" Mar 13 11:40:29.414295 kubelet[3404]: I0313 11:40:29.414245 3404 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-6qt4m" podStartSLOduration=1.573821931 podStartE2EDuration="11.414228257s" podCreationTimestamp="2026-03-13 11:40:18 +0000 UTC" firstStartedPulling="2026-03-13 11:40:19.009828681 +0000 UTC m=+7.070838778" lastFinishedPulling="2026-03-13 11:40:28.850235007 +0000 UTC m=+16.911245104" observedRunningTime="2026-03-13 11:40:29.390749106 +0000 UTC m=+17.451759203" watchObservedRunningTime="2026-03-13 11:40:29.414228257 +0000 UTC m=+17.475238354" Mar 13 11:40:29.423816 containerd[1906]: time="2026-03-13T11:40:29.423779502Z" level=info msg="CreateContainer within sandbox \"ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2\"" Mar 13 11:40:29.424737 containerd[1906]: time="2026-03-13T11:40:29.424714009Z" level=info msg="StartContainer for 
\"3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2\"" Mar 13 11:40:29.426195 containerd[1906]: time="2026-03-13T11:40:29.426154824Z" level=info msg="connecting to shim 3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2" address="unix:///run/containerd/s/53f2d37307cb50f3946d61dd3e5f59647a86b86b6f7c2684e26dc7c0a08fa655" protocol=ttrpc version=3 Mar 13 11:40:29.452938 systemd[1]: Started cri-containerd-3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2.scope - libcontainer container 3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2. Mar 13 11:40:29.549358 containerd[1906]: time="2026-03-13T11:40:29.549310392Z" level=info msg="StartContainer for \"3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2\" returns successfully" Mar 13 11:40:29.551750 systemd[1]: cri-containerd-3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2.scope: Deactivated successfully. Mar 13 11:40:29.554133 containerd[1906]: time="2026-03-13T11:40:29.554099535Z" level=info msg="received container exit event container_id:\"3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2\" id:\"3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2\" pid:3964 exited_at:{seconds:1773402029 nanos:553427189}" Mar 13 11:40:29.579121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2-rootfs.mount: Deactivated successfully. 
Mar 13 11:40:30.380422 containerd[1906]: time="2026-03-13T11:40:30.380050383Z" level=info msg="CreateContainer within sandbox \"ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 13 11:40:30.408921 containerd[1906]: time="2026-03-13T11:40:30.407529487Z" level=info msg="Container fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6: CDI devices from CRI Config.CDIDevices: []" Mar 13 11:40:30.421863 containerd[1906]: time="2026-03-13T11:40:30.421823736Z" level=info msg="CreateContainer within sandbox \"ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6\"" Mar 13 11:40:30.423035 containerd[1906]: time="2026-03-13T11:40:30.423005741Z" level=info msg="StartContainer for \"fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6\"" Mar 13 11:40:30.423681 containerd[1906]: time="2026-03-13T11:40:30.423654046Z" level=info msg="connecting to shim fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6" address="unix:///run/containerd/s/53f2d37307cb50f3946d61dd3e5f59647a86b86b6f7c2684e26dc7c0a08fa655" protocol=ttrpc version=3 Mar 13 11:40:30.442997 systemd[1]: Started cri-containerd-fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6.scope - libcontainer container fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6. Mar 13 11:40:30.463689 systemd[1]: cri-containerd-fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6.scope: Deactivated successfully. 
Mar 13 11:40:30.469307 containerd[1906]: time="2026-03-13T11:40:30.469265089Z" level=info msg="received container exit event container_id:\"fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6\" id:\"fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6\" pid:4006 exited_at:{seconds:1773402030 nanos:464581263}" Mar 13 11:40:30.474995 containerd[1906]: time="2026-03-13T11:40:30.474954370Z" level=info msg="StartContainer for \"fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6\" returns successfully" Mar 13 11:40:30.485674 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6-rootfs.mount: Deactivated successfully. Mar 13 11:40:31.386903 containerd[1906]: time="2026-03-13T11:40:31.385058947Z" level=info msg="CreateContainer within sandbox \"ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 13 11:40:31.406543 containerd[1906]: time="2026-03-13T11:40:31.405093855Z" level=info msg="Container e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc: CDI devices from CRI Config.CDIDevices: []" Mar 13 11:40:31.420732 containerd[1906]: time="2026-03-13T11:40:31.420521164Z" level=info msg="CreateContainer within sandbox \"ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc\"" Mar 13 11:40:31.421663 containerd[1906]: time="2026-03-13T11:40:31.421631382Z" level=info msg="StartContainer for \"e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc\"" Mar 13 11:40:31.423422 containerd[1906]: time="2026-03-13T11:40:31.423397481Z" level=info msg="connecting to shim e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc" 
address="unix:///run/containerd/s/53f2d37307cb50f3946d61dd3e5f59647a86b86b6f7c2684e26dc7c0a08fa655" protocol=ttrpc version=3 Mar 13 11:40:31.442009 systemd[1]: Started cri-containerd-e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc.scope - libcontainer container e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc. Mar 13 11:40:31.479424 containerd[1906]: time="2026-03-13T11:40:31.479332326Z" level=info msg="StartContainer for \"e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc\" returns successfully" Mar 13 11:40:31.633899 kubelet[3404]: I0313 11:40:31.632178 3404 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 13 11:40:31.683638 systemd[1]: Created slice kubepods-burstable-podcb9ded6f_6cf3_4f09_9467_a205930bb808.slice - libcontainer container kubepods-burstable-podcb9ded6f_6cf3_4f09_9467_a205930bb808.slice. Mar 13 11:40:31.691346 systemd[1]: Created slice kubepods-burstable-podf715b3c0_3417_4b7a_a5ff_e67e36a712e2.slice - libcontainer container kubepods-burstable-podf715b3c0_3417_4b7a_a5ff_e67e36a712e2.slice. 
Mar 13 11:40:31.700020 kubelet[3404]: I0313 11:40:31.699962 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb9ded6f-6cf3-4f09-9467-a205930bb808-config-volume\") pod \"coredns-66bc5c9577-pxw4m\" (UID: \"cb9ded6f-6cf3-4f09-9467-a205930bb808\") " pod="kube-system/coredns-66bc5c9577-pxw4m" Mar 13 11:40:31.700020 kubelet[3404]: I0313 11:40:31.700021 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f715b3c0-3417-4b7a-a5ff-e67e36a712e2-config-volume\") pod \"coredns-66bc5c9577-fb5gc\" (UID: \"f715b3c0-3417-4b7a-a5ff-e67e36a712e2\") " pod="kube-system/coredns-66bc5c9577-fb5gc" Mar 13 11:40:31.700143 kubelet[3404]: I0313 11:40:31.700034 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85bmp\" (UniqueName: \"kubernetes.io/projected/f715b3c0-3417-4b7a-a5ff-e67e36a712e2-kube-api-access-85bmp\") pod \"coredns-66bc5c9577-fb5gc\" (UID: \"f715b3c0-3417-4b7a-a5ff-e67e36a712e2\") " pod="kube-system/coredns-66bc5c9577-fb5gc" Mar 13 11:40:31.700364 kubelet[3404]: I0313 11:40:31.700154 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdcfs\" (UniqueName: \"kubernetes.io/projected/cb9ded6f-6cf3-4f09-9467-a205930bb808-kube-api-access-kdcfs\") pod \"coredns-66bc5c9577-pxw4m\" (UID: \"cb9ded6f-6cf3-4f09-9467-a205930bb808\") " pod="kube-system/coredns-66bc5c9577-pxw4m" Mar 13 11:40:31.995884 containerd[1906]: time="2026-03-13T11:40:31.995517266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pxw4m,Uid:cb9ded6f-6cf3-4f09-9467-a205930bb808,Namespace:kube-system,Attempt:0,}" Mar 13 11:40:32.000847 containerd[1906]: time="2026-03-13T11:40:32.000810628Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-fb5gc,Uid:f715b3c0-3417-4b7a-a5ff-e67e36a712e2,Namespace:kube-system,Attempt:0,}" Mar 13 11:40:33.505836 systemd-networkd[1489]: cilium_host: Link UP Mar 13 11:40:33.506743 systemd-networkd[1489]: cilium_net: Link UP Mar 13 11:40:33.508234 systemd-networkd[1489]: cilium_net: Gained carrier Mar 13 11:40:33.508375 systemd-networkd[1489]: cilium_host: Gained carrier Mar 13 11:40:33.662035 systemd-networkd[1489]: cilium_vxlan: Link UP Mar 13 11:40:33.662170 systemd-networkd[1489]: cilium_vxlan: Gained carrier Mar 13 11:40:33.797062 systemd-networkd[1489]: cilium_host: Gained IPv6LL Mar 13 11:40:33.960898 kernel: NET: Registered PF_ALG protocol family Mar 13 11:40:34.341039 systemd-networkd[1489]: cilium_net: Gained IPv6LL Mar 13 11:40:34.542160 systemd-networkd[1489]: lxc_health: Link UP Mar 13 11:40:34.547978 systemd-networkd[1489]: lxc_health: Gained carrier Mar 13 11:40:34.981040 systemd-networkd[1489]: cilium_vxlan: Gained IPv6LL Mar 13 11:40:35.053905 kernel: eth0: renamed from tmp69456 Mar 13 11:40:35.062505 systemd-networkd[1489]: lxcdfcce9d29f5f: Link UP Mar 13 11:40:35.068187 systemd-networkd[1489]: lxcdfcce9d29f5f: Gained carrier Mar 13 11:40:35.069640 systemd-networkd[1489]: lxc6e40e2ea446b: Link UP Mar 13 11:40:35.081671 kernel: eth0: renamed from tmpe822f Mar 13 11:40:35.086924 systemd-networkd[1489]: lxc6e40e2ea446b: Gained carrier Mar 13 11:40:35.813034 systemd-networkd[1489]: lxc_health: Gained IPv6LL Mar 13 11:40:36.261069 systemd-networkd[1489]: lxc6e40e2ea446b: Gained IPv6LL Mar 13 11:40:36.476094 kubelet[3404]: I0313 11:40:36.475832 3404 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hfnc2" podStartSLOduration=11.526732877 podStartE2EDuration="18.47581517s" podCreationTimestamp="2026-03-13 11:40:18 +0000 UTC" firstStartedPulling="2026-03-13 11:40:18.663454522 +0000 UTC m=+6.724464619" lastFinishedPulling="2026-03-13 11:40:25.612536815 +0000 UTC m=+13.673546912" 
observedRunningTime="2026-03-13 11:40:32.399561522 +0000 UTC m=+20.460571659" watchObservedRunningTime="2026-03-13 11:40:36.47581517 +0000 UTC m=+24.536825267" Mar 13 11:40:36.965094 systemd-networkd[1489]: lxcdfcce9d29f5f: Gained IPv6LL Mar 13 11:40:37.718439 containerd[1906]: time="2026-03-13T11:40:37.717825725Z" level=info msg="connecting to shim 694560ebf65cfe718bfdb67f38f2fcbdd9001f93b62bb913d58d750d946b1c8d" address="unix:///run/containerd/s/74ec36b7dd1c16cabd86bd707a3a21f487f5f51c90ba1fa3f7e78739577caec1" namespace=k8s.io protocol=ttrpc version=3 Mar 13 11:40:37.730061 containerd[1906]: time="2026-03-13T11:40:37.730014357Z" level=info msg="connecting to shim e822fe35998106182b5b798a51021c79cc37386db129d39fdec77a4b170f9d6d" address="unix:///run/containerd/s/57c4dfd70d13f353b8cd06d6081b32594c2fab7dc3199ed10ad429a04500fb70" namespace=k8s.io protocol=ttrpc version=3 Mar 13 11:40:37.747034 systemd[1]: Started cri-containerd-694560ebf65cfe718bfdb67f38f2fcbdd9001f93b62bb913d58d750d946b1c8d.scope - libcontainer container 694560ebf65cfe718bfdb67f38f2fcbdd9001f93b62bb913d58d750d946b1c8d. Mar 13 11:40:37.752856 systemd[1]: Started cri-containerd-e822fe35998106182b5b798a51021c79cc37386db129d39fdec77a4b170f9d6d.scope - libcontainer container e822fe35998106182b5b798a51021c79cc37386db129d39fdec77a4b170f9d6d. 
Mar 13 11:40:37.795268 containerd[1906]: time="2026-03-13T11:40:37.795223435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-pxw4m,Uid:cb9ded6f-6cf3-4f09-9467-a205930bb808,Namespace:kube-system,Attempt:0,} returns sandbox id \"694560ebf65cfe718bfdb67f38f2fcbdd9001f93b62bb913d58d750d946b1c8d\"" Mar 13 11:40:37.799982 containerd[1906]: time="2026-03-13T11:40:37.799943938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fb5gc,Uid:f715b3c0-3417-4b7a-a5ff-e67e36a712e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e822fe35998106182b5b798a51021c79cc37386db129d39fdec77a4b170f9d6d\"" Mar 13 11:40:37.807496 containerd[1906]: time="2026-03-13T11:40:37.807354481Z" level=info msg="CreateContainer within sandbox \"694560ebf65cfe718bfdb67f38f2fcbdd9001f93b62bb913d58d750d946b1c8d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 11:40:37.817537 containerd[1906]: time="2026-03-13T11:40:37.817461977Z" level=info msg="CreateContainer within sandbox \"e822fe35998106182b5b798a51021c79cc37386db129d39fdec77a4b170f9d6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 11:40:37.837513 containerd[1906]: time="2026-03-13T11:40:37.837319130Z" level=info msg="Container 78530f42cef00d5d8121139b0caaac80ad1302cb28febee071ab773419f7f93f: CDI devices from CRI Config.CDIDevices: []" Mar 13 11:40:37.845504 containerd[1906]: time="2026-03-13T11:40:37.845455917Z" level=info msg="Container 9e5f2d93c16549b2db08c658d3755f0ad051c8860b83cb490f7350aa82020afa: CDI devices from CRI Config.CDIDevices: []" Mar 13 11:40:37.862448 containerd[1906]: time="2026-03-13T11:40:37.862372140Z" level=info msg="CreateContainer within sandbox \"694560ebf65cfe718bfdb67f38f2fcbdd9001f93b62bb913d58d750d946b1c8d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"78530f42cef00d5d8121139b0caaac80ad1302cb28febee071ab773419f7f93f\"" Mar 13 11:40:37.865140 containerd[1906]: 
time="2026-03-13T11:40:37.864973401Z" level=info msg="StartContainer for \"78530f42cef00d5d8121139b0caaac80ad1302cb28febee071ab773419f7f93f\"" Mar 13 11:40:37.866042 containerd[1906]: time="2026-03-13T11:40:37.866013105Z" level=info msg="connecting to shim 78530f42cef00d5d8121139b0caaac80ad1302cb28febee071ab773419f7f93f" address="unix:///run/containerd/s/74ec36b7dd1c16cabd86bd707a3a21f487f5f51c90ba1fa3f7e78739577caec1" protocol=ttrpc version=3 Mar 13 11:40:37.869973 containerd[1906]: time="2026-03-13T11:40:37.869934673Z" level=info msg="CreateContainer within sandbox \"e822fe35998106182b5b798a51021c79cc37386db129d39fdec77a4b170f9d6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e5f2d93c16549b2db08c658d3755f0ad051c8860b83cb490f7350aa82020afa\"" Mar 13 11:40:37.871161 containerd[1906]: time="2026-03-13T11:40:37.870996011Z" level=info msg="StartContainer for \"9e5f2d93c16549b2db08c658d3755f0ad051c8860b83cb490f7350aa82020afa\"" Mar 13 11:40:37.871983 containerd[1906]: time="2026-03-13T11:40:37.871935303Z" level=info msg="connecting to shim 9e5f2d93c16549b2db08c658d3755f0ad051c8860b83cb490f7350aa82020afa" address="unix:///run/containerd/s/57c4dfd70d13f353b8cd06d6081b32594c2fab7dc3199ed10ad429a04500fb70" protocol=ttrpc version=3 Mar 13 11:40:37.889189 systemd[1]: Started cri-containerd-78530f42cef00d5d8121139b0caaac80ad1302cb28febee071ab773419f7f93f.scope - libcontainer container 78530f42cef00d5d8121139b0caaac80ad1302cb28febee071ab773419f7f93f. Mar 13 11:40:37.893499 systemd[1]: Started cri-containerd-9e5f2d93c16549b2db08c658d3755f0ad051c8860b83cb490f7350aa82020afa.scope - libcontainer container 9e5f2d93c16549b2db08c658d3755f0ad051c8860b83cb490f7350aa82020afa. 
Mar 13 11:40:37.939209 containerd[1906]: time="2026-03-13T11:40:37.939168619Z" level=info msg="StartContainer for \"78530f42cef00d5d8121139b0caaac80ad1302cb28febee071ab773419f7f93f\" returns successfully" Mar 13 11:40:37.939724 containerd[1906]: time="2026-03-13T11:40:37.939672375Z" level=info msg="StartContainer for \"9e5f2d93c16549b2db08c658d3755f0ad051c8860b83cb490f7350aa82020afa\" returns successfully" Mar 13 11:40:38.431098 kubelet[3404]: I0313 11:40:38.431033 3404 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pxw4m" podStartSLOduration=20.431015993 podStartE2EDuration="20.431015993s" podCreationTimestamp="2026-03-13 11:40:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:40:38.430444626 +0000 UTC m=+26.491454723" watchObservedRunningTime="2026-03-13 11:40:38.431015993 +0000 UTC m=+26.492026130" Mar 13 11:40:38.431999 kubelet[3404]: I0313 11:40:38.431156 3404 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fb5gc" podStartSLOduration=20.431151142 podStartE2EDuration="20.431151142s" podCreationTimestamp="2026-03-13 11:40:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:40:38.412039097 +0000 UTC m=+26.473049194" watchObservedRunningTime="2026-03-13 11:40:38.431151142 +0000 UTC m=+26.492161271" Mar 13 11:41:41.457484 systemd[1]: Started sshd@7-10.200.20.31:22-10.200.16.10:43094.service - OpenSSH per-connection server daemon (10.200.16.10:43094). 
Mar 13 11:41:41.868074 sshd[4719]: Accepted publickey for core from 10.200.16.10 port 43094 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:41:41.869209 sshd-session[4719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:41:41.872810 systemd-logind[1879]: New session 10 of user core.
Mar 13 11:41:41.878021 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 13 11:41:42.150493 sshd[4722]: Connection closed by 10.200.16.10 port 43094
Mar 13 11:41:42.151178 sshd-session[4719]: pam_unix(sshd:session): session closed for user core
Mar 13 11:41:42.154726 systemd[1]: sshd@7-10.200.20.31:22-10.200.16.10:43094.service: Deactivated successfully.
Mar 13 11:41:42.156858 systemd[1]: session-10.scope: Deactivated successfully.
Mar 13 11:41:42.158556 systemd-logind[1879]: Session 10 logged out. Waiting for processes to exit.
Mar 13 11:41:42.159986 systemd-logind[1879]: Removed session 10.
Mar 13 11:41:47.262820 systemd[1]: Started sshd@8-10.200.20.31:22-10.200.16.10:43104.service - OpenSSH per-connection server daemon (10.200.16.10:43104).
Mar 13 11:41:47.678720 sshd[4739]: Accepted publickey for core from 10.200.16.10 port 43104 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:41:47.679826 sshd-session[4739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:41:47.683701 systemd-logind[1879]: New session 11 of user core.
Mar 13 11:41:47.693051 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 13 11:41:47.951493 sshd[4742]: Connection closed by 10.200.16.10 port 43104
Mar 13 11:41:47.951936 sshd-session[4739]: pam_unix(sshd:session): session closed for user core
Mar 13 11:41:47.956127 systemd[1]: sshd@8-10.200.20.31:22-10.200.16.10:43104.service: Deactivated successfully.
Mar 13 11:41:47.958521 systemd[1]: session-11.scope: Deactivated successfully.
Mar 13 11:41:47.959335 systemd-logind[1879]: Session 11 logged out. Waiting for processes to exit.
Mar 13 11:41:47.960840 systemd-logind[1879]: Removed session 11.
Mar 13 11:41:53.040921 systemd[1]: Started sshd@9-10.200.20.31:22-10.200.16.10:36388.service - OpenSSH per-connection server daemon (10.200.16.10:36388).
Mar 13 11:41:53.458997 sshd[4756]: Accepted publickey for core from 10.200.16.10 port 36388 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:41:53.460485 sshd-session[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:41:53.464277 systemd-logind[1879]: New session 12 of user core.
Mar 13 11:41:53.474063 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 13 11:41:53.731580 sshd[4759]: Connection closed by 10.200.16.10 port 36388
Mar 13 11:41:53.732254 sshd-session[4756]: pam_unix(sshd:session): session closed for user core
Mar 13 11:41:53.735852 systemd[1]: sshd@9-10.200.20.31:22-10.200.16.10:36388.service: Deactivated successfully.
Mar 13 11:41:53.738015 systemd[1]: session-12.scope: Deactivated successfully.
Mar 13 11:41:53.739668 systemd-logind[1879]: Session 12 logged out. Waiting for processes to exit.
Mar 13 11:41:53.740767 systemd-logind[1879]: Removed session 12.
Mar 13 11:41:58.822824 systemd[1]: Started sshd@10-10.200.20.31:22-10.200.16.10:36392.service - OpenSSH per-connection server daemon (10.200.16.10:36392).
Mar 13 11:41:59.241999 sshd[4772]: Accepted publickey for core from 10.200.16.10 port 36392 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:41:59.243545 sshd-session[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:41:59.247569 systemd-logind[1879]: New session 13 of user core.
Mar 13 11:41:59.253016 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 13 11:41:59.519445 sshd[4775]: Connection closed by 10.200.16.10 port 36392
Mar 13 11:41:59.520013 sshd-session[4772]: pam_unix(sshd:session): session closed for user core
Mar 13 11:41:59.523472 systemd[1]: sshd@10-10.200.20.31:22-10.200.16.10:36392.service: Deactivated successfully.
Mar 13 11:41:59.525169 systemd[1]: session-13.scope: Deactivated successfully.
Mar 13 11:41:59.525868 systemd-logind[1879]: Session 13 logged out. Waiting for processes to exit.
Mar 13 11:41:59.527385 systemd-logind[1879]: Removed session 13.
Mar 13 11:41:59.611329 systemd[1]: Started sshd@11-10.200.20.31:22-10.200.16.10:36400.service - OpenSSH per-connection server daemon (10.200.16.10:36400).
Mar 13 11:42:00.022949 sshd[4787]: Accepted publickey for core from 10.200.16.10 port 36400 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:42:00.024218 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:42:00.027961 systemd-logind[1879]: New session 14 of user core.
Mar 13 11:42:00.036021 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 13 11:42:00.330295 sshd[4790]: Connection closed by 10.200.16.10 port 36400
Mar 13 11:42:00.330654 sshd-session[4787]: pam_unix(sshd:session): session closed for user core
Mar 13 11:42:00.334546 systemd[1]: sshd@11-10.200.20.31:22-10.200.16.10:36400.service: Deactivated successfully.
Mar 13 11:42:00.336005 systemd[1]: session-14.scope: Deactivated successfully.
Mar 13 11:42:00.337297 systemd-logind[1879]: Session 14 logged out. Waiting for processes to exit.
Mar 13 11:42:00.338844 systemd-logind[1879]: Removed session 14.
Mar 13 11:42:00.424199 systemd[1]: Started sshd@12-10.200.20.31:22-10.200.16.10:49640.service - OpenSSH per-connection server daemon (10.200.16.10:49640).
Mar 13 11:42:00.840682 sshd[4800]: Accepted publickey for core from 10.200.16.10 port 49640 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:42:00.843380 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:42:00.848041 systemd-logind[1879]: New session 15 of user core.
Mar 13 11:42:00.857052 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 13 11:42:01.121242 sshd[4803]: Connection closed by 10.200.16.10 port 49640
Mar 13 11:42:01.120420 sshd-session[4800]: pam_unix(sshd:session): session closed for user core
Mar 13 11:42:01.123737 systemd[1]: sshd@12-10.200.20.31:22-10.200.16.10:49640.service: Deactivated successfully.
Mar 13 11:42:01.126019 systemd[1]: session-15.scope: Deactivated successfully.
Mar 13 11:42:01.128416 systemd-logind[1879]: Session 15 logged out. Waiting for processes to exit.
Mar 13 11:42:01.129695 systemd-logind[1879]: Removed session 15.
Mar 13 11:42:06.209974 systemd[1]: Started sshd@13-10.200.20.31:22-10.200.16.10:49644.service - OpenSSH per-connection server daemon (10.200.16.10:49644).
Mar 13 11:42:06.628101 sshd[4815]: Accepted publickey for core from 10.200.16.10 port 49644 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:42:06.629251 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:42:06.632860 systemd-logind[1879]: New session 16 of user core.
Mar 13 11:42:06.638974 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 13 11:42:06.904007 sshd[4818]: Connection closed by 10.200.16.10 port 49644
Mar 13 11:42:06.904744 sshd-session[4815]: pam_unix(sshd:session): session closed for user core
Mar 13 11:42:06.908249 systemd-logind[1879]: Session 16 logged out. Waiting for processes to exit.
Mar 13 11:42:06.908953 systemd[1]: sshd@13-10.200.20.31:22-10.200.16.10:49644.service: Deactivated successfully.
Mar 13 11:42:06.911895 systemd[1]: session-16.scope: Deactivated successfully.
Mar 13 11:42:06.913452 systemd-logind[1879]: Removed session 16.
Mar 13 11:42:06.994708 systemd[1]: Started sshd@14-10.200.20.31:22-10.200.16.10:49650.service - OpenSSH per-connection server daemon (10.200.16.10:49650).
Mar 13 11:42:07.414487 sshd[4830]: Accepted publickey for core from 10.200.16.10 port 49650 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:42:07.416100 sshd-session[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:42:07.420484 systemd-logind[1879]: New session 17 of user core.
Mar 13 11:42:07.428014 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 13 11:42:07.744421 sshd[4833]: Connection closed by 10.200.16.10 port 49650
Mar 13 11:42:07.745172 sshd-session[4830]: pam_unix(sshd:session): session closed for user core
Mar 13 11:42:07.748669 systemd[1]: sshd@14-10.200.20.31:22-10.200.16.10:49650.service: Deactivated successfully.
Mar 13 11:42:07.751324 systemd[1]: session-17.scope: Deactivated successfully.
Mar 13 11:42:07.752353 systemd-logind[1879]: Session 17 logged out. Waiting for processes to exit.
Mar 13 11:42:07.753672 systemd-logind[1879]: Removed session 17.
Mar 13 11:42:07.832151 systemd[1]: Started sshd@15-10.200.20.31:22-10.200.16.10:49662.service - OpenSSH per-connection server daemon (10.200.16.10:49662).
Mar 13 11:42:08.250303 sshd[4843]: Accepted publickey for core from 10.200.16.10 port 49662 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:42:08.251971 sshd-session[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:42:08.256060 systemd-logind[1879]: New session 18 of user core.
Mar 13 11:42:08.262053 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 13 11:42:08.954240 sshd[4846]: Connection closed by 10.200.16.10 port 49662
Mar 13 11:42:08.953447 sshd-session[4843]: pam_unix(sshd:session): session closed for user core
Mar 13 11:42:08.957721 systemd[1]: sshd@15-10.200.20.31:22-10.200.16.10:49662.service: Deactivated successfully.
Mar 13 11:42:08.959544 systemd[1]: session-18.scope: Deactivated successfully.
Mar 13 11:42:08.961819 systemd-logind[1879]: Session 18 logged out. Waiting for processes to exit.
Mar 13 11:42:08.963568 systemd-logind[1879]: Removed session 18.
Mar 13 11:42:09.043537 systemd[1]: Started sshd@16-10.200.20.31:22-10.200.16.10:49676.service - OpenSSH per-connection server daemon (10.200.16.10:49676).
Mar 13 11:42:09.463942 sshd[4861]: Accepted publickey for core from 10.200.16.10 port 49676 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:42:09.465051 sshd-session[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:42:09.470933 systemd-logind[1879]: New session 19 of user core.
Mar 13 11:42:09.476027 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 13 11:42:09.824387 sshd[4864]: Connection closed by 10.200.16.10 port 49676
Mar 13 11:42:09.825791 sshd-session[4861]: pam_unix(sshd:session): session closed for user core
Mar 13 11:42:09.828801 systemd[1]: sshd@16-10.200.20.31:22-10.200.16.10:49676.service: Deactivated successfully.
Mar 13 11:42:09.831207 systemd[1]: session-19.scope: Deactivated successfully.
Mar 13 11:42:09.832412 systemd-logind[1879]: Session 19 logged out. Waiting for processes to exit.
Mar 13 11:42:09.833606 systemd-logind[1879]: Removed session 19.
Mar 13 11:42:09.912072 systemd[1]: Started sshd@17-10.200.20.31:22-10.200.16.10:41782.service - OpenSSH per-connection server daemon (10.200.16.10:41782).
Mar 13 11:42:10.329841 sshd[4875]: Accepted publickey for core from 10.200.16.10 port 41782 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:42:10.331025 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:42:10.334967 systemd-logind[1879]: New session 20 of user core.
Mar 13 11:42:10.344999 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 13 11:42:10.602504 sshd[4878]: Connection closed by 10.200.16.10 port 41782
Mar 13 11:42:10.602947 sshd-session[4875]: pam_unix(sshd:session): session closed for user core
Mar 13 11:42:10.606957 systemd[1]: sshd@17-10.200.20.31:22-10.200.16.10:41782.service: Deactivated successfully.
Mar 13 11:42:10.609149 systemd[1]: session-20.scope: Deactivated successfully.
Mar 13 11:42:10.610060 systemd-logind[1879]: Session 20 logged out. Waiting for processes to exit.
Mar 13 11:42:10.611684 systemd-logind[1879]: Removed session 20.
Mar 13 11:42:15.691261 systemd[1]: Started sshd@18-10.200.20.31:22-10.200.16.10:41796.service - OpenSSH per-connection server daemon (10.200.16.10:41796).
Mar 13 11:42:16.112819 sshd[4892]: Accepted publickey for core from 10.200.16.10 port 41796 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:42:16.114264 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:42:16.119579 systemd-logind[1879]: New session 21 of user core.
Mar 13 11:42:16.128117 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 13 11:42:16.388352 sshd[4895]: Connection closed by 10.200.16.10 port 41796
Mar 13 11:42:16.388902 sshd-session[4892]: pam_unix(sshd:session): session closed for user core
Mar 13 11:42:16.392773 systemd[1]: sshd@18-10.200.20.31:22-10.200.16.10:41796.service: Deactivated successfully.
Mar 13 11:42:16.394359 systemd[1]: session-21.scope: Deactivated successfully.
Mar 13 11:42:16.395195 systemd-logind[1879]: Session 21 logged out. Waiting for processes to exit.
Mar 13 11:42:16.396477 systemd-logind[1879]: Removed session 21.
Mar 13 11:42:21.473833 systemd[1]: Started sshd@19-10.200.20.31:22-10.200.16.10:54276.service - OpenSSH per-connection server daemon (10.200.16.10:54276).
Mar 13 11:42:21.890350 sshd[4911]: Accepted publickey for core from 10.200.16.10 port 54276 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:42:21.891925 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:42:21.896595 systemd-logind[1879]: New session 22 of user core.
Mar 13 11:42:21.904990 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 13 11:42:22.161377 sshd[4914]: Connection closed by 10.200.16.10 port 54276
Mar 13 11:42:22.162429 sshd-session[4911]: pam_unix(sshd:session): session closed for user core
Mar 13 11:42:22.165812 systemd[1]: sshd@19-10.200.20.31:22-10.200.16.10:54276.service: Deactivated successfully.
Mar 13 11:42:22.168287 systemd[1]: session-22.scope: Deactivated successfully.
Mar 13 11:42:22.170994 systemd-logind[1879]: Session 22 logged out. Waiting for processes to exit.
Mar 13 11:42:22.172507 systemd-logind[1879]: Removed session 22.
Mar 13 11:42:27.251268 systemd[1]: Started sshd@20-10.200.20.31:22-10.200.16.10:54290.service - OpenSSH per-connection server daemon (10.200.16.10:54290).
Mar 13 11:42:27.664938 sshd[4926]: Accepted publickey for core from 10.200.16.10 port 54290 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:42:27.665934 sshd-session[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:42:27.669624 systemd-logind[1879]: New session 23 of user core.
Mar 13 11:42:27.677037 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 13 11:42:27.937978 sshd[4929]: Connection closed by 10.200.16.10 port 54290
Mar 13 11:42:27.937773 sshd-session[4926]: pam_unix(sshd:session): session closed for user core
Mar 13 11:42:27.942409 systemd[1]: sshd@20-10.200.20.31:22-10.200.16.10:54290.service: Deactivated successfully.
Mar 13 11:42:27.944951 systemd[1]: session-23.scope: Deactivated successfully.
Mar 13 11:42:27.946427 systemd-logind[1879]: Session 23 logged out. Waiting for processes to exit.
Mar 13 11:42:27.948221 systemd-logind[1879]: Removed session 23.
Mar 13 11:42:28.028527 systemd[1]: Started sshd@21-10.200.20.31:22-10.200.16.10:54298.service - OpenSSH per-connection server daemon (10.200.16.10:54298).
Mar 13 11:42:28.450773 sshd[4941]: Accepted publickey for core from 10.200.16.10 port 54298 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:42:28.451640 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:42:28.456043 systemd-logind[1879]: New session 24 of user core.
Mar 13 11:42:28.462011 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 13 11:42:30.155109 containerd[1906]: time="2026-03-13T11:42:30.155063612Z" level=info msg="StopContainer for \"5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692\" with timeout 30 (s)"
Mar 13 11:42:30.155760 containerd[1906]: time="2026-03-13T11:42:30.155427882Z" level=info msg="Stop container \"5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692\" with signal terminated"
Mar 13 11:42:30.167099 containerd[1906]: time="2026-03-13T11:42:30.167048940Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 13 11:42:30.174552 containerd[1906]: time="2026-03-13T11:42:30.174511233Z" level=info msg="StopContainer for \"e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc\" with timeout 2 (s)"
Mar 13 11:42:30.174833 containerd[1906]: time="2026-03-13T11:42:30.174812124Z" level=info msg="Stop container \"e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc\" with signal terminated"
Mar 13 11:42:30.184346 systemd-networkd[1489]: lxc_health: Link DOWN
Mar 13 11:42:30.184354 systemd-networkd[1489]: lxc_health: Lost carrier
Mar 13 11:42:30.187037 systemd[1]: cri-containerd-5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692.scope: Deactivated successfully.
Mar 13 11:42:30.191430 containerd[1906]: time="2026-03-13T11:42:30.191323681Z" level=info msg="received container exit event container_id:\"5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692\" id:\"5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692\" pid:3926 exited_at:{seconds:1773402150 nanos:188832658}"
Mar 13 11:42:30.205390 systemd[1]: cri-containerd-e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc.scope: Deactivated successfully.
Mar 13 11:42:30.205643 systemd[1]: cri-containerd-e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc.scope: Consumed 4.504s CPU time, 124.8M memory peak, 128K read from disk, 12.9M written to disk.
Mar 13 11:42:30.207839 containerd[1906]: time="2026-03-13T11:42:30.207781380Z" level=info msg="received container exit event container_id:\"e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc\" id:\"e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc\" pid:4042 exited_at:{seconds:1773402150 nanos:207595861}"
Mar 13 11:42:30.213850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692-rootfs.mount: Deactivated successfully.
Mar 13 11:42:30.227990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc-rootfs.mount: Deactivated successfully.
Mar 13 11:42:30.286251 containerd[1906]: time="2026-03-13T11:42:30.286188511Z" level=info msg="StopContainer for \"5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692\" returns successfully"
Mar 13 11:42:30.287843 containerd[1906]: time="2026-03-13T11:42:30.287748186Z" level=info msg="StopPodSandbox for \"ed1595e009ea2450c307f246ea8a00195c3e725ecd877d5cdc0ee324e4043fdf\""
Mar 13 11:42:30.288317 containerd[1906]: time="2026-03-13T11:42:30.287922105Z" level=info msg="Container to stop \"5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 11:42:30.290826 containerd[1906]: time="2026-03-13T11:42:30.290742228Z" level=info msg="StopContainer for \"e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc\" returns successfully"
Mar 13 11:42:30.291452 containerd[1906]: time="2026-03-13T11:42:30.291199422Z" level=info msg="StopPodSandbox for \"ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96\""
Mar 13 11:42:30.291452 containerd[1906]: time="2026-03-13T11:42:30.291243751Z" level=info msg="Container to stop \"20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 11:42:30.291452 containerd[1906]: time="2026-03-13T11:42:30.291251551Z" level=info msg="Container to stop \"fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 11:42:30.291452 containerd[1906]: time="2026-03-13T11:42:30.291256952Z" level=info msg="Container to stop \"786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 11:42:30.291452 containerd[1906]: time="2026-03-13T11:42:30.291262568Z" level=info msg="Container to stop \"3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 11:42:30.291452 containerd[1906]: time="2026-03-13T11:42:30.291267440Z" level=info msg="Container to stop \"e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 11:42:30.295473 systemd[1]: cri-containerd-ed1595e009ea2450c307f246ea8a00195c3e725ecd877d5cdc0ee324e4043fdf.scope: Deactivated successfully.
Mar 13 11:42:30.297265 systemd[1]: cri-containerd-ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96.scope: Deactivated successfully.
Mar 13 11:42:30.299722 containerd[1906]: time="2026-03-13T11:42:30.299673816Z" level=info msg="received sandbox exit event container_id:\"ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96\" id:\"ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96\" exit_status:137 exited_at:{seconds:1773402150 nanos:299499314}" monitor_name=podsandbox
Mar 13 11:42:30.303808 containerd[1906]: time="2026-03-13T11:42:30.303783245Z" level=info msg="received sandbox exit event container_id:\"ed1595e009ea2450c307f246ea8a00195c3e725ecd877d5cdc0ee324e4043fdf\" id:\"ed1595e009ea2450c307f246ea8a00195c3e725ecd877d5cdc0ee324e4043fdf\" exit_status:137 exited_at:{seconds:1773402150 nanos:303331364}" monitor_name=podsandbox
Mar 13 11:42:30.322010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96-rootfs.mount: Deactivated successfully.
Mar 13 11:42:30.327722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed1595e009ea2450c307f246ea8a00195c3e725ecd877d5cdc0ee324e4043fdf-rootfs.mount: Deactivated successfully.
Mar 13 11:42:30.337601 containerd[1906]: time="2026-03-13T11:42:30.337540467Z" level=info msg="shim disconnected" id=ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96 namespace=k8s.io
Mar 13 11:42:30.337755 containerd[1906]: time="2026-03-13T11:42:30.337644575Z" level=warning msg="cleaning up after shim disconnected" id=ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96 namespace=k8s.io
Mar 13 11:42:30.337755 containerd[1906]: time="2026-03-13T11:42:30.337675896Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 13 11:42:30.338765 containerd[1906]: time="2026-03-13T11:42:30.338616692Z" level=info msg="shim disconnected" id=ed1595e009ea2450c307f246ea8a00195c3e725ecd877d5cdc0ee324e4043fdf namespace=k8s.io
Mar 13 11:42:30.338765 containerd[1906]: time="2026-03-13T11:42:30.338639477Z" level=warning msg="cleaning up after shim disconnected" id=ed1595e009ea2450c307f246ea8a00195c3e725ecd877d5cdc0ee324e4043fdf namespace=k8s.io
Mar 13 11:42:30.338765 containerd[1906]: time="2026-03-13T11:42:30.338663413Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 13 11:42:30.348908 containerd[1906]: time="2026-03-13T11:42:30.348835177Z" level=info msg="received sandbox container exit event sandbox_id:\"ed1595e009ea2450c307f246ea8a00195c3e725ecd877d5cdc0ee324e4043fdf\" exit_status:137 exited_at:{seconds:1773402150 nanos:303331364}" monitor_name=criService
Mar 13 11:42:30.350262 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96-shm.mount: Deactivated successfully.
Mar 13 11:42:30.350907 containerd[1906]: time="2026-03-13T11:42:30.348836265Z" level=info msg="received sandbox container exit event sandbox_id:\"ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96\" exit_status:137 exited_at:{seconds:1773402150 nanos:299499314}" monitor_name=criService
Mar 13 11:42:30.350907 containerd[1906]: time="2026-03-13T11:42:30.350451895Z" level=info msg="TearDown network for sandbox \"ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96\" successfully"
Mar 13 11:42:30.350907 containerd[1906]: time="2026-03-13T11:42:30.350470319Z" level=info msg="StopPodSandbox for \"ad42f4b3b45446144abdace3be7073137df83ece346959dc83d01a302ec0fb96\" returns successfully"
Mar 13 11:42:30.351554 containerd[1906]: time="2026-03-13T11:42:30.351525015Z" level=info msg="TearDown network for sandbox \"ed1595e009ea2450c307f246ea8a00195c3e725ecd877d5cdc0ee324e4043fdf\" successfully"
Mar 13 11:42:30.351554 containerd[1906]: time="2026-03-13T11:42:30.351546832Z" level=info msg="StopPodSandbox for \"ed1595e009ea2450c307f246ea8a00195c3e725ecd877d5cdc0ee324e4043fdf\" returns successfully"
Mar 13 11:42:30.466452 kubelet[3404]: I0313 11:42:30.466084 3404 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxf8q\" (UniqueName: \"kubernetes.io/projected/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-kube-api-access-bxf8q\") pod \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") "
Mar 13 11:42:30.466452 kubelet[3404]: I0313 11:42:30.466236 3404 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42534d47-bb9a-459f-a63c-86a4f0de8782-cilium-config-path\") pod \"42534d47-bb9a-459f-a63c-86a4f0de8782\" (UID: \"42534d47-bb9a-459f-a63c-86a4f0de8782\") "
Mar 13 11:42:30.467113 kubelet[3404]: I0313 11:42:30.466510 3404 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-cilium-run\") pod \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") "
Mar 13 11:42:30.467113 kubelet[3404]: I0313 11:42:30.466539 3404 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-host-proc-sys-kernel\") pod \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") "
Mar 13 11:42:30.467113 kubelet[3404]: I0313 11:42:30.466553 3404 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-cni-path\") pod \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") "
Mar 13 11:42:30.467113 kubelet[3404]: I0313 11:42:30.466564 3404 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-bpf-maps\") pod \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") "
Mar 13 11:42:30.467551 kubelet[3404]: I0313 11:42:30.467215 3404 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-lib-modules\") pod \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") "
Mar 13 11:42:30.467551 kubelet[3404]: I0313 11:42:30.467246 3404 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-clustermesh-secrets\") pod \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") "
Mar 13 11:42:30.467551 kubelet[3404]: I0313 11:42:30.467259 3404 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-xtables-lock\") pod \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") "
Mar 13 11:42:30.467551 kubelet[3404]: I0313 11:42:30.467442 3404 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-hubble-tls\") pod \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") "
Mar 13 11:42:30.467551 kubelet[3404]: I0313 11:42:30.467458 3404 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-host-proc-sys-net\") pod \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") "
Mar 13 11:42:30.467551 kubelet[3404]: I0313 11:42:30.467471 3404 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c6jq\" (UniqueName: \"kubernetes.io/projected/42534d47-bb9a-459f-a63c-86a4f0de8782-kube-api-access-5c6jq\") pod \"42534d47-bb9a-459f-a63c-86a4f0de8782\" (UID: \"42534d47-bb9a-459f-a63c-86a4f0de8782\") "
Mar 13 11:42:30.467788 kubelet[3404]: I0313 11:42:30.467481 3404 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-cilium-cgroup\") pod \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") "
Mar 13 11:42:30.467788 kubelet[3404]: I0313 11:42:30.467491 3404 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-cilium-config-path\") pod \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") "
Mar 13 11:42:30.467788 kubelet[3404]: I0313 11:42:30.467499 3404 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-hostproc\") pod \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") "
Mar 13 11:42:30.467788 kubelet[3404]: I0313 11:42:30.467599 3404 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-etc-cni-netd\") pod \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\" (UID: \"2af6b45d-7c43-4baa-8b96-02659e1a7ff6\") "
Mar 13 11:42:30.467979 kubelet[3404]: I0313 11:42:30.467652 3404 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2af6b45d-7c43-4baa-8b96-02659e1a7ff6" (UID: "2af6b45d-7c43-4baa-8b96-02659e1a7ff6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 11:42:30.468962 kubelet[3404]: I0313 11:42:30.468937 3404 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2af6b45d-7c43-4baa-8b96-02659e1a7ff6" (UID: "2af6b45d-7c43-4baa-8b96-02659e1a7ff6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 11:42:30.469117 kubelet[3404]: I0313 11:42:30.468969 3404 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2af6b45d-7c43-4baa-8b96-02659e1a7ff6" (UID: "2af6b45d-7c43-4baa-8b96-02659e1a7ff6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 11:42:30.469117 kubelet[3404]: I0313 11:42:30.468980 3404 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2af6b45d-7c43-4baa-8b96-02659e1a7ff6" (UID: "2af6b45d-7c43-4baa-8b96-02659e1a7ff6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 11:42:30.469117 kubelet[3404]: I0313 11:42:30.468992 3404 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-cni-path" (OuterVolumeSpecName: "cni-path") pod "2af6b45d-7c43-4baa-8b96-02659e1a7ff6" (UID: "2af6b45d-7c43-4baa-8b96-02659e1a7ff6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 11:42:30.469117 kubelet[3404]: I0313 11:42:30.469000 3404 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2af6b45d-7c43-4baa-8b96-02659e1a7ff6" (UID: "2af6b45d-7c43-4baa-8b96-02659e1a7ff6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 11:42:30.469117 kubelet[3404]: I0313 11:42:30.469008 3404 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2af6b45d-7c43-4baa-8b96-02659e1a7ff6" (UID: "2af6b45d-7c43-4baa-8b96-02659e1a7ff6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 11:42:30.470241 kubelet[3404]: I0313 11:42:30.470215 3404 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42534d47-bb9a-459f-a63c-86a4f0de8782-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "42534d47-bb9a-459f-a63c-86a4f0de8782" (UID: "42534d47-bb9a-459f-a63c-86a4f0de8782"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 13 11:42:30.470749 kubelet[3404]: I0313 11:42:30.470364 3404 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2af6b45d-7c43-4baa-8b96-02659e1a7ff6" (UID: "2af6b45d-7c43-4baa-8b96-02659e1a7ff6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 11:42:30.471070 kubelet[3404]: I0313 11:42:30.470849 3404 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-hostproc" (OuterVolumeSpecName: "hostproc") pod "2af6b45d-7c43-4baa-8b96-02659e1a7ff6" (UID: "2af6b45d-7c43-4baa-8b96-02659e1a7ff6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 11:42:30.471070 kubelet[3404]: I0313 11:42:30.471055 3404 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2af6b45d-7c43-4baa-8b96-02659e1a7ff6" (UID: "2af6b45d-7c43-4baa-8b96-02659e1a7ff6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 11:42:30.471698 kubelet[3404]: I0313 11:42:30.471672 3404 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-kube-api-access-bxf8q" (OuterVolumeSpecName: "kube-api-access-bxf8q") pod "2af6b45d-7c43-4baa-8b96-02659e1a7ff6" (UID: "2af6b45d-7c43-4baa-8b96-02659e1a7ff6"). InnerVolumeSpecName "kube-api-access-bxf8q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 13 11:42:30.472076 kubelet[3404]: I0313 11:42:30.472050 3404 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2af6b45d-7c43-4baa-8b96-02659e1a7ff6" (UID: "2af6b45d-7c43-4baa-8b96-02659e1a7ff6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 13 11:42:30.472911 kubelet[3404]: I0313 11:42:30.472892 3404 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42534d47-bb9a-459f-a63c-86a4f0de8782-kube-api-access-5c6jq" (OuterVolumeSpecName: "kube-api-access-5c6jq") pod "42534d47-bb9a-459f-a63c-86a4f0de8782" (UID: "42534d47-bb9a-459f-a63c-86a4f0de8782"). InnerVolumeSpecName "kube-api-access-5c6jq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 13 11:42:30.473809 kubelet[3404]: I0313 11:42:30.473776 3404 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2af6b45d-7c43-4baa-8b96-02659e1a7ff6" (UID: "2af6b45d-7c43-4baa-8b96-02659e1a7ff6"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 11:42:30.474498 kubelet[3404]: I0313 11:42:30.474474 3404 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2af6b45d-7c43-4baa-8b96-02659e1a7ff6" (UID: "2af6b45d-7c43-4baa-8b96-02659e1a7ff6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 11:42:30.567784 kubelet[3404]: I0313 11:42:30.567743 3404 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42534d47-bb9a-459f-a63c-86a4f0de8782-cilium-config-path\") on node \"ci-4459.2.101-83511db97f\" DevicePath \"\"" Mar 13 11:42:30.567784 kubelet[3404]: I0313 11:42:30.567778 3404 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-cilium-run\") on node \"ci-4459.2.101-83511db97f\" DevicePath \"\"" Mar 13 11:42:30.567784 kubelet[3404]: I0313 11:42:30.567786 3404 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-host-proc-sys-kernel\") on node \"ci-4459.2.101-83511db97f\" DevicePath \"\"" Mar 13 11:42:30.567784 kubelet[3404]: I0313 11:42:30.567795 3404 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-cni-path\") on node \"ci-4459.2.101-83511db97f\" DevicePath \"\"" Mar 13 11:42:30.567784 kubelet[3404]: I0313 11:42:30.567803 3404 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-bpf-maps\") on node \"ci-4459.2.101-83511db97f\" DevicePath \"\"" Mar 13 11:42:30.568027 kubelet[3404]: I0313 11:42:30.567808 3404 reconciler_common.go:299] "Volume 
detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-lib-modules\") on node \"ci-4459.2.101-83511db97f\" DevicePath \"\"" Mar 13 11:42:30.568027 kubelet[3404]: I0313 11:42:30.567815 3404 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-clustermesh-secrets\") on node \"ci-4459.2.101-83511db97f\" DevicePath \"\"" Mar 13 11:42:30.568027 kubelet[3404]: I0313 11:42:30.567819 3404 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-xtables-lock\") on node \"ci-4459.2.101-83511db97f\" DevicePath \"\"" Mar 13 11:42:30.568027 kubelet[3404]: I0313 11:42:30.567825 3404 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-hubble-tls\") on node \"ci-4459.2.101-83511db97f\" DevicePath \"\"" Mar 13 11:42:30.568027 kubelet[3404]: I0313 11:42:30.567829 3404 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-host-proc-sys-net\") on node \"ci-4459.2.101-83511db97f\" DevicePath \"\"" Mar 13 11:42:30.568027 kubelet[3404]: I0313 11:42:30.567834 3404 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5c6jq\" (UniqueName: \"kubernetes.io/projected/42534d47-bb9a-459f-a63c-86a4f0de8782-kube-api-access-5c6jq\") on node \"ci-4459.2.101-83511db97f\" DevicePath \"\"" Mar 13 11:42:30.568027 kubelet[3404]: I0313 11:42:30.567839 3404 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-cilium-cgroup\") on node \"ci-4459.2.101-83511db97f\" DevicePath \"\"" Mar 13 11:42:30.568027 kubelet[3404]: I0313 11:42:30.567844 3404 
reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-cilium-config-path\") on node \"ci-4459.2.101-83511db97f\" DevicePath \"\"" Mar 13 11:42:30.568148 kubelet[3404]: I0313 11:42:30.567849 3404 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-hostproc\") on node \"ci-4459.2.101-83511db97f\" DevicePath \"\"" Mar 13 11:42:30.568148 kubelet[3404]: I0313 11:42:30.567855 3404 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-etc-cni-netd\") on node \"ci-4459.2.101-83511db97f\" DevicePath \"\"" Mar 13 11:42:30.568148 kubelet[3404]: I0313 11:42:30.567861 3404 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bxf8q\" (UniqueName: \"kubernetes.io/projected/2af6b45d-7c43-4baa-8b96-02659e1a7ff6-kube-api-access-bxf8q\") on node \"ci-4459.2.101-83511db97f\" DevicePath \"\"" Mar 13 11:42:30.598011 kubelet[3404]: I0313 11:42:30.597986 3404 scope.go:117] "RemoveContainer" containerID="5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692" Mar 13 11:42:30.602324 containerd[1906]: time="2026-03-13T11:42:30.602160274Z" level=info msg="RemoveContainer for \"5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692\"" Mar 13 11:42:30.602981 systemd[1]: Removed slice kubepods-besteffort-pod42534d47_bb9a_459f_a63c_86a4f0de8782.slice - libcontainer container kubepods-besteffort-pod42534d47_bb9a_459f_a63c_86a4f0de8782.slice. 
Mar 13 11:42:30.612328 containerd[1906]: time="2026-03-13T11:42:30.612160663Z" level=info msg="RemoveContainer for \"5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692\" returns successfully" Mar 13 11:42:30.612897 kubelet[3404]: I0313 11:42:30.612814 3404 scope.go:117] "RemoveContainer" containerID="5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692" Mar 13 11:42:30.613109 systemd[1]: Removed slice kubepods-burstable-pod2af6b45d_7c43_4baa_8b96_02659e1a7ff6.slice - libcontainer container kubepods-burstable-pod2af6b45d_7c43_4baa_8b96_02659e1a7ff6.slice. Mar 13 11:42:30.613197 systemd[1]: kubepods-burstable-pod2af6b45d_7c43_4baa_8b96_02659e1a7ff6.slice: Consumed 4.572s CPU time, 125.3M memory peak, 128K read from disk, 12.9M written to disk. Mar 13 11:42:30.613740 containerd[1906]: time="2026-03-13T11:42:30.613460209Z" level=error msg="ContainerStatus for \"5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692\": not found" Mar 13 11:42:30.613974 kubelet[3404]: E0313 11:42:30.613951 3404 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692\": not found" containerID="5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692" Mar 13 11:42:30.614038 kubelet[3404]: I0313 11:42:30.613975 3404 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692"} err="failed to get container status \"5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a7ba3e80ff78eac289d10de2685f55c9cb43735a002592aee9213b5f58b8692\": not 
found" Mar 13 11:42:30.614038 kubelet[3404]: I0313 11:42:30.614002 3404 scope.go:117] "RemoveContainer" containerID="e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc" Mar 13 11:42:30.616756 containerd[1906]: time="2026-03-13T11:42:30.616730437Z" level=info msg="RemoveContainer for \"e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc\"" Mar 13 11:42:30.627298 containerd[1906]: time="2026-03-13T11:42:30.627211453Z" level=info msg="RemoveContainer for \"e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc\" returns successfully" Mar 13 11:42:30.627513 kubelet[3404]: I0313 11:42:30.627485 3404 scope.go:117] "RemoveContainer" containerID="fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6" Mar 13 11:42:30.628788 containerd[1906]: time="2026-03-13T11:42:30.628700109Z" level=info msg="RemoveContainer for \"fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6\"" Mar 13 11:42:30.638257 containerd[1906]: time="2026-03-13T11:42:30.638211896Z" level=info msg="RemoveContainer for \"fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6\" returns successfully" Mar 13 11:42:30.638541 kubelet[3404]: I0313 11:42:30.638514 3404 scope.go:117] "RemoveContainer" containerID="3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2" Mar 13 11:42:30.640928 containerd[1906]: time="2026-03-13T11:42:30.640467630Z" level=info msg="RemoveContainer for \"3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2\"" Mar 13 11:42:30.647903 containerd[1906]: time="2026-03-13T11:42:30.647849663Z" level=info msg="RemoveContainer for \"3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2\" returns successfully" Mar 13 11:42:30.648267 kubelet[3404]: I0313 11:42:30.648135 3404 scope.go:117] "RemoveContainer" containerID="20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722" Mar 13 11:42:30.649796 containerd[1906]: time="2026-03-13T11:42:30.649772040Z" level=info msg="RemoveContainer for 
\"20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722\"" Mar 13 11:42:30.657337 containerd[1906]: time="2026-03-13T11:42:30.657302399Z" level=info msg="RemoveContainer for \"20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722\" returns successfully" Mar 13 11:42:30.659113 kubelet[3404]: I0313 11:42:30.659025 3404 scope.go:117] "RemoveContainer" containerID="786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d" Mar 13 11:42:30.662075 containerd[1906]: time="2026-03-13T11:42:30.662054212Z" level=info msg="RemoveContainer for \"786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d\"" Mar 13 11:42:30.669859 containerd[1906]: time="2026-03-13T11:42:30.669789723Z" level=info msg="RemoveContainer for \"786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d\" returns successfully" Mar 13 11:42:30.670074 kubelet[3404]: I0313 11:42:30.670044 3404 scope.go:117] "RemoveContainer" containerID="e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc" Mar 13 11:42:30.670249 containerd[1906]: time="2026-03-13T11:42:30.670221083Z" level=error msg="ContainerStatus for \"e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc\": not found" Mar 13 11:42:30.670458 kubelet[3404]: E0313 11:42:30.670394 3404 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc\": not found" containerID="e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc" Mar 13 11:42:30.670458 kubelet[3404]: I0313 11:42:30.670417 3404 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc"} err="failed to get 
container status \"e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"e833588e902733a63f1c15e5131ebd986ae4b9b4c5d6636892bc6c31425b21cc\": not found" Mar 13 11:42:30.670458 kubelet[3404]: I0313 11:42:30.670435 3404 scope.go:117] "RemoveContainer" containerID="fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6" Mar 13 11:42:30.670734 containerd[1906]: time="2026-03-13T11:42:30.670697293Z" level=error msg="ContainerStatus for \"fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6\": not found" Mar 13 11:42:30.670958 kubelet[3404]: E0313 11:42:30.670843 3404 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6\": not found" containerID="fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6" Mar 13 11:42:30.670958 kubelet[3404]: I0313 11:42:30.670862 3404 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6"} err="failed to get container status \"fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd0c71d156cab8ba2a6b2fee388ad89a40a5361e1c83d4b03c204044b194beb6\": not found" Mar 13 11:42:30.670958 kubelet[3404]: I0313 11:42:30.670893 3404 scope.go:117] "RemoveContainer" containerID="3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2" Mar 13 11:42:30.671170 containerd[1906]: time="2026-03-13T11:42:30.671143518Z" level=error msg="ContainerStatus for 
\"3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2\": not found" Mar 13 11:42:30.671364 kubelet[3404]: E0313 11:42:30.671339 3404 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2\": not found" containerID="3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2" Mar 13 11:42:30.671455 kubelet[3404]: I0313 11:42:30.671362 3404 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2"} err="failed to get container status \"3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2\": rpc error: code = NotFound desc = an error occurred when try to find container \"3bcbfbcaeff113ea2dfa2e52a8f304c9bf40420116e9848a9d06bc5cd4c57ce2\": not found" Mar 13 11:42:30.671455 kubelet[3404]: I0313 11:42:30.671375 3404 scope.go:117] "RemoveContainer" containerID="20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722" Mar 13 11:42:30.671564 containerd[1906]: time="2026-03-13T11:42:30.671512236Z" level=error msg="ContainerStatus for \"20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722\": not found" Mar 13 11:42:30.671734 kubelet[3404]: E0313 11:42:30.671715 3404 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722\": not found" 
containerID="20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722" Mar 13 11:42:30.671822 kubelet[3404]: I0313 11:42:30.671807 3404 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722"} err="failed to get container status \"20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722\": rpc error: code = NotFound desc = an error occurred when try to find container \"20f2a4dc1f00efd3542b2e7e603a46deedfdbe544214e271a2ac9b59acae1722\": not found" Mar 13 11:42:30.671967 kubelet[3404]: I0313 11:42:30.671890 3404 scope.go:117] "RemoveContainer" containerID="786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d" Mar 13 11:42:30.672068 containerd[1906]: time="2026-03-13T11:42:30.672025408Z" level=error msg="ContainerStatus for \"786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d\": not found" Mar 13 11:42:30.672191 kubelet[3404]: E0313 11:42:30.672174 3404 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d\": not found" containerID="786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d" Mar 13 11:42:30.672268 kubelet[3404]: I0313 11:42:30.672249 3404 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d"} err="failed to get container status \"786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d\": rpc error: code = NotFound desc = an error occurred when try to find container \"786ba992a36678f5b83a02fb60b1780b9be2c451a209f6e8b1f59ea5aed37d0d\": not found" Mar 13 
11:42:31.214162 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ed1595e009ea2450c307f246ea8a00195c3e725ecd877d5cdc0ee324e4043fdf-shm.mount: Deactivated successfully. Mar 13 11:42:31.214250 systemd[1]: var-lib-kubelet-pods-42534d47\x2dbb9a\x2d459f\x2da63c\x2d86a4f0de8782-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5c6jq.mount: Deactivated successfully. Mar 13 11:42:31.214295 systemd[1]: var-lib-kubelet-pods-2af6b45d\x2d7c43\x2d4baa\x2d8b96\x2d02659e1a7ff6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbxf8q.mount: Deactivated successfully. Mar 13 11:42:31.214342 systemd[1]: var-lib-kubelet-pods-2af6b45d\x2d7c43\x2d4baa\x2d8b96\x2d02659e1a7ff6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 13 11:42:31.214377 systemd[1]: var-lib-kubelet-pods-2af6b45d\x2d7c43\x2d4baa\x2d8b96\x2d02659e1a7ff6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 13 11:42:32.166534 sshd[4944]: Connection closed by 10.200.16.10 port 54298 Mar 13 11:42:32.167004 sshd-session[4941]: pam_unix(sshd:session): session closed for user core Mar 13 11:42:32.170759 systemd-logind[1879]: Session 24 logged out. Waiting for processes to exit. Mar 13 11:42:32.171651 systemd[1]: sshd@21-10.200.20.31:22-10.200.16.10:54298.service: Deactivated successfully. Mar 13 11:42:32.175413 systemd[1]: session-24.scope: Deactivated successfully. Mar 13 11:42:32.177826 systemd-logind[1879]: Removed session 24. Mar 13 11:42:32.255898 systemd[1]: Started sshd@22-10.200.20.31:22-10.200.16.10:39540.service - OpenSSH per-connection server daemon (10.200.16.10:39540). 
Mar 13 11:42:32.300272 kubelet[3404]: I0313 11:42:32.300225 3404 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2af6b45d-7c43-4baa-8b96-02659e1a7ff6" path="/var/lib/kubelet/pods/2af6b45d-7c43-4baa-8b96-02659e1a7ff6/volumes" Mar 13 11:42:32.300631 kubelet[3404]: I0313 11:42:32.300608 3404 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42534d47-bb9a-459f-a63c-86a4f0de8782" path="/var/lib/kubelet/pods/42534d47-bb9a-459f-a63c-86a4f0de8782/volumes" Mar 13 11:42:32.376423 kubelet[3404]: E0313 11:42:32.376360 3404 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 13 11:42:32.681036 sshd[5091]: Accepted publickey for core from 10.200.16.10 port 39540 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0 Mar 13 11:42:32.682314 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 11:42:32.686495 systemd-logind[1879]: New session 25 of user core. Mar 13 11:42:32.692011 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 13 11:42:33.290355 systemd[1]: Created slice kubepods-burstable-pod91aabe79_1bc1_467d_87ba_7a89817acc01.slice - libcontainer container kubepods-burstable-pod91aabe79_1bc1_467d_87ba_7a89817acc01.slice. Mar 13 11:42:33.300898 sshd[5094]: Connection closed by 10.200.16.10 port 39540 Mar 13 11:42:33.300234 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Mar 13 11:42:33.304829 systemd-logind[1879]: Session 25 logged out. Waiting for processes to exit. Mar 13 11:42:33.305628 systemd[1]: sshd@22-10.200.20.31:22-10.200.16.10:39540.service: Deactivated successfully. Mar 13 11:42:33.310193 systemd[1]: session-25.scope: Deactivated successfully. Mar 13 11:42:33.312671 systemd-logind[1879]: Removed session 25. 
Mar 13 11:42:33.384021 kubelet[3404]: I0313 11:42:33.383980 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/91aabe79-1bc1-467d-87ba-7a89817acc01-cni-path\") pod \"cilium-wngjj\" (UID: \"91aabe79-1bc1-467d-87ba-7a89817acc01\") " pod="kube-system/cilium-wngjj" Mar 13 11:42:33.384021 kubelet[3404]: I0313 11:42:33.384017 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91aabe79-1bc1-467d-87ba-7a89817acc01-lib-modules\") pod \"cilium-wngjj\" (UID: \"91aabe79-1bc1-467d-87ba-7a89817acc01\") " pod="kube-system/cilium-wngjj" Mar 13 11:42:33.384313 kubelet[3404]: I0313 11:42:33.384035 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/91aabe79-1bc1-467d-87ba-7a89817acc01-host-proc-sys-kernel\") pod \"cilium-wngjj\" (UID: \"91aabe79-1bc1-467d-87ba-7a89817acc01\") " pod="kube-system/cilium-wngjj" Mar 13 11:42:33.384313 kubelet[3404]: I0313 11:42:33.384046 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/91aabe79-1bc1-467d-87ba-7a89817acc01-cilium-run\") pod \"cilium-wngjj\" (UID: \"91aabe79-1bc1-467d-87ba-7a89817acc01\") " pod="kube-system/cilium-wngjj" Mar 13 11:42:33.384313 kubelet[3404]: I0313 11:42:33.384057 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91aabe79-1bc1-467d-87ba-7a89817acc01-etc-cni-netd\") pod \"cilium-wngjj\" (UID: \"91aabe79-1bc1-467d-87ba-7a89817acc01\") " pod="kube-system/cilium-wngjj" Mar 13 11:42:33.384313 kubelet[3404]: I0313 11:42:33.384067 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/91aabe79-1bc1-467d-87ba-7a89817acc01-host-proc-sys-net\") pod \"cilium-wngjj\" (UID: \"91aabe79-1bc1-467d-87ba-7a89817acc01\") " pod="kube-system/cilium-wngjj" Mar 13 11:42:33.384313 kubelet[3404]: I0313 11:42:33.384077 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vnsc\" (UniqueName: \"kubernetes.io/projected/91aabe79-1bc1-467d-87ba-7a89817acc01-kube-api-access-5vnsc\") pod \"cilium-wngjj\" (UID: \"91aabe79-1bc1-467d-87ba-7a89817acc01\") " pod="kube-system/cilium-wngjj" Mar 13 11:42:33.384313 kubelet[3404]: I0313 11:42:33.384089 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/91aabe79-1bc1-467d-87ba-7a89817acc01-bpf-maps\") pod \"cilium-wngjj\" (UID: \"91aabe79-1bc1-467d-87ba-7a89817acc01\") " pod="kube-system/cilium-wngjj" Mar 13 11:42:33.384436 kubelet[3404]: I0313 11:42:33.384108 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91aabe79-1bc1-467d-87ba-7a89817acc01-xtables-lock\") pod \"cilium-wngjj\" (UID: \"91aabe79-1bc1-467d-87ba-7a89817acc01\") " pod="kube-system/cilium-wngjj" Mar 13 11:42:33.384436 kubelet[3404]: I0313 11:42:33.384119 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/91aabe79-1bc1-467d-87ba-7a89817acc01-hostproc\") pod \"cilium-wngjj\" (UID: \"91aabe79-1bc1-467d-87ba-7a89817acc01\") " pod="kube-system/cilium-wngjj" Mar 13 11:42:33.384436 kubelet[3404]: I0313 11:42:33.384127 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/91aabe79-1bc1-467d-87ba-7a89817acc01-cilium-cgroup\") 
pod \"cilium-wngjj\" (UID: \"91aabe79-1bc1-467d-87ba-7a89817acc01\") " pod="kube-system/cilium-wngjj" Mar 13 11:42:33.384436 kubelet[3404]: I0313 11:42:33.384136 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/91aabe79-1bc1-467d-87ba-7a89817acc01-clustermesh-secrets\") pod \"cilium-wngjj\" (UID: \"91aabe79-1bc1-467d-87ba-7a89817acc01\") " pod="kube-system/cilium-wngjj" Mar 13 11:42:33.384436 kubelet[3404]: I0313 11:42:33.384145 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91aabe79-1bc1-467d-87ba-7a89817acc01-cilium-config-path\") pod \"cilium-wngjj\" (UID: \"91aabe79-1bc1-467d-87ba-7a89817acc01\") " pod="kube-system/cilium-wngjj" Mar 13 11:42:33.384436 kubelet[3404]: I0313 11:42:33.384153 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/91aabe79-1bc1-467d-87ba-7a89817acc01-cilium-ipsec-secrets\") pod \"cilium-wngjj\" (UID: \"91aabe79-1bc1-467d-87ba-7a89817acc01\") " pod="kube-system/cilium-wngjj" Mar 13 11:42:33.384525 kubelet[3404]: I0313 11:42:33.384163 3404 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/91aabe79-1bc1-467d-87ba-7a89817acc01-hubble-tls\") pod \"cilium-wngjj\" (UID: \"91aabe79-1bc1-467d-87ba-7a89817acc01\") " pod="kube-system/cilium-wngjj" Mar 13 11:42:33.387087 systemd[1]: Started sshd@23-10.200.20.31:22-10.200.16.10:39556.service - OpenSSH per-connection server daemon (10.200.16.10:39556). 
Mar 13 11:42:33.604328 containerd[1906]: time="2026-03-13T11:42:33.604217980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wngjj,Uid:91aabe79-1bc1-467d-87ba-7a89817acc01,Namespace:kube-system,Attempt:0,}"
Mar 13 11:42:33.635755 containerd[1906]: time="2026-03-13T11:42:33.635717436Z" level=info msg="connecting to shim 1cec0087ac0e35bc06cc026c303817cf396ea32fa7bf5ceafe4462f9f61a0218" address="unix:///run/containerd/s/4737063f2dc06fceab31fd21d165eda2d57efc8beb4ba17ce1e9389830819726" namespace=k8s.io protocol=ttrpc version=3
Mar 13 11:42:33.657005 systemd[1]: Started cri-containerd-1cec0087ac0e35bc06cc026c303817cf396ea32fa7bf5ceafe4462f9f61a0218.scope - libcontainer container 1cec0087ac0e35bc06cc026c303817cf396ea32fa7bf5ceafe4462f9f61a0218.
Mar 13 11:42:33.678121 containerd[1906]: time="2026-03-13T11:42:33.678028264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wngjj,Uid:91aabe79-1bc1-467d-87ba-7a89817acc01,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cec0087ac0e35bc06cc026c303817cf396ea32fa7bf5ceafe4462f9f61a0218\""
Mar 13 11:42:33.686235 containerd[1906]: time="2026-03-13T11:42:33.686196823Z" level=info msg="CreateContainer within sandbox \"1cec0087ac0e35bc06cc026c303817cf396ea32fa7bf5ceafe4462f9f61a0218\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 13 11:42:33.704804 containerd[1906]: time="2026-03-13T11:42:33.704343234Z" level=info msg="Container 0ec6c79ecf819036e27023b21cc4068f65d93777e0f1bdd489098961cd650d4e: CDI devices from CRI Config.CDIDevices: []"
Mar 13 11:42:33.716789 containerd[1906]: time="2026-03-13T11:42:33.716753691Z" level=info msg="CreateContainer within sandbox \"1cec0087ac0e35bc06cc026c303817cf396ea32fa7bf5ceafe4462f9f61a0218\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0ec6c79ecf819036e27023b21cc4068f65d93777e0f1bdd489098961cd650d4e\""
Mar 13 11:42:33.717661 containerd[1906]: time="2026-03-13T11:42:33.717642677Z" level=info msg="StartContainer for \"0ec6c79ecf819036e27023b21cc4068f65d93777e0f1bdd489098961cd650d4e\""
Mar 13 11:42:33.718627 containerd[1906]: time="2026-03-13T11:42:33.718606233Z" level=info msg="connecting to shim 0ec6c79ecf819036e27023b21cc4068f65d93777e0f1bdd489098961cd650d4e" address="unix:///run/containerd/s/4737063f2dc06fceab31fd21d165eda2d57efc8beb4ba17ce1e9389830819726" protocol=ttrpc version=3
Mar 13 11:42:33.737008 systemd[1]: Started cri-containerd-0ec6c79ecf819036e27023b21cc4068f65d93777e0f1bdd489098961cd650d4e.scope - libcontainer container 0ec6c79ecf819036e27023b21cc4068f65d93777e0f1bdd489098961cd650d4e.
Mar 13 11:42:33.765053 containerd[1906]: time="2026-03-13T11:42:33.765008137Z" level=info msg="StartContainer for \"0ec6c79ecf819036e27023b21cc4068f65d93777e0f1bdd489098961cd650d4e\" returns successfully"
Mar 13 11:42:33.768988 systemd[1]: cri-containerd-0ec6c79ecf819036e27023b21cc4068f65d93777e0f1bdd489098961cd650d4e.scope: Deactivated successfully.
Mar 13 11:42:33.773173 sshd[5104]: Accepted publickey for core from 10.200.16.10 port 39556 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:42:33.773664 containerd[1906]: time="2026-03-13T11:42:33.773098741Z" level=info msg="received container exit event container_id:\"0ec6c79ecf819036e27023b21cc4068f65d93777e0f1bdd489098961cd650d4e\" id:\"0ec6c79ecf819036e27023b21cc4068f65d93777e0f1bdd489098961cd650d4e\" pid:5170 exited_at:{seconds:1773402153 nanos:772600482}"
Mar 13 11:42:33.775496 sshd-session[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:42:33.781861 systemd-logind[1879]: New session 26 of user core.
Mar 13 11:42:33.784111 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 13 11:42:33.985188 sshd[5201]: Connection closed by 10.200.16.10 port 39556
Mar 13 11:42:33.985890 sshd-session[5104]: pam_unix(sshd:session): session closed for user core
Mar 13 11:42:33.989552 systemd[1]: sshd@23-10.200.20.31:22-10.200.16.10:39556.service: Deactivated successfully.
Mar 13 11:42:33.991645 systemd[1]: session-26.scope: Deactivated successfully.
Mar 13 11:42:33.992655 systemd-logind[1879]: Session 26 logged out. Waiting for processes to exit.
Mar 13 11:42:33.994606 systemd-logind[1879]: Removed session 26.
Mar 13 11:42:34.072130 systemd[1]: Started sshd@24-10.200.20.31:22-10.200.16.10:39568.service - OpenSSH per-connection server daemon (10.200.16.10:39568).
Mar 13 11:42:34.485296 sshd[5208]: Accepted publickey for core from 10.200.16.10 port 39568 ssh2: RSA SHA256:Apk34A4LT0joUbgGoxh5C2fzRqSYW+1k4GGN9OaA4z0
Mar 13 11:42:34.486344 sshd-session[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 11:42:34.489106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1754657767.mount: Deactivated successfully.
Mar 13 11:42:34.492284 systemd-logind[1879]: New session 27 of user core.
Mar 13 11:42:34.499995 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 13 11:42:34.627737 containerd[1906]: time="2026-03-13T11:42:34.627692690Z" level=info msg="CreateContainer within sandbox \"1cec0087ac0e35bc06cc026c303817cf396ea32fa7bf5ceafe4462f9f61a0218\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 13 11:42:34.648623 containerd[1906]: time="2026-03-13T11:42:34.648580514Z" level=info msg="Container 13dfc7667ab86423c1022f8f0155ec4e236aaeccf8986f29fa10231f50bc1908: CDI devices from CRI Config.CDIDevices: []"
Mar 13 11:42:34.653350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount643039384.mount: Deactivated successfully.
Mar 13 11:42:34.666450 containerd[1906]: time="2026-03-13T11:42:34.666410917Z" level=info msg="CreateContainer within sandbox \"1cec0087ac0e35bc06cc026c303817cf396ea32fa7bf5ceafe4462f9f61a0218\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"13dfc7667ab86423c1022f8f0155ec4e236aaeccf8986f29fa10231f50bc1908\""
Mar 13 11:42:34.667979 containerd[1906]: time="2026-03-13T11:42:34.667056630Z" level=info msg="StartContainer for \"13dfc7667ab86423c1022f8f0155ec4e236aaeccf8986f29fa10231f50bc1908\""
Mar 13 11:42:34.667979 containerd[1906]: time="2026-03-13T11:42:34.667699542Z" level=info msg="connecting to shim 13dfc7667ab86423c1022f8f0155ec4e236aaeccf8986f29fa10231f50bc1908" address="unix:///run/containerd/s/4737063f2dc06fceab31fd21d165eda2d57efc8beb4ba17ce1e9389830819726" protocol=ttrpc version=3
Mar 13 11:42:34.691071 systemd[1]: Started cri-containerd-13dfc7667ab86423c1022f8f0155ec4e236aaeccf8986f29fa10231f50bc1908.scope - libcontainer container 13dfc7667ab86423c1022f8f0155ec4e236aaeccf8986f29fa10231f50bc1908.
Mar 13 11:42:34.732298 containerd[1906]: time="2026-03-13T11:42:34.732249743Z" level=info msg="StartContainer for \"13dfc7667ab86423c1022f8f0155ec4e236aaeccf8986f29fa10231f50bc1908\" returns successfully"
Mar 13 11:42:34.734916 systemd[1]: cri-containerd-13dfc7667ab86423c1022f8f0155ec4e236aaeccf8986f29fa10231f50bc1908.scope: Deactivated successfully.
Mar 13 11:42:34.741130 containerd[1906]: time="2026-03-13T11:42:34.741102762Z" level=info msg="received container exit event container_id:\"13dfc7667ab86423c1022f8f0155ec4e236aaeccf8986f29fa10231f50bc1908\" id:\"13dfc7667ab86423c1022f8f0155ec4e236aaeccf8986f29fa10231f50bc1908\" pid:5231 exited_at:{seconds:1773402154 nanos:736745523}"
Mar 13 11:42:34.757502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13dfc7667ab86423c1022f8f0155ec4e236aaeccf8986f29fa10231f50bc1908-rootfs.mount: Deactivated successfully.
Mar 13 11:42:35.499265 kubelet[3404]: I0313 11:42:35.498855 3404 setters.go:543] "Node became not ready" node="ci-4459.2.101-83511db97f" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T11:42:35Z","lastTransitionTime":"2026-03-13T11:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 13 11:42:35.634163 containerd[1906]: time="2026-03-13T11:42:35.633288295Z" level=info msg="CreateContainer within sandbox \"1cec0087ac0e35bc06cc026c303817cf396ea32fa7bf5ceafe4462f9f61a0218\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 13 11:42:35.652889 containerd[1906]: time="2026-03-13T11:42:35.652847348Z" level=info msg="Container 1bf6ca498d6a432c96150702d4a063ddd87376d726cf8c04e27e4447b26c0980: CDI devices from CRI Config.CDIDevices: []"
Mar 13 11:42:35.671524 containerd[1906]: time="2026-03-13T11:42:35.671421620Z" level=info msg="CreateContainer within sandbox \"1cec0087ac0e35bc06cc026c303817cf396ea32fa7bf5ceafe4462f9f61a0218\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1bf6ca498d6a432c96150702d4a063ddd87376d726cf8c04e27e4447b26c0980\""
Mar 13 11:42:35.672220 containerd[1906]: time="2026-03-13T11:42:35.672134999Z" level=info msg="StartContainer for \"1bf6ca498d6a432c96150702d4a063ddd87376d726cf8c04e27e4447b26c0980\""
Mar 13 11:42:35.673462 containerd[1906]: time="2026-03-13T11:42:35.673441313Z" level=info msg="connecting to shim 1bf6ca498d6a432c96150702d4a063ddd87376d726cf8c04e27e4447b26c0980" address="unix:///run/containerd/s/4737063f2dc06fceab31fd21d165eda2d57efc8beb4ba17ce1e9389830819726" protocol=ttrpc version=3
Mar 13 11:42:35.693993 systemd[1]: Started cri-containerd-1bf6ca498d6a432c96150702d4a063ddd87376d726cf8c04e27e4447b26c0980.scope - libcontainer container 1bf6ca498d6a432c96150702d4a063ddd87376d726cf8c04e27e4447b26c0980.
Mar 13 11:42:35.754042 systemd[1]: cri-containerd-1bf6ca498d6a432c96150702d4a063ddd87376d726cf8c04e27e4447b26c0980.scope: Deactivated successfully.
Mar 13 11:42:35.757572 containerd[1906]: time="2026-03-13T11:42:35.757327486Z" level=info msg="received container exit event container_id:\"1bf6ca498d6a432c96150702d4a063ddd87376d726cf8c04e27e4447b26c0980\" id:\"1bf6ca498d6a432c96150702d4a063ddd87376d726cf8c04e27e4447b26c0980\" pid:5275 exited_at:{seconds:1773402155 nanos:755249871}"
Mar 13 11:42:35.758730 containerd[1906]: time="2026-03-13T11:42:35.758708395Z" level=info msg="StartContainer for \"1bf6ca498d6a432c96150702d4a063ddd87376d726cf8c04e27e4447b26c0980\" returns successfully"
Mar 13 11:42:35.775474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bf6ca498d6a432c96150702d4a063ddd87376d726cf8c04e27e4447b26c0980-rootfs.mount: Deactivated successfully.
Mar 13 11:42:36.636689 containerd[1906]: time="2026-03-13T11:42:36.636638471Z" level=info msg="CreateContainer within sandbox \"1cec0087ac0e35bc06cc026c303817cf396ea32fa7bf5ceafe4462f9f61a0218\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 13 11:42:36.654690 containerd[1906]: time="2026-03-13T11:42:36.652097167Z" level=info msg="Container 529b8eafd7f287fb9e7dcf440caebff41ca2731b659273a688d3cf5ef38b844c: CDI devices from CRI Config.CDIDevices: []"
Mar 13 11:42:36.665450 containerd[1906]: time="2026-03-13T11:42:36.665394005Z" level=info msg="CreateContainer within sandbox \"1cec0087ac0e35bc06cc026c303817cf396ea32fa7bf5ceafe4462f9f61a0218\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"529b8eafd7f287fb9e7dcf440caebff41ca2731b659273a688d3cf5ef38b844c\""
Mar 13 11:42:36.667451 containerd[1906]: time="2026-03-13T11:42:36.667247948Z" level=info msg="StartContainer for \"529b8eafd7f287fb9e7dcf440caebff41ca2731b659273a688d3cf5ef38b844c\""
Mar 13 11:42:36.668123 containerd[1906]: time="2026-03-13T11:42:36.668039122Z" level=info msg="connecting to shim 529b8eafd7f287fb9e7dcf440caebff41ca2731b659273a688d3cf5ef38b844c" address="unix:///run/containerd/s/4737063f2dc06fceab31fd21d165eda2d57efc8beb4ba17ce1e9389830819726" protocol=ttrpc version=3
Mar 13 11:42:36.687001 systemd[1]: Started cri-containerd-529b8eafd7f287fb9e7dcf440caebff41ca2731b659273a688d3cf5ef38b844c.scope - libcontainer container 529b8eafd7f287fb9e7dcf440caebff41ca2731b659273a688d3cf5ef38b844c.
Mar 13 11:42:36.707072 systemd[1]: cri-containerd-529b8eafd7f287fb9e7dcf440caebff41ca2731b659273a688d3cf5ef38b844c.scope: Deactivated successfully.
Mar 13 11:42:36.714538 containerd[1906]: time="2026-03-13T11:42:36.714495333Z" level=info msg="received container exit event container_id:\"529b8eafd7f287fb9e7dcf440caebff41ca2731b659273a688d3cf5ef38b844c\" id:\"529b8eafd7f287fb9e7dcf440caebff41ca2731b659273a688d3cf5ef38b844c\" pid:5315 exited_at:{seconds:1773402156 nanos:709991921}"
Mar 13 11:42:36.717022 containerd[1906]: time="2026-03-13T11:42:36.716999541Z" level=info msg="StartContainer for \"529b8eafd7f287fb9e7dcf440caebff41ca2731b659273a688d3cf5ef38b844c\" returns successfully"
Mar 13 11:42:36.736233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-529b8eafd7f287fb9e7dcf440caebff41ca2731b659273a688d3cf5ef38b844c-rootfs.mount: Deactivated successfully.
Mar 13 11:42:37.377250 kubelet[3404]: E0313 11:42:37.377208 3404 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 13 11:42:37.641424 containerd[1906]: time="2026-03-13T11:42:37.640904793Z" level=info msg="CreateContainer within sandbox \"1cec0087ac0e35bc06cc026c303817cf396ea32fa7bf5ceafe4462f9f61a0218\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 13 11:42:37.666139 containerd[1906]: time="2026-03-13T11:42:37.666094966Z" level=info msg="Container 3c437dc96c6bc6d8cb9e91e4e8a5737a24c283881624274667bd1f6f5dfb6d14: CDI devices from CRI Config.CDIDevices: []"
Mar 13 11:42:37.680056 containerd[1906]: time="2026-03-13T11:42:37.680016307Z" level=info msg="CreateContainer within sandbox \"1cec0087ac0e35bc06cc026c303817cf396ea32fa7bf5ceafe4462f9f61a0218\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3c437dc96c6bc6d8cb9e91e4e8a5737a24c283881624274667bd1f6f5dfb6d14\""
Mar 13 11:42:37.680882 containerd[1906]: time="2026-03-13T11:42:37.680801634Z" level=info msg="StartContainer for \"3c437dc96c6bc6d8cb9e91e4e8a5737a24c283881624274667bd1f6f5dfb6d14\""
Mar 13 11:42:37.681614 containerd[1906]: time="2026-03-13T11:42:37.681573975Z" level=info msg="connecting to shim 3c437dc96c6bc6d8cb9e91e4e8a5737a24c283881624274667bd1f6f5dfb6d14" address="unix:///run/containerd/s/4737063f2dc06fceab31fd21d165eda2d57efc8beb4ba17ce1e9389830819726" protocol=ttrpc version=3
Mar 13 11:42:37.699002 systemd[1]: Started cri-containerd-3c437dc96c6bc6d8cb9e91e4e8a5737a24c283881624274667bd1f6f5dfb6d14.scope - libcontainer container 3c437dc96c6bc6d8cb9e91e4e8a5737a24c283881624274667bd1f6f5dfb6d14.
Mar 13 11:42:37.736238 containerd[1906]: time="2026-03-13T11:42:37.736202412Z" level=info msg="StartContainer for \"3c437dc96c6bc6d8cb9e91e4e8a5737a24c283881624274667bd1f6f5dfb6d14\" returns successfully"
Mar 13 11:42:38.061921 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 13 11:42:38.656781 kubelet[3404]: I0313 11:42:38.656219 3404 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wngjj" podStartSLOduration=5.656206441 podStartE2EDuration="5.656206441s" podCreationTimestamp="2026-03-13 11:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 11:42:38.655999306 +0000 UTC m=+146.717009403" watchObservedRunningTime="2026-03-13 11:42:38.656206441 +0000 UTC m=+146.717216538"
Mar 13 11:42:40.414999 systemd-networkd[1489]: lxc_health: Link UP
Mar 13 11:42:40.427988 systemd-networkd[1489]: lxc_health: Gained carrier
Mar 13 11:42:41.830002 systemd-networkd[1489]: lxc_health: Gained IPv6LL
Mar 13 11:42:47.389966 sshd[5211]: Connection closed by 10.200.16.10 port 39568
Mar 13 11:42:47.390752 sshd-session[5208]: pam_unix(sshd:session): session closed for user core
Mar 13 11:42:47.394754 systemd-logind[1879]: Session 27 logged out. Waiting for processes to exit.
Mar 13 11:42:47.395225 systemd[1]: sshd@24-10.200.20.31:22-10.200.16.10:39568.service: Deactivated successfully.
Mar 13 11:42:47.397268 systemd[1]: session-27.scope: Deactivated successfully.
Mar 13 11:42:47.398481 systemd-logind[1879]: Removed session 27.