Mar 2 13:17:02.209175 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 2 13:17:02.209197 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Mar 2 11:11:01 -00 2026
Mar 2 13:17:02.209205 kernel: KASLR enabled
Mar 2 13:17:02.209211 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 2 13:17:02.209219 kernel: printk: bootconsole [pl11] enabled
Mar 2 13:17:02.209225 kernel: efi: EFI v2.7 by EDK II
Mar 2 13:17:02.209232 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 2 13:17:02.209238 kernel: random: crng init done
Mar 2 13:17:02.209244 kernel: ACPI: Early table checksum verification disabled
Mar 2 13:17:02.209250 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 2 13:17:02.209257 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:17:02.209263 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:17:02.209270 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 2 13:17:02.209277 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:17:02.209284 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:17:02.209291 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:17:02.209297 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:17:02.209305 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:17:02.209312 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:17:02.209318 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 2 13:17:02.209325 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:17:02.209332 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 2 13:17:02.209338 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 2 13:17:02.209345 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 2 13:17:02.209351 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 2 13:17:02.209358 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 2 13:17:02.209364 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 2 13:17:02.209371 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 2 13:17:02.209378 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 2 13:17:02.209385 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 2 13:17:02.209392 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 2 13:17:02.209398 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 2 13:17:02.209405 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 2 13:17:02.209411 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 2 13:17:02.209418 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 2 13:17:02.209424 kernel: Zone ranges:
Mar 2 13:17:02.209430 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 2 13:17:02.209437 kernel: DMA32 empty
Mar 2 13:17:02.209444 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 2 13:17:02.209450 kernel: Movable zone start for each node
Mar 2 13:17:02.209461 kernel: Early memory node ranges
Mar 2 13:17:02.209468 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 2 13:17:02.209475 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 2 13:17:02.209482 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 2 13:17:02.209489 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 2 13:17:02.209497 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 2 13:17:02.209504 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 2 13:17:02.209511 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 2 13:17:02.209518 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 2 13:17:02.209525 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 2 13:17:02.209532 kernel: psci: probing for conduit method from ACPI.
Mar 2 13:17:02.209539 kernel: psci: PSCIv1.1 detected in firmware.
Mar 2 13:17:02.209545 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 2 13:17:02.209552 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 2 13:17:02.209559 kernel: psci: SMC Calling Convention v1.4
Mar 2 13:17:02.209566 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 2 13:17:02.209573 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 2 13:17:02.209581 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 2 13:17:02.209588 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 2 13:17:02.209595 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 2 13:17:02.209602 kernel: Detected PIPT I-cache on CPU0
Mar 2 13:17:02.209609 kernel: CPU features: detected: GIC system register CPU interface
Mar 2 13:17:02.209616 kernel: CPU features: detected: Hardware dirty bit management
Mar 2 13:17:02.211665 kernel: CPU features: detected: Spectre-BHB
Mar 2 13:17:02.211674 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 2 13:17:02.211681 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 2 13:17:02.211688 kernel: CPU features: detected: ARM erratum 1418040
Mar 2 13:17:02.211696 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 2 13:17:02.211707 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 2 13:17:02.211715 kernel: alternatives: applying boot alternatives
Mar 2 13:17:02.211723 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7ecec6e0f4313fe7e6ab44dac0c51edbf0b22765a212833abcec729cd9dc543f
Mar 2 13:17:02.211731 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 2 13:17:02.211738 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 13:17:02.211745 kernel: Fallback order for Node 0: 0
Mar 2 13:17:02.211752 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 2 13:17:02.211759 kernel: Policy zone: Normal
Mar 2 13:17:02.211766 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 13:17:02.211773 kernel: software IO TLB: area num 2.
Mar 2 13:17:02.211780 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 2 13:17:02.211789 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 2 13:17:02.211796 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 2 13:17:02.211803 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 13:17:02.211811 kernel: rcu: RCU event tracing is enabled.
Mar 2 13:17:02.211818 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 2 13:17:02.211825 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 13:17:02.211832 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 13:17:02.211839 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 13:17:02.211846 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 2 13:17:02.211853 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 2 13:17:02.211859 kernel: GICv3: 960 SPIs implemented
Mar 2 13:17:02.211868 kernel: GICv3: 0 Extended SPIs implemented
Mar 2 13:17:02.211875 kernel: Root IRQ handler: gic_handle_irq
Mar 2 13:17:02.211882 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 2 13:17:02.211889 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 2 13:17:02.211896 kernel: ITS: No ITS available, not enabling LPIs
Mar 2 13:17:02.211903 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 2 13:17:02.211910 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 2 13:17:02.211917 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 2 13:17:02.211924 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 2 13:17:02.211931 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 2 13:17:02.211938 kernel: Console: colour dummy device 80x25
Mar 2 13:17:02.211948 kernel: printk: console [tty1] enabled
Mar 2 13:17:02.211955 kernel: ACPI: Core revision 20230628
Mar 2 13:17:02.211962 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 2 13:17:02.211970 kernel: pid_max: default: 32768 minimum: 301
Mar 2 13:17:02.211977 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 2 13:17:02.211984 kernel: landlock: Up and running.
Mar 2 13:17:02.211991 kernel: SELinux: Initializing.
Mar 2 13:17:02.211998 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:17:02.212005 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:17:02.212014 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 2 13:17:02.212021 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 2 13:17:02.212028 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 2 13:17:02.212036 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 2 13:17:02.212043 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 2 13:17:02.212050 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 13:17:02.212057 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 13:17:02.212064 kernel: Remapping and enabling EFI services.
Mar 2 13:17:02.212078 kernel: smp: Bringing up secondary CPUs ...
Mar 2 13:17:02.212085 kernel: Detected PIPT I-cache on CPU1
Mar 2 13:17:02.212093 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 2 13:17:02.212101 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 2 13:17:02.212110 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 2 13:17:02.212117 kernel: smp: Brought up 1 node, 2 CPUs
Mar 2 13:17:02.212125 kernel: SMP: Total of 2 processors activated.
Mar 2 13:17:02.212132 kernel: CPU features: detected: 32-bit EL0 Support
Mar 2 13:17:02.212140 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 2 13:17:02.212149 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 2 13:17:02.212157 kernel: CPU features: detected: CRC32 instructions
Mar 2 13:17:02.212164 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 2 13:17:02.212172 kernel: CPU features: detected: LSE atomic instructions
Mar 2 13:17:02.212179 kernel: CPU features: detected: Privileged Access Never
Mar 2 13:17:02.212187 kernel: CPU: All CPU(s) started at EL1
Mar 2 13:17:02.212194 kernel: alternatives: applying system-wide alternatives
Mar 2 13:17:02.212202 kernel: devtmpfs: initialized
Mar 2 13:17:02.212209 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 13:17:02.212218 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 2 13:17:02.212226 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 13:17:02.212233 kernel: SMBIOS 3.1.0 present.
Mar 2 13:17:02.212241 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 2 13:17:02.212249 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 13:17:02.212256 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 2 13:17:02.212264 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 2 13:17:02.212272 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 2 13:17:02.212279 kernel: audit: initializing netlink subsys (disabled)
Mar 2 13:17:02.212288 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 2 13:17:02.212296 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 13:17:02.212303 kernel: cpuidle: using governor menu
Mar 2 13:17:02.212311 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 2 13:17:02.212319 kernel: ASID allocator initialised with 32768 entries
Mar 2 13:17:02.212326 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 13:17:02.212334 kernel: Serial: AMBA PL011 UART driver
Mar 2 13:17:02.212341 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 2 13:17:02.212349 kernel: Modules: 0 pages in range for non-PLT usage
Mar 2 13:17:02.212358 kernel: Modules: 509008 pages in range for PLT usage
Mar 2 13:17:02.212365 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 13:17:02.212373 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 13:17:02.212380 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 2 13:17:02.212388 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 2 13:17:02.212395 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 13:17:02.212403 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 13:17:02.212410 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 2 13:17:02.212418 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 2 13:17:02.212427 kernel: ACPI: Added _OSI(Module Device)
Mar 2 13:17:02.212435 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 13:17:02.212442 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 13:17:02.212450 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 13:17:02.212457 kernel: ACPI: Interpreter enabled
Mar 2 13:17:02.212465 kernel: ACPI: Using GIC for interrupt routing
Mar 2 13:17:02.212472 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 2 13:17:02.212480 kernel: printk: console [ttyAMA0] enabled
Mar 2 13:17:02.212487 kernel: printk: bootconsole [pl11] disabled
Mar 2 13:17:02.212496 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 2 13:17:02.212504 kernel: iommu: Default domain type: Translated
Mar 2 13:17:02.212512 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 2 13:17:02.212519 kernel: efivars: Registered efivars operations
Mar 2 13:17:02.212526 kernel: vgaarb: loaded
Mar 2 13:17:02.212534 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 2 13:17:02.212541 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 13:17:02.212549 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 13:17:02.212556 kernel: pnp: PnP ACPI init
Mar 2 13:17:02.212565 kernel: pnp: PnP ACPI: found 0 devices
Mar 2 13:17:02.212573 kernel: NET: Registered PF_INET protocol family
Mar 2 13:17:02.212581 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 2 13:17:02.212588 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 2 13:17:02.212596 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 13:17:02.212604 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 13:17:02.212611 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 2 13:17:02.212632 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 2 13:17:02.212641 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:17:02.212650 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:17:02.212658 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 13:17:02.212665 kernel: PCI: CLS 0 bytes, default 64
Mar 2 13:17:02.212673 kernel: kvm [1]: HYP mode not available
Mar 2 13:17:02.212680 kernel: Initialise system trusted keyrings
Mar 2 13:17:02.212688 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 2 13:17:02.212695 kernel: Key type asymmetric registered
Mar 2 13:17:02.212702 kernel: Asymmetric key parser 'x509' registered
Mar 2 13:17:02.212710 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 2 13:17:02.212719 kernel: io scheduler mq-deadline registered
Mar 2 13:17:02.212726 kernel: io scheduler kyber registered
Mar 2 13:17:02.212734 kernel: io scheduler bfq registered
Mar 2 13:17:02.212741 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 2 13:17:02.212749 kernel: thunder_xcv, ver 1.0
Mar 2 13:17:02.212756 kernel: thunder_bgx, ver 1.0
Mar 2 13:17:02.212764 kernel: nicpf, ver 1.0
Mar 2 13:17:02.212771 kernel: nicvf, ver 1.0
Mar 2 13:17:02.212905 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 2 13:17:02.212983 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-02T13:17:01 UTC (1772457421)
Mar 2 13:17:02.212994 kernel: efifb: probing for efifb
Mar 2 13:17:02.213002 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 2 13:17:02.213009 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 2 13:17:02.213017 kernel: efifb: scrolling: redraw
Mar 2 13:17:02.213024 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 2 13:17:02.213032 kernel: Console: switching to colour frame buffer device 128x48
Mar 2 13:17:02.213039 kernel: fb0: EFI VGA frame buffer device
Mar 2 13:17:02.213049 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 2 13:17:02.213056 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 2 13:17:02.213064 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 2 13:17:02.213072 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 2 13:17:02.213079 kernel: watchdog: Hard watchdog permanently disabled
Mar 2 13:17:02.213087 kernel: NET: Registered PF_INET6 protocol family
Mar 2 13:17:02.213094 kernel: Segment Routing with IPv6
Mar 2 13:17:02.213101 kernel: In-situ OAM (IOAM) with IPv6
Mar 2 13:17:02.213109 kernel: NET: Registered PF_PACKET protocol family
Mar 2 13:17:02.213118 kernel: Key type dns_resolver registered
Mar 2 13:17:02.213125 kernel: registered taskstats version 1
Mar 2 13:17:02.213133 kernel: Loading compiled-in X.509 certificates
Mar 2 13:17:02.213140 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 888055ac257926b028c9aac8084c1e2b1bcee773'
Mar 2 13:17:02.213148 kernel: Key type .fscrypt registered
Mar 2 13:17:02.213155 kernel: Key type fscrypt-provisioning registered
Mar 2 13:17:02.213163 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 2 13:17:02.213170 kernel: ima: Allocated hash algorithm: sha1
Mar 2 13:17:02.213178 kernel: ima: No architecture policies found
Mar 2 13:17:02.213187 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 2 13:17:02.213194 kernel: clk: Disabling unused clocks
Mar 2 13:17:02.213202 kernel: Freeing unused kernel memory: 39424K
Mar 2 13:17:02.213209 kernel: Run /init as init process
Mar 2 13:17:02.213216 kernel: with arguments:
Mar 2 13:17:02.213224 kernel: /init
Mar 2 13:17:02.213231 kernel: with environment:
Mar 2 13:17:02.213238 kernel: HOME=/
Mar 2 13:17:02.213246 kernel: TERM=linux
Mar 2 13:17:02.213255 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 13:17:02.213267 systemd[1]: Detected virtualization microsoft.
Mar 2 13:17:02.213275 systemd[1]: Detected architecture arm64.
Mar 2 13:17:02.213283 systemd[1]: Running in initrd.
Mar 2 13:17:02.213290 systemd[1]: No hostname configured, using default hostname.
Mar 2 13:17:02.213298 systemd[1]: Hostname set to .
Mar 2 13:17:02.213307 systemd[1]: Initializing machine ID from random generator.
Mar 2 13:17:02.213317 systemd[1]: Queued start job for default target initrd.target.
Mar 2 13:17:02.213325 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:17:02.213334 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:17:02.213342 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 2 13:17:02.213350 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:17:02.213359 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 2 13:17:02.213367 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 2 13:17:02.213377 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 2 13:17:02.213386 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 2 13:17:02.213395 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:17:02.213403 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:17:02.213411 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:17:02.213419 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:17:02.213427 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:17:02.213435 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:17:02.213443 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:17:02.213452 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:17:02.213461 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 2 13:17:02.213469 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 2 13:17:02.213477 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:17:02.213485 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:17:02.213493 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:17:02.213501 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:17:02.213510 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 2 13:17:02.213520 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:17:02.213528 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 2 13:17:02.213536 systemd[1]: Starting systemd-fsck-usr.service...
Mar 2 13:17:02.213544 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:17:02.213569 systemd-journald[217]: Collecting audit messages is disabled.
Mar 2 13:17:02.213591 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:17:02.213600 systemd-journald[217]: Journal started
Mar 2 13:17:02.213634 systemd-journald[217]: Runtime Journal (/run/log/journal/86fcc8562cab4420b6233fadafa8a0c4) is 8.0M, max 78.5M, 70.5M free.
Mar 2 13:17:02.219208 systemd-modules-load[218]: Inserted module 'overlay'
Mar 2 13:17:02.226908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:17:02.251573 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:17:02.251640 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 2 13:17:02.259449 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 2 13:17:02.271378 kernel: Bridge firewalling registered
Mar 2 13:17:02.264569 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 2 13:17:02.266887 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:17:02.281354 systemd[1]: Finished systemd-fsck-usr.service.
Mar 2 13:17:02.290387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:17:02.298577 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:17:02.315917 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:17:02.323777 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:17:02.350745 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:17:02.362830 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:17:02.373340 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:17:02.383481 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:17:02.388555 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:17:02.405156 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:17:02.425959 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 2 13:17:02.433087 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:17:02.450405 dracut-cmdline[251]: dracut-dracut-053
Mar 2 13:17:02.461684 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7ecec6e0f4313fe7e6ab44dac0c51edbf0b22765a212833abcec729cd9dc543f
Mar 2 13:17:02.488204 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:17:02.506568 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:17:02.517213 systemd-resolved[253]: Positive Trust Anchors:
Mar 2 13:17:02.517222 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:17:02.517253 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:17:02.519463 systemd-resolved[253]: Defaulting to hostname 'linux'.
Mar 2 13:17:02.524733 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:17:02.536785 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:17:02.629645 kernel: SCSI subsystem initialized
Mar 2 13:17:02.638632 kernel: Loading iSCSI transport class v2.0-870.
Mar 2 13:17:02.646647 kernel: iscsi: registered transport (tcp)
Mar 2 13:17:02.663696 kernel: iscsi: registered transport (qla4xxx)
Mar 2 13:17:02.663757 kernel: QLogic iSCSI HBA Driver
Mar 2 13:17:02.698303 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:17:02.711040 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 2 13:17:02.739432 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 2 13:17:02.739467 kernel: device-mapper: uevent: version 1.0.3
Mar 2 13:17:02.744755 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 2 13:17:02.795643 kernel: raid6: neonx8 gen() 15801 MB/s
Mar 2 13:17:02.809627 kernel: raid6: neonx4 gen() 14945 MB/s
Mar 2 13:17:02.828625 kernel: raid6: neonx2 gen() 13262 MB/s
Mar 2 13:17:02.848626 kernel: raid6: neonx1 gen() 10485 MB/s
Mar 2 13:17:02.867624 kernel: raid6: int64x8 gen() 6977 MB/s
Mar 2 13:17:02.886624 kernel: raid6: int64x4 gen() 7365 MB/s
Mar 2 13:17:02.906628 kernel: raid6: int64x2 gen() 6146 MB/s
Mar 2 13:17:02.928799 kernel: raid6: int64x1 gen() 5072 MB/s
Mar 2 13:17:02.928809 kernel: raid6: using algorithm neonx8 gen() 15801 MB/s
Mar 2 13:17:02.951434 kernel: raid6: .... xor() 12045 MB/s, rmw enabled
Mar 2 13:17:02.951444 kernel: raid6: using neon recovery algorithm
Mar 2 13:17:02.962433 kernel: xor: measuring software checksum speed
Mar 2 13:17:02.962456 kernel: 8regs : 19754 MB/sec
Mar 2 13:17:02.965372 kernel: 32regs : 19622 MB/sec
Mar 2 13:17:02.971560 kernel: arm64_neon : 26399 MB/sec
Mar 2 13:17:02.971572 kernel: xor: using function: arm64_neon (26399 MB/sec)
Mar 2 13:17:03.021634 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 2 13:17:03.031406 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:17:03.044755 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:17:03.065722 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Mar 2 13:17:03.070295 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:17:03.085748 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 2 13:17:03.107424 dracut-pre-trigger[455]: rd.md=0: removing MD RAID activation
Mar 2 13:17:03.135812 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:17:03.148895 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 13:17:03.188454 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:17:03.209030 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 2 13:17:03.235787 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:17:03.244852 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:17:03.261037 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:17:03.272786 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 13:17:03.296777 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 2 13:17:03.312649 kernel: hv_vmbus: Vmbus version:5.3
Mar 2 13:17:03.313997 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:17:03.318315 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:17:03.348580 kernel: hv_vmbus: registering driver hv_netvsc
Mar 2 13:17:03.348608 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 2 13:17:03.338899 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:17:03.373274 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 2 13:17:03.373300 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 2 13:17:03.373311 kernel: hv_vmbus: registering driver hv_storvsc
Mar 2 13:17:03.356604 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:17:03.384711 kernel: scsi host1: storvsc_host_t
Mar 2 13:17:03.356795 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:17:03.407073 kernel: scsi host0: storvsc_host_t
Mar 2 13:17:03.407233 kernel: hv_vmbus: registering driver hid_hyperv
Mar 2 13:17:03.407254 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 2 13:17:03.407265 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 2 13:17:03.388502 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:17:03.422280 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 2 13:17:03.422301 kernel: hv_netvsc 7ced8d87-5586-7ced-8d87-55867ced8d87 eth0: VF slot 1 added
Mar 2 13:17:03.437555 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 2 13:17:03.437763 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 2 13:17:03.441017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:17:03.448240 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:17:03.475224 kernel: PTP clock support registered
Mar 2 13:17:03.474822 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:17:03.494925 kernel: hv_utils: Registering HyperV Utility Driver
Mar 2 13:17:03.494955 kernel: hv_vmbus: registering driver hv_utils
Mar 2 13:17:03.474965 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:17:03.946733 kernel: hv_utils: Shutdown IC version 3.2
Mar 2 13:17:03.946756 kernel: hv_utils: Heartbeat IC version 3.0
Mar 2 13:17:03.946766 kernel: hv_utils: TimeSync IC version 4.0
Mar 2 13:17:03.946776 kernel: hv_vmbus: registering driver hv_pci
Mar 2 13:17:03.504984 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:17:03.945337 systemd-resolved[253]: Clock change detected. Flushing caches.
Mar 2 13:17:03.972137 kernel: hv_pci 74b4f976-b7f8-47d4-9ccc-da24345b6e55: PCI VMBus probing: Using version 0x10004
Mar 2 13:17:03.972349 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 2 13:17:03.975808 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 2 13:17:03.979232 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 2 13:17:03.991475 kernel: hv_pci 74b4f976-b7f8-47d4-9ccc-da24345b6e55: PCI host bridge to bus b7f8:00
Mar 2 13:17:03.991696 kernel: pci_bus b7f8:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 2 13:17:03.986974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:17:04.022757 kernel: pci_bus b7f8:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 2 13:17:04.022954 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 2 13:17:04.023081 kernel: pci b7f8:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 2 13:17:04.023106 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 2 13:17:04.023212 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 2 13:17:04.023355 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 2 13:17:04.023994 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 2 13:17:04.040923 kernel: pci b7f8:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 2 13:17:04.030600 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:17:04.057679 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 2 13:17:04.057700 kernel: pci b7f8:00:02.0: enabling Extended Tags
Mar 2 13:17:04.057728 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 2 13:17:04.062344 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#27 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 2 13:17:04.086277 kernel: pci b7f8:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at b7f8:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 2 13:17:04.100122 kernel: pci_bus b7f8:00: busn_res: [bus 00-ff] end is updated to 00
Mar 2 13:17:04.100374 kernel: pci b7f8:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 2 13:17:04.100595 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:17:04.138245 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#190 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 2 13:17:04.166994 kernel: mlx5_core b7f8:00:02.0: enabling device (0000 -> 0002)
Mar 2 13:17:04.173231 kernel: mlx5_core b7f8:00:02.0: firmware version: 16.30.5026
Mar 2 13:17:04.365238 kernel: hv_netvsc 7ced8d87-5586-7ced-8d87-55867ced8d87 eth0: VF registering: eth1
Mar 2 13:17:04.365435 kernel: mlx5_core b7f8:00:02.0 eth1: joined to eth0
Mar 2 13:17:04.374390 kernel: mlx5_core b7f8:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 2 13:17:04.384251 kernel: mlx5_core b7f8:00:02.0 enP47096s1: renamed from eth1
Mar 2 13:17:04.642561 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 2 13:17:04.671248 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (494)
Mar 2 13:17:04.686827 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 2 13:17:04.705080 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 2 13:17:04.758234 kernel: BTRFS: device fsid 0d0ab669-47ba-4267-b368-82e952673c8e devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (488)
Mar 2 13:17:04.772290 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 2 13:17:04.777829 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 2 13:17:04.802469 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 2 13:17:04.826573 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 2 13:17:04.834229 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 2 13:17:04.843228 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 2 13:17:05.846782 disk-uuid[613]: The operation has completed successfully.
Mar 2 13:17:05.852766 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 2 13:17:05.923067 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 2 13:17:05.924265 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 2 13:17:05.956346 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 2 13:17:05.966321 sh[726]: Success
Mar 2 13:17:05.997408 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 2 13:17:06.276518 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 2 13:17:06.284358 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 2 13:17:06.291245 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 2 13:17:06.324014 kernel: BTRFS info (device dm-0): first mount of filesystem 0d0ab669-47ba-4267-b368-82e952673c8e
Mar 2 13:17:06.324066 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 2 13:17:06.329578 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 2 13:17:06.333759 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 2 13:17:06.337272 kernel: BTRFS info (device dm-0): using free space tree
Mar 2 13:17:06.764793 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 2 13:17:06.768951 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 2 13:17:06.786461 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 2 13:17:06.793407 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 2 13:17:06.832394 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3
Mar 2 13:17:06.832449 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 2 13:17:06.836068 kernel: BTRFS info (device sda6): using free space tree
Mar 2 13:17:06.882237 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 2 13:17:06.886171 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 13:17:06.907338 kernel: BTRFS info (device sda6): last unmount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3
Mar 2 13:17:06.907557 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:17:06.920202 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 2 13:17:06.928471 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 2 13:17:06.938036 systemd-networkd[907]: lo: Link UP
Mar 2 13:17:06.938040 systemd-networkd[907]: lo: Gained carrier
Mar 2 13:17:06.940164 systemd-networkd[907]: Enumeration completed
Mar 2 13:17:06.941418 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 2 13:17:06.943610 systemd-networkd[907]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:17:06.943613 systemd-networkd[907]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 13:17:06.961045 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:17:06.975925 systemd[1]: Reached target network.target - Network.
Mar 2 13:17:07.037233 kernel: mlx5_core b7f8:00:02.0 enP47096s1: Link up
Mar 2 13:17:07.073250 kernel: hv_netvsc 7ced8d87-5586-7ced-8d87-55867ced8d87 eth0: Data path switched to VF: enP47096s1
Mar 2 13:17:07.074135 systemd-networkd[907]: enP47096s1: Link UP
Mar 2 13:17:07.074243 systemd-networkd[907]: eth0: Link UP
Mar 2 13:17:07.074346 systemd-networkd[907]: eth0: Gained carrier
Mar 2 13:17:07.074355 systemd-networkd[907]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:17:07.094800 systemd-networkd[907]: enP47096s1: Gained carrier
Mar 2 13:17:07.107260 systemd-networkd[907]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 2 13:17:07.932954 ignition[911]: Ignition 2.19.0
Mar 2 13:17:07.932971 ignition[911]: Stage: fetch-offline
Mar 2 13:17:07.937697 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:17:07.933008 ignition[911]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:17:07.933017 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:17:07.933129 ignition[911]: parsed url from cmdline: ""
Mar 2 13:17:07.933133 ignition[911]: no config URL provided
Mar 2 13:17:07.933137 ignition[911]: reading system config file "/usr/lib/ignition/user.ign"
Mar 2 13:17:07.933147 ignition[911]: no config at "/usr/lib/ignition/user.ign"
Mar 2 13:17:07.961494 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 2 13:17:07.933153 ignition[911]: failed to fetch config: resource requires networking
Mar 2 13:17:07.935941 ignition[911]: Ignition finished successfully
Mar 2 13:17:07.981783 ignition[919]: Ignition 2.19.0
Mar 2 13:17:07.981790 ignition[919]: Stage: fetch
Mar 2 13:17:07.982010 ignition[919]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:17:07.982023 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:17:07.982137 ignition[919]: parsed url from cmdline: ""
Mar 2 13:17:07.982141 ignition[919]: no config URL provided
Mar 2 13:17:07.982146 ignition[919]: reading system config file "/usr/lib/ignition/user.ign"
Mar 2 13:17:07.982153 ignition[919]: no config at "/usr/lib/ignition/user.ign"
Mar 2 13:17:07.982178 ignition[919]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 2 13:17:08.118451 ignition[919]: GET result: OK
Mar 2 13:17:08.118521 ignition[919]: config has been read from IMDS userdata
Mar 2 13:17:08.118566 ignition[919]: parsing config with SHA512: c37460fba49cf812d4d520978077b888417b24eecf33e1f824e963795be33d87d13ca7cf42ee4974547eb053ef3cec5e854d2edabe687c174e999473bbcf3023
Mar 2 13:17:08.122585 unknown[919]: fetched base config from "system"
Mar 2 13:17:08.123045 ignition[919]: fetch: fetch complete
Mar 2 13:17:08.122593 unknown[919]: fetched base config from "system"
Mar 2 13:17:08.123052 ignition[919]: fetch: fetch passed
Mar 2 13:17:08.122598 unknown[919]: fetched user config from "azure"
Mar 2 13:17:08.123118 ignition[919]: Ignition finished successfully
Mar 2 13:17:08.131015 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 2 13:17:08.153500 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 2 13:17:08.175679 ignition[925]: Ignition 2.19.0
Mar 2 13:17:08.179954 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 2 13:17:08.175690 ignition[925]: Stage: kargs
Mar 2 13:17:08.175859 ignition[925]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:17:08.175868 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:17:08.176906 ignition[925]: kargs: kargs passed
Mar 2 13:17:08.198378 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 2 13:17:08.176961 ignition[925]: Ignition finished successfully
Mar 2 13:17:08.214407 ignition[931]: Ignition 2.19.0
Mar 2 13:17:08.220263 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 2 13:17:08.214415 ignition[931]: Stage: disks
Mar 2 13:17:08.226026 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 2 13:17:08.214624 ignition[931]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:17:08.235226 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 2 13:17:08.214635 ignition[931]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:17:08.242926 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:17:08.215686 ignition[931]: disks: disks passed
Mar 2 13:17:08.251923 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:17:08.215734 ignition[931]: Ignition finished successfully
Mar 2 13:17:08.260064 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:17:08.281489 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 2 13:17:08.286058 systemd-networkd[907]: eth0: Gained IPv6LL
Mar 2 13:17:08.364669 systemd-fsck[940]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 2 13:17:08.375904 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 2 13:17:08.393476 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 2 13:17:08.454270 kernel: EXT4-fs (sda9): mounted filesystem a5f5c21d-8a27-4a94-875f-5735c39d000b r/w with ordered data mode. Quota mode: none.
Mar 2 13:17:08.454838 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 2 13:17:08.458974 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 2 13:17:08.501313 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:17:08.520229 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (951)
Mar 2 13:17:08.530788 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3
Mar 2 13:17:08.530801 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 2 13:17:08.535196 kernel: BTRFS info (device sda6): using free space tree
Mar 2 13:17:08.542588 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 2 13:17:08.551233 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 2 13:17:08.551410 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 2 13:17:08.562406 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 2 13:17:08.572231 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:17:08.578900 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:17:08.586521 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 2 13:17:08.604468 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 2 13:17:09.214662 coreos-metadata[968]: Mar 02 13:17:09.214 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 2 13:17:09.223708 coreos-metadata[968]: Mar 02 13:17:09.223 INFO Fetch successful
Mar 2 13:17:09.223708 coreos-metadata[968]: Mar 02 13:17:09.223 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 2 13:17:09.237855 coreos-metadata[968]: Mar 02 13:17:09.235 INFO Fetch successful
Mar 2 13:17:09.254265 coreos-metadata[968]: Mar 02 13:17:09.254 INFO wrote hostname ci-4081.3.101-5317e0e64c to /sysroot/etc/hostname
Mar 2 13:17:09.262032 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 2 13:17:09.432948 initrd-setup-root[980]: cut: /sysroot/etc/passwd: No such file or directory
Mar 2 13:17:09.477193 initrd-setup-root[987]: cut: /sysroot/etc/group: No such file or directory
Mar 2 13:17:09.485797 initrd-setup-root[994]: cut: /sysroot/etc/shadow: No such file or directory
Mar 2 13:17:09.492440 initrd-setup-root[1001]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 2 13:17:10.747605 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 2 13:17:10.762714 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 2 13:17:10.772395 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 2 13:17:10.789767 kernel: BTRFS info (device sda6): last unmount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3
Mar 2 13:17:10.784876 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 2 13:17:10.814924 ignition[1069]: INFO : Ignition 2.19.0
Mar 2 13:17:10.814924 ignition[1069]: INFO : Stage: mount
Mar 2 13:17:10.823596 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:17:10.823596 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:17:10.823596 ignition[1069]: INFO : mount: mount passed
Mar 2 13:17:10.823596 ignition[1069]: INFO : Ignition finished successfully
Mar 2 13:17:10.823302 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 2 13:17:10.847891 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 2 13:17:10.857242 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 2 13:17:10.874367 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:17:10.901390 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1080)
Mar 2 13:17:10.901435 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3
Mar 2 13:17:10.906614 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 2 13:17:10.910191 kernel: BTRFS info (device sda6): using free space tree
Mar 2 13:17:10.917228 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 2 13:17:10.919551 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:17:10.946339 ignition[1098]: INFO : Ignition 2.19.0
Mar 2 13:17:10.946339 ignition[1098]: INFO : Stage: files
Mar 2 13:17:10.953072 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:17:10.953072 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:17:10.953072 ignition[1098]: DEBUG : files: compiled without relabeling support, skipping
Mar 2 13:17:10.953072 ignition[1098]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 2 13:17:10.953072 ignition[1098]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 2 13:17:10.998034 ignition[1098]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 2 13:17:11.004128 ignition[1098]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 2 13:17:11.004128 ignition[1098]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 2 13:17:11.000732 unknown[1098]: wrote ssh authorized keys file for user: core
Mar 2 13:17:11.019773 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 2 13:17:11.019773 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 2 13:17:11.019773 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 2 13:17:11.019773 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 2 13:17:11.068393 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 2 13:17:11.187185 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 2 13:17:11.187185 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 2 13:17:11.608708 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 2 13:17:11.867526 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 2 13:17:11.867526 ignition[1098]: INFO : files: op(c): [started] processing unit "containerd.service"
Mar 2 13:17:11.882822 ignition[1098]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 2 13:17:11.893526 ignition[1098]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 2 13:17:11.893526 ignition[1098]: INFO : files: op(c): [finished] processing unit "containerd.service"
Mar 2 13:17:11.893526 ignition[1098]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Mar 2 13:17:11.893526 ignition[1098]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:17:11.893526 ignition[1098]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:17:11.893526 ignition[1098]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Mar 2 13:17:11.893526 ignition[1098]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 2 13:17:11.893526 ignition[1098]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 2 13:17:11.893526 ignition[1098]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:17:11.893526 ignition[1098]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:17:11.893526 ignition[1098]: INFO : files: files passed
Mar 2 13:17:11.893526 ignition[1098]: INFO : Ignition finished successfully
Mar 2 13:17:11.904269 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 2 13:17:11.938637 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 2 13:17:11.946433 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 2 13:17:12.033075 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:17:12.033075 initrd-setup-root-after-ignition[1124]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:17:11.962432 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 2 13:17:12.052087 initrd-setup-root-after-ignition[1128]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:17:11.962528 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 2 13:17:11.972614 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:17:11.982045 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 2 13:17:12.009773 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 2 13:17:12.050712 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 2 13:17:12.050826 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 2 13:17:12.058974 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 2 13:17:12.071207 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 2 13:17:12.080951 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 2 13:17:12.101474 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 2 13:17:12.140054 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:17:12.154467 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 2 13:17:12.172059 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 2 13:17:12.173251 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 2 13:17:12.182146 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:17:12.192926 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:17:12.203302 systemd[1]: Stopped target timers.target - Timer Units.
Mar 2 13:17:12.212056 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 2 13:17:12.212127 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:17:12.226466 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 2 13:17:12.236132 systemd[1]: Stopped target basic.target - Basic System.
Mar 2 13:17:12.244716 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 2 13:17:12.253400 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:17:12.263380 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 2 13:17:12.273202 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 2 13:17:12.282727 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:17:12.293045 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 2 13:17:12.303050 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 2 13:17:12.312117 systemd[1]: Stopped target swap.target - Swaps.
Mar 2 13:17:12.320272 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 2 13:17:12.320344 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:17:12.333023 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:17:12.338019 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:17:12.348125 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 2 13:17:12.352763 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:17:12.358818 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 2 13:17:12.358878 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:17:12.374424 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 2 13:17:12.374485 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:17:12.385847 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 2 13:17:12.385946 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 2 13:17:12.394733 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 2 13:17:12.394791 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 2 13:17:12.419362 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 2 13:17:12.463429 ignition[1151]: INFO : Ignition 2.19.0
Mar 2 13:17:12.463429 ignition[1151]: INFO : Stage: umount
Mar 2 13:17:12.463429 ignition[1151]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:17:12.463429 ignition[1151]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:17:12.434965 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 2 13:17:12.497759 ignition[1151]: INFO : umount: umount passed
Mar 2 13:17:12.497759 ignition[1151]: INFO : Ignition finished successfully
Mar 2 13:17:12.444961 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 2 13:17:12.445033 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:17:12.457270 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 2 13:17:12.457343 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:17:12.469610 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 2 13:17:12.470158 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 2 13:17:12.472322 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 2 13:17:12.479031 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 2 13:17:12.479177 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 2 13:17:12.488274 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 2 13:17:12.488335 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 2 13:17:12.502100 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 2 13:17:12.502154 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 2 13:17:12.509753 systemd[1]: Stopped target network.target - Network.
Mar 2 13:17:12.519985 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 2 13:17:12.520067 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:17:12.529817 systemd[1]: Stopped target paths.target - Path Units. Mar 2 13:17:12.539661 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 2 13:17:12.549243 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:17:12.557828 systemd[1]: Stopped target slices.target - Slice Units. Mar 2 13:17:12.567896 systemd[1]: Stopped target sockets.target - Socket Units. Mar 2 13:17:12.576114 systemd[1]: iscsid.socket: Deactivated successfully. Mar 2 13:17:12.576164 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 13:17:12.584900 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 2 13:17:12.584977 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 13:17:12.593764 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 2 13:17:12.593811 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 2 13:17:12.602142 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 2 13:17:12.602177 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 2 13:17:12.611915 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 2 13:17:12.621365 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 2 13:17:12.644256 systemd-networkd[907]: eth0: DHCPv6 lease lost Mar 2 13:17:12.645789 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 2 13:17:12.645940 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 2 13:17:12.657012 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 2 13:17:12.657120 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 2 13:17:12.668047 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 2 13:17:12.668098 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Mar 2 13:17:12.690455 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 2 13:17:12.698365 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 2 13:17:12.844521 kernel: hv_netvsc 7ced8d87-5586-7ced-8d87-55867ced8d87 eth0: Data path switched from VF: enP47096s1 Mar 2 13:17:12.698431 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 13:17:12.710850 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 2 13:17:12.710909 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:17:12.718925 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 2 13:17:12.718975 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 2 13:17:12.728179 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 2 13:17:12.728232 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 13:17:12.737919 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:17:12.753396 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 2 13:17:12.753493 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 2 13:17:12.777640 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 2 13:17:12.777763 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 13:17:12.788024 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 2 13:17:12.788128 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 2 13:17:12.796319 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 2 13:17:12.796378 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 13:17:12.805392 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 2 13:17:12.805448 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Mar 2 13:17:12.819779 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 2 13:17:12.819834 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 2 13:17:12.844571 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 2 13:17:12.844630 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:17:12.855566 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 2 13:17:12.855612 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 2 13:17:12.874392 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 2 13:17:12.888280 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 2 13:17:12.888347 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 13:17:12.897155 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 2 13:17:12.897197 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 13:17:12.908327 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 2 13:17:12.908377 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 13:17:12.919639 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:17:12.919680 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:17:12.929477 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 2 13:17:12.929602 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 2 13:17:12.938779 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 2 13:17:12.940938 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 2 13:17:12.947771 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Mar 2 13:17:12.972483 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 2 13:17:13.237671 systemd[1]: Switching root. Mar 2 13:17:13.265715 systemd-journald[217]: Journal stopped Mar 2 13:17:02.209318 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Mar 2 13:17:02.209325 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 2 13:17:02.209332 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Mar 2 13:17:02.209338 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Mar 2 13:17:02.209345 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Mar 2 13:17:02.209351 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Mar 2 13:17:02.209358 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Mar 2 13:17:02.209364 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Mar 2 13:17:02.209371 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Mar 2 13:17:02.209378 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Mar 2 13:17:02.209385 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Mar 2 13:17:02.209392 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Mar 2 13:17:02.209398 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Mar 2 13:17:02.209405 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Mar 2 13:17:02.209411 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Mar 2 13:17:02.209418 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Mar 2 13:17:02.209424 kernel: Zone ranges: Mar 2 13:17:02.209430 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Mar 2 13:17:02.209437 kernel: DMA32 empty Mar 2 13:17:02.209444 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Mar 2 13:17:02.209450 kernel: Movable zone start for each node Mar 2 13:17:02.209461 kernel: Early memory node ranges Mar 2 13:17:02.209468 kernel: node 0: 
[mem 0x0000000000000000-0x00000000007fffff] Mar 2 13:17:02.209475 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Mar 2 13:17:02.209482 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Mar 2 13:17:02.209489 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Mar 2 13:17:02.209497 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Mar 2 13:17:02.209504 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Mar 2 13:17:02.209511 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Mar 2 13:17:02.209518 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Mar 2 13:17:02.209525 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Mar 2 13:17:02.209532 kernel: psci: probing for conduit method from ACPI. Mar 2 13:17:02.209539 kernel: psci: PSCIv1.1 detected in firmware. Mar 2 13:17:02.209545 kernel: psci: Using standard PSCI v0.2 function IDs Mar 2 13:17:02.209552 kernel: psci: MIGRATE_INFO_TYPE not supported. Mar 2 13:17:02.209559 kernel: psci: SMC Calling Convention v1.4 Mar 2 13:17:02.209566 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Mar 2 13:17:02.209573 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Mar 2 13:17:02.209581 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880 Mar 2 13:17:02.209588 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096 Mar 2 13:17:02.209595 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 2 13:17:02.209602 kernel: Detected PIPT I-cache on CPU0 Mar 2 13:17:02.209609 kernel: CPU features: detected: GIC system register CPU interface Mar 2 13:17:02.209616 kernel: CPU features: detected: Hardware dirty bit management Mar 2 13:17:02.211665 kernel: CPU features: detected: Spectre-BHB Mar 2 13:17:02.211674 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 2 13:17:02.211681 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 2 13:17:02.211688 kernel: CPU features: detected: ARM erratum 1418040 Mar 2 
13:17:02.211696 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Mar 2 13:17:02.211707 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 2 13:17:02.211715 kernel: alternatives: applying boot alternatives Mar 2 13:17:02.211723 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7ecec6e0f4313fe7e6ab44dac0c51edbf0b22765a212833abcec729cd9dc543f Mar 2 13:17:02.211731 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 2 13:17:02.211738 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 2 13:17:02.211745 kernel: Fallback order for Node 0: 0 Mar 2 13:17:02.211752 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Mar 2 13:17:02.211759 kernel: Policy zone: Normal Mar 2 13:17:02.211766 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 2 13:17:02.211773 kernel: software IO TLB: area num 2. Mar 2 13:17:02.211780 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Mar 2 13:17:02.211789 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Mar 2 13:17:02.211796 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 2 13:17:02.211803 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 2 13:17:02.211811 kernel: rcu: RCU event tracing is enabled. Mar 2 13:17:02.211818 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 2 13:17:02.211825 kernel: Trampoline variant of Tasks RCU enabled. Mar 2 13:17:02.211832 kernel: Tracing variant of Tasks RCU enabled. 
Mar 2 13:17:02.211839 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 2 13:17:02.211846 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 2 13:17:02.211853 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 2 13:17:02.211859 kernel: GICv3: 960 SPIs implemented Mar 2 13:17:02.211868 kernel: GICv3: 0 Extended SPIs implemented Mar 2 13:17:02.211875 kernel: Root IRQ handler: gic_handle_irq Mar 2 13:17:02.211882 kernel: GICv3: GICv3 features: 16 PPIs, RSS Mar 2 13:17:02.211889 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Mar 2 13:17:02.211896 kernel: ITS: No ITS available, not enabling LPIs Mar 2 13:17:02.211903 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 2 13:17:02.211910 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 2 13:17:02.211917 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 2 13:17:02.211924 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 2 13:17:02.211931 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 2 13:17:02.211938 kernel: Console: colour dummy device 80x25 Mar 2 13:17:02.211948 kernel: printk: console [tty1] enabled Mar 2 13:17:02.211955 kernel: ACPI: Core revision 20230628 Mar 2 13:17:02.211962 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 2 13:17:02.211970 kernel: pid_max: default: 32768 minimum: 301 Mar 2 13:17:02.211977 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 2 13:17:02.211984 kernel: landlock: Up and running. Mar 2 13:17:02.211991 kernel: SELinux: Initializing. 
Mar 2 13:17:02.211998 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 2 13:17:02.212005 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 2 13:17:02.212014 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 2 13:17:02.212021 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 2 13:17:02.212028 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Mar 2 13:17:02.212036 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0 Mar 2 13:17:02.212043 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 2 13:17:02.212050 kernel: rcu: Hierarchical SRCU implementation. Mar 2 13:17:02.212057 kernel: rcu: Max phase no-delay instances is 400. Mar 2 13:17:02.212064 kernel: Remapping and enabling EFI services. Mar 2 13:17:02.212078 kernel: smp: Bringing up secondary CPUs ... Mar 2 13:17:02.212085 kernel: Detected PIPT I-cache on CPU1 Mar 2 13:17:02.212093 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Mar 2 13:17:02.212101 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 2 13:17:02.212110 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 2 13:17:02.212117 kernel: smp: Brought up 1 node, 2 CPUs Mar 2 13:17:02.212125 kernel: SMP: Total of 2 processors activated. 
Mar 2 13:17:02.212132 kernel: CPU features: detected: 32-bit EL0 Support Mar 2 13:17:02.212140 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Mar 2 13:17:02.212149 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 2 13:17:02.212157 kernel: CPU features: detected: CRC32 instructions Mar 2 13:17:02.212164 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 2 13:17:02.212172 kernel: CPU features: detected: LSE atomic instructions Mar 2 13:17:02.212179 kernel: CPU features: detected: Privileged Access Never Mar 2 13:17:02.212187 kernel: CPU: All CPU(s) started at EL1 Mar 2 13:17:02.212194 kernel: alternatives: applying system-wide alternatives Mar 2 13:17:02.212202 kernel: devtmpfs: initialized Mar 2 13:17:02.212209 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 2 13:17:02.212218 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 2 13:17:02.212226 kernel: pinctrl core: initialized pinctrl subsystem Mar 2 13:17:02.212233 kernel: SMBIOS 3.1.0 present. 
Mar 2 13:17:02.212241 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Mar 2 13:17:02.212249 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 2 13:17:02.212256 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 2 13:17:02.212264 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 2 13:17:02.212272 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 2 13:17:02.212279 kernel: audit: initializing netlink subsys (disabled) Mar 2 13:17:02.212288 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Mar 2 13:17:02.212296 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 2 13:17:02.212303 kernel: cpuidle: using governor menu Mar 2 13:17:02.212311 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 2 13:17:02.212319 kernel: ASID allocator initialised with 32768 entries Mar 2 13:17:02.212326 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 2 13:17:02.212334 kernel: Serial: AMBA PL011 UART driver Mar 2 13:17:02.212341 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 2 13:17:02.212349 kernel: Modules: 0 pages in range for non-PLT usage Mar 2 13:17:02.212358 kernel: Modules: 509008 pages in range for PLT usage Mar 2 13:17:02.212365 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 2 13:17:02.212373 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 2 13:17:02.212380 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 2 13:17:02.212388 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 2 13:17:02.212395 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 2 13:17:02.212403 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 2 13:17:02.212410 kernel: HugeTLB: registered 64.0 KiB page 
size, pre-allocated 0 pages Mar 2 13:17:02.212418 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 2 13:17:02.212427 kernel: ACPI: Added _OSI(Module Device) Mar 2 13:17:02.212435 kernel: ACPI: Added _OSI(Processor Device) Mar 2 13:17:02.212442 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 2 13:17:02.212450 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 2 13:17:02.212457 kernel: ACPI: Interpreter enabled Mar 2 13:17:02.212465 kernel: ACPI: Using GIC for interrupt routing Mar 2 13:17:02.212472 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Mar 2 13:17:02.212480 kernel: printk: console [ttyAMA0] enabled Mar 2 13:17:02.212487 kernel: printk: bootconsole [pl11] disabled Mar 2 13:17:02.212496 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Mar 2 13:17:02.212504 kernel: iommu: Default domain type: Translated Mar 2 13:17:02.212512 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 2 13:17:02.212519 kernel: efivars: Registered efivars operations Mar 2 13:17:02.212526 kernel: vgaarb: loaded Mar 2 13:17:02.212534 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 2 13:17:02.212541 kernel: VFS: Disk quotas dquot_6.6.0 Mar 2 13:17:02.212549 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 2 13:17:02.212556 kernel: pnp: PnP ACPI init Mar 2 13:17:02.212565 kernel: pnp: PnP ACPI: found 0 devices Mar 2 13:17:02.212573 kernel: NET: Registered PF_INET protocol family Mar 2 13:17:02.212581 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 2 13:17:02.212588 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 2 13:17:02.212596 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 2 13:17:02.212604 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 2 13:17:02.212611 
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 2 13:17:02.212632 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 2 13:17:02.212641 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 2 13:17:02.212650 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 2 13:17:02.212658 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 2 13:17:02.212665 kernel: PCI: CLS 0 bytes, default 64 Mar 2 13:17:02.212673 kernel: kvm [1]: HYP mode not available Mar 2 13:17:02.212680 kernel: Initialise system trusted keyrings Mar 2 13:17:02.212688 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 2 13:17:02.212695 kernel: Key type asymmetric registered Mar 2 13:17:02.212702 kernel: Asymmetric key parser 'x509' registered Mar 2 13:17:02.212710 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 2 13:17:02.212719 kernel: io scheduler mq-deadline registered Mar 2 13:17:02.212726 kernel: io scheduler kyber registered Mar 2 13:17:02.212734 kernel: io scheduler bfq registered Mar 2 13:17:02.212741 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 2 13:17:02.212749 kernel: thunder_xcv, ver 1.0 Mar 2 13:17:02.212756 kernel: thunder_bgx, ver 1.0 Mar 2 13:17:02.212764 kernel: nicpf, ver 1.0 Mar 2 13:17:02.212771 kernel: nicvf, ver 1.0 Mar 2 13:17:02.212905 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 2 13:17:02.212983 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-02T13:17:01 UTC (1772457421) Mar 2 13:17:02.212994 kernel: efifb: probing for efifb Mar 2 13:17:02.213002 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 2 13:17:02.213009 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 2 13:17:02.213017 kernel: efifb: scrolling: redraw Mar 2 13:17:02.213024 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 2 13:17:02.213032 kernel: Console: switching to colour 
frame buffer device 128x48 Mar 2 13:17:02.213039 kernel: fb0: EFI VGA frame buffer device Mar 2 13:17:02.213049 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Mar 2 13:17:02.213056 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 2 13:17:02.213064 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Mar 2 13:17:02.213072 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 2 13:17:02.213079 kernel: watchdog: Hard watchdog permanently disabled Mar 2 13:17:02.213087 kernel: NET: Registered PF_INET6 protocol family Mar 2 13:17:02.213094 kernel: Segment Routing with IPv6 Mar 2 13:17:02.213101 kernel: In-situ OAM (IOAM) with IPv6 Mar 2 13:17:02.213109 kernel: NET: Registered PF_PACKET protocol family Mar 2 13:17:02.213118 kernel: Key type dns_resolver registered Mar 2 13:17:02.213125 kernel: registered taskstats version 1 Mar 2 13:17:02.213133 kernel: Loading compiled-in X.509 certificates Mar 2 13:17:02.213140 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 888055ac257926b028c9aac8084c1e2b1bcee773' Mar 2 13:17:02.213148 kernel: Key type .fscrypt registered Mar 2 13:17:02.213155 kernel: Key type fscrypt-provisioning registered Mar 2 13:17:02.213163 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 2 13:17:02.213170 kernel: ima: Allocated hash algorithm: sha1 Mar 2 13:17:02.213178 kernel: ima: No architecture policies found Mar 2 13:17:02.213187 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 2 13:17:02.213194 kernel: clk: Disabling unused clocks Mar 2 13:17:02.213202 kernel: Freeing unused kernel memory: 39424K Mar 2 13:17:02.213209 kernel: Run /init as init process Mar 2 13:17:02.213216 kernel: with arguments: Mar 2 13:17:02.213224 kernel: /init Mar 2 13:17:02.213231 kernel: with environment: Mar 2 13:17:02.213238 kernel: HOME=/ Mar 2 13:17:02.213246 kernel: TERM=linux Mar 2 13:17:02.213255 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 2 13:17:02.213267 systemd[1]: Detected virtualization microsoft. Mar 2 13:17:02.213275 systemd[1]: Detected architecture arm64. Mar 2 13:17:02.213283 systemd[1]: Running in initrd. Mar 2 13:17:02.213290 systemd[1]: No hostname configured, using default hostname. Mar 2 13:17:02.213298 systemd[1]: Hostname set to . Mar 2 13:17:02.213307 systemd[1]: Initializing machine ID from random generator. Mar 2 13:17:02.213317 systemd[1]: Queued start job for default target initrd.target. Mar 2 13:17:02.213325 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 13:17:02.213334 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:17:02.213342 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 2 13:17:02.213350 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 2 13:17:02.213359 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 2 13:17:02.213367 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 2 13:17:02.213377 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 2 13:17:02.213386 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 2 13:17:02.213395 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 13:17:02.213403 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 2 13:17:02.213411 systemd[1]: Reached target paths.target - Path Units. Mar 2 13:17:02.213419 systemd[1]: Reached target slices.target - Slice Units. Mar 2 13:17:02.213427 systemd[1]: Reached target swap.target - Swaps. Mar 2 13:17:02.213435 systemd[1]: Reached target timers.target - Timer Units. Mar 2 13:17:02.213443 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 13:17:02.213452 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 13:17:02.213461 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 2 13:17:02.213469 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 2 13:17:02.213477 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 2 13:17:02.213485 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 2 13:17:02.213493 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 13:17:02.213501 systemd[1]: Reached target sockets.target - Socket Units. Mar 2 13:17:02.213510 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 2 13:17:02.213520 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Mar 2 13:17:02.213528 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 2 13:17:02.213536 systemd[1]: Starting systemd-fsck-usr.service... Mar 2 13:17:02.213544 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 2 13:17:02.213569 systemd-journald[217]: Collecting audit messages is disabled. Mar 2 13:17:02.213591 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 13:17:02.213600 systemd-journald[217]: Journal started Mar 2 13:17:02.213634 systemd-journald[217]: Runtime Journal (/run/log/journal/86fcc8562cab4420b6233fadafa8a0c4) is 8.0M, max 78.5M, 70.5M free. Mar 2 13:17:02.219208 systemd-modules-load[218]: Inserted module 'overlay' Mar 2 13:17:02.226908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:17:02.251573 systemd[1]: Started systemd-journald.service - Journal Service. Mar 2 13:17:02.251640 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 2 13:17:02.259449 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 2 13:17:02.271378 kernel: Bridge firewalling registered Mar 2 13:17:02.264569 systemd-modules-load[218]: Inserted module 'br_netfilter' Mar 2 13:17:02.266887 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 13:17:02.281354 systemd[1]: Finished systemd-fsck-usr.service. Mar 2 13:17:02.290387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 13:17:02.298577 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:17:02.315917 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 13:17:02.323777 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Mar 2 13:17:02.350745 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 2 13:17:02.362830 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 2 13:17:02.373340 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:17:02.383481 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:17:02.388555 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 13:17:02.405156 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 13:17:02.425959 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 2 13:17:02.433087 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 2 13:17:02.450405 dracut-cmdline[251]: dracut-dracut-053 Mar 2 13:17:02.461684 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7ecec6e0f4313fe7e6ab44dac0c51edbf0b22765a212833abcec729cd9dc543f Mar 2 13:17:02.488204 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 13:17:02.506568 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 13:17:02.517213 systemd-resolved[253]: Positive Trust Anchors: Mar 2 13:17:02.517222 systemd-resolved[253]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 13:17:02.517253 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 13:17:02.519463 systemd-resolved[253]: Defaulting to hostname 'linux'. Mar 2 13:17:02.524733 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 13:17:02.536785 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 2 13:17:02.629645 kernel: SCSI subsystem initialized Mar 2 13:17:02.638632 kernel: Loading iSCSI transport class v2.0-870. Mar 2 13:17:02.646647 kernel: iscsi: registered transport (tcp) Mar 2 13:17:02.663696 kernel: iscsi: registered transport (qla4xxx) Mar 2 13:17:02.663757 kernel: QLogic iSCSI HBA Driver Mar 2 13:17:02.698303 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 2 13:17:02.711040 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 2 13:17:02.739432 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 2 13:17:02.739467 kernel: device-mapper: uevent: version 1.0.3 Mar 2 13:17:02.744755 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 2 13:17:02.795643 kernel: raid6: neonx8 gen() 15801 MB/s Mar 2 13:17:02.809627 kernel: raid6: neonx4 gen() 14945 MB/s Mar 2 13:17:02.828625 kernel: raid6: neonx2 gen() 13262 MB/s Mar 2 13:17:02.848626 kernel: raid6: neonx1 gen() 10485 MB/s Mar 2 13:17:02.867624 kernel: raid6: int64x8 gen() 6977 MB/s Mar 2 13:17:02.886624 kernel: raid6: int64x4 gen() 7365 MB/s Mar 2 13:17:02.906628 kernel: raid6: int64x2 gen() 6146 MB/s Mar 2 13:17:02.928799 kernel: raid6: int64x1 gen() 5072 MB/s Mar 2 13:17:02.928809 kernel: raid6: using algorithm neonx8 gen() 15801 MB/s Mar 2 13:17:02.951434 kernel: raid6: .... xor() 12045 MB/s, rmw enabled Mar 2 13:17:02.951444 kernel: raid6: using neon recovery algorithm Mar 2 13:17:02.962433 kernel: xor: measuring software checksum speed Mar 2 13:17:02.962456 kernel: 8regs : 19754 MB/sec Mar 2 13:17:02.965372 kernel: 32regs : 19622 MB/sec Mar 2 13:17:02.971560 kernel: arm64_neon : 26399 MB/sec Mar 2 13:17:02.971572 kernel: xor: using function: arm64_neon (26399 MB/sec) Mar 2 13:17:03.021634 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 2 13:17:03.031406 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 2 13:17:03.044755 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:17:03.065722 systemd-udevd[438]: Using default interface naming scheme 'v255'. Mar 2 13:17:03.070295 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 13:17:03.085748 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 2 13:17:03.107424 dracut-pre-trigger[455]: rd.md=0: removing MD RAID activation Mar 2 13:17:03.135812 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 2 13:17:03.148895 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 13:17:03.188454 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 13:17:03.209030 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 2 13:17:03.235787 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 2 13:17:03.244852 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 13:17:03.261037 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 13:17:03.272786 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 13:17:03.296777 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 2 13:17:03.312649 kernel: hv_vmbus: Vmbus version:5.3 Mar 2 13:17:03.313997 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 2 13:17:03.318315 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:17:03.348580 kernel: hv_vmbus: registering driver hv_netvsc Mar 2 13:17:03.348608 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 2 13:17:03.338899 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 13:17:03.373274 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 2 13:17:03.373300 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 2 13:17:03.373311 kernel: hv_vmbus: registering driver hv_storvsc Mar 2 13:17:03.356604 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:17:03.384711 kernel: scsi host1: storvsc_host_t Mar 2 13:17:03.356795 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 2 13:17:03.407073 kernel: scsi host0: storvsc_host_t Mar 2 13:17:03.407233 kernel: hv_vmbus: registering driver hid_hyperv Mar 2 13:17:03.407254 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 2 13:17:03.407265 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 2 13:17:03.388502 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:17:03.422280 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 2 13:17:03.422301 kernel: hv_netvsc 7ced8d87-5586-7ced-8d87-55867ced8d87 eth0: VF slot 1 added Mar 2 13:17:03.437555 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 2 13:17:03.437763 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 2 13:17:03.441017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:17:03.448240 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 2 13:17:03.475224 kernel: PTP clock support registered Mar 2 13:17:03.474822 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:17:03.494925 kernel: hv_utils: Registering HyperV Utility Driver Mar 2 13:17:03.494955 kernel: hv_vmbus: registering driver hv_utils Mar 2 13:17:03.474965 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:17:03.946733 kernel: hv_utils: Shutdown IC version 3.2 Mar 2 13:17:03.946756 kernel: hv_utils: Heartbeat IC version 3.0 Mar 2 13:17:03.946766 kernel: hv_utils: TimeSync IC version 4.0 Mar 2 13:17:03.946776 kernel: hv_vmbus: registering driver hv_pci Mar 2 13:17:03.504984 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:17:03.945337 systemd-resolved[253]: Clock change detected. Flushing caches. 
Mar 2 13:17:03.972137 kernel: hv_pci 74b4f976-b7f8-47d4-9ccc-da24345b6e55: PCI VMBus probing: Using version 0x10004 Mar 2 13:17:03.972349 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 2 13:17:03.975808 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 2 13:17:03.979232 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 2 13:17:03.991475 kernel: hv_pci 74b4f976-b7f8-47d4-9ccc-da24345b6e55: PCI host bridge to bus b7f8:00 Mar 2 13:17:03.991696 kernel: pci_bus b7f8:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 2 13:17:03.986974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:17:04.022757 kernel: pci_bus b7f8:00: No busn resource found for root bus, will use [bus 00-ff] Mar 2 13:17:04.022954 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 2 13:17:04.023081 kernel: pci b7f8:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 2 13:17:04.023106 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 2 13:17:04.023212 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 2 13:17:04.023355 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 2 13:17:04.023994 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 2 13:17:04.040923 kernel: pci b7f8:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 2 13:17:04.030600 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 2 13:17:04.057679 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 2 13:17:04.057700 kernel: pci b7f8:00:02.0: enabling Extended Tags Mar 2 13:17:04.057728 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 2 13:17:04.062344 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#27 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 2 13:17:04.086277 kernel: pci b7f8:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at b7f8:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 2 13:17:04.100122 kernel: pci_bus b7f8:00: busn_res: [bus 00-ff] end is updated to 00 Mar 2 13:17:04.100374 kernel: pci b7f8:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 2 13:17:04.100595 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:17:04.138245 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#190 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 2 13:17:04.166994 kernel: mlx5_core b7f8:00:02.0: enabling device (0000 -> 0002) Mar 2 13:17:04.173231 kernel: mlx5_core b7f8:00:02.0: firmware version: 16.30.5026 Mar 2 13:17:04.365238 kernel: hv_netvsc 7ced8d87-5586-7ced-8d87-55867ced8d87 eth0: VF registering: eth1 Mar 2 13:17:04.365435 kernel: mlx5_core b7f8:00:02.0 eth1: joined to eth0 Mar 2 13:17:04.374390 kernel: mlx5_core b7f8:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 2 13:17:04.384251 kernel: mlx5_core b7f8:00:02.0 enP47096s1: renamed from eth1 Mar 2 13:17:04.642561 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 2 13:17:04.671248 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (494) Mar 2 13:17:04.686827 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 2 13:17:04.705080 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Mar 2 13:17:04.758234 kernel: BTRFS: device fsid 0d0ab669-47ba-4267-b368-82e952673c8e devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (488) Mar 2 13:17:04.772290 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 2 13:17:04.777829 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 2 13:17:04.802469 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 2 13:17:04.826573 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 2 13:17:04.834229 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 2 13:17:04.843228 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 2 13:17:05.846782 disk-uuid[613]: The operation has completed successfully. Mar 2 13:17:05.852766 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 2 13:17:05.923067 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 2 13:17:05.924265 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 2 13:17:05.956346 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 2 13:17:05.966321 sh[726]: Success Mar 2 13:17:05.997408 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 2 13:17:06.276518 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 2 13:17:06.284358 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 2 13:17:06.291245 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 2 13:17:06.324014 kernel: BTRFS info (device dm-0): first mount of filesystem 0d0ab669-47ba-4267-b368-82e952673c8e Mar 2 13:17:06.324066 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 2 13:17:06.329578 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 2 13:17:06.333759 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 2 13:17:06.337272 kernel: BTRFS info (device dm-0): using free space tree Mar 2 13:17:06.764793 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 2 13:17:06.768951 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 2 13:17:06.786461 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 2 13:17:06.793407 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 2 13:17:06.832394 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3 Mar 2 13:17:06.832449 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 2 13:17:06.836068 kernel: BTRFS info (device sda6): using free space tree Mar 2 13:17:06.882237 kernel: BTRFS info (device sda6): auto enabling async discard Mar 2 13:17:06.886171 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 13:17:06.907338 kernel: BTRFS info (device sda6): last unmount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3 Mar 2 13:17:06.907557 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 2 13:17:06.920202 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 2 13:17:06.928471 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Mar 2 13:17:06.938036 systemd-networkd[907]: lo: Link UP Mar 2 13:17:06.938040 systemd-networkd[907]: lo: Gained carrier Mar 2 13:17:06.940164 systemd-networkd[907]: Enumeration completed Mar 2 13:17:06.941418 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 2 13:17:06.943610 systemd-networkd[907]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:17:06.943613 systemd-networkd[907]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 13:17:06.961045 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 13:17:06.975925 systemd[1]: Reached target network.target - Network. Mar 2 13:17:07.037233 kernel: mlx5_core b7f8:00:02.0 enP47096s1: Link up Mar 2 13:17:07.073250 kernel: hv_netvsc 7ced8d87-5586-7ced-8d87-55867ced8d87 eth0: Data path switched to VF: enP47096s1 Mar 2 13:17:07.074135 systemd-networkd[907]: enP47096s1: Link UP Mar 2 13:17:07.074243 systemd-networkd[907]: eth0: Link UP Mar 2 13:17:07.074346 systemd-networkd[907]: eth0: Gained carrier Mar 2 13:17:07.074355 systemd-networkd[907]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:17:07.094800 systemd-networkd[907]: enP47096s1: Gained carrier Mar 2 13:17:07.107260 systemd-networkd[907]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 2 13:17:07.932954 ignition[911]: Ignition 2.19.0 Mar 2 13:17:07.932971 ignition[911]: Stage: fetch-offline Mar 2 13:17:07.937697 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 2 13:17:07.933008 ignition[911]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:17:07.933017 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:17:07.933129 ignition[911]: parsed url from cmdline: "" Mar 2 13:17:07.933133 ignition[911]: no config URL provided Mar 2 13:17:07.933137 ignition[911]: reading system config file "/usr/lib/ignition/user.ign" Mar 2 13:17:07.933147 ignition[911]: no config at "/usr/lib/ignition/user.ign" Mar 2 13:17:07.961494 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 2 13:17:07.933153 ignition[911]: failed to fetch config: resource requires networking Mar 2 13:17:07.935941 ignition[911]: Ignition finished successfully Mar 2 13:17:07.981783 ignition[919]: Ignition 2.19.0 Mar 2 13:17:07.981790 ignition[919]: Stage: fetch Mar 2 13:17:07.982010 ignition[919]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:17:07.982023 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:17:07.982137 ignition[919]: parsed url from cmdline: "" Mar 2 13:17:07.982141 ignition[919]: no config URL provided Mar 2 13:17:07.982146 ignition[919]: reading system config file "/usr/lib/ignition/user.ign" Mar 2 13:17:07.982153 ignition[919]: no config at "/usr/lib/ignition/user.ign" Mar 2 13:17:07.982178 ignition[919]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 2 13:17:08.118451 ignition[919]: GET result: OK Mar 2 13:17:08.118521 ignition[919]: config has been read from IMDS userdata Mar 2 13:17:08.118566 ignition[919]: parsing config with SHA512: c37460fba49cf812d4d520978077b888417b24eecf33e1f824e963795be33d87d13ca7cf42ee4974547eb053ef3cec5e854d2edabe687c174e999473bbcf3023 Mar 2 13:17:08.122585 unknown[919]: fetched base config from "system" Mar 2 13:17:08.123045 ignition[919]: fetch: fetch complete Mar 2 13:17:08.122593 unknown[919]: fetched base config from "system" Mar 2 13:17:08.123052 ignition[919]: 
fetch: fetch passed Mar 2 13:17:08.122598 unknown[919]: fetched user config from "azure" Mar 2 13:17:08.123118 ignition[919]: Ignition finished successfully Mar 2 13:17:08.131015 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 2 13:17:08.153500 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 2 13:17:08.175679 ignition[925]: Ignition 2.19.0 Mar 2 13:17:08.179954 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 2 13:17:08.175690 ignition[925]: Stage: kargs Mar 2 13:17:08.175859 ignition[925]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:17:08.175868 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:17:08.176906 ignition[925]: kargs: kargs passed Mar 2 13:17:08.198378 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 2 13:17:08.176961 ignition[925]: Ignition finished successfully Mar 2 13:17:08.214407 ignition[931]: Ignition 2.19.0 Mar 2 13:17:08.220263 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 2 13:17:08.214415 ignition[931]: Stage: disks Mar 2 13:17:08.226026 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 2 13:17:08.214624 ignition[931]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:17:08.235226 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 2 13:17:08.214635 ignition[931]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:17:08.242926 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 13:17:08.215686 ignition[931]: disks: disks passed Mar 2 13:17:08.251923 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 13:17:08.215734 ignition[931]: Ignition finished successfully Mar 2 13:17:08.260064 systemd[1]: Reached target basic.target - Basic System. Mar 2 13:17:08.281489 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 2 13:17:08.286058 systemd-networkd[907]: eth0: Gained IPv6LL Mar 2 13:17:08.364669 systemd-fsck[940]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 2 13:17:08.375904 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 2 13:17:08.393476 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 2 13:17:08.454270 kernel: EXT4-fs (sda9): mounted filesystem a5f5c21d-8a27-4a94-875f-5735c39d000b r/w with ordered data mode. Quota mode: none. Mar 2 13:17:08.454838 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 2 13:17:08.458974 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 2 13:17:08.501313 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 13:17:08.520229 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (951) Mar 2 13:17:08.530788 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3 Mar 2 13:17:08.530801 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 2 13:17:08.535196 kernel: BTRFS info (device sda6): using free space tree Mar 2 13:17:08.542588 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 2 13:17:08.551233 kernel: BTRFS info (device sda6): auto enabling async discard Mar 2 13:17:08.551410 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 2 13:17:08.562406 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 2 13:17:08.572231 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 13:17:08.578900 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 2 13:17:08.586521 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 2 13:17:08.604468 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 2 13:17:09.214662 coreos-metadata[968]: Mar 02 13:17:09.214 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 2 13:17:09.223708 coreos-metadata[968]: Mar 02 13:17:09.223 INFO Fetch successful Mar 2 13:17:09.223708 coreos-metadata[968]: Mar 02 13:17:09.223 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 2 13:17:09.237855 coreos-metadata[968]: Mar 02 13:17:09.235 INFO Fetch successful Mar 2 13:17:09.254265 coreos-metadata[968]: Mar 02 13:17:09.254 INFO wrote hostname ci-4081.3.101-5317e0e64c to /sysroot/etc/hostname Mar 2 13:17:09.262032 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 2 13:17:09.432948 initrd-setup-root[980]: cut: /sysroot/etc/passwd: No such file or directory Mar 2 13:17:09.477193 initrd-setup-root[987]: cut: /sysroot/etc/group: No such file or directory Mar 2 13:17:09.485797 initrd-setup-root[994]: cut: /sysroot/etc/shadow: No such file or directory Mar 2 13:17:09.492440 initrd-setup-root[1001]: cut: /sysroot/etc/gshadow: No such file or directory Mar 2 13:17:10.747605 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 2 13:17:10.762714 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 2 13:17:10.772395 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 2 13:17:10.789767 kernel: BTRFS info (device sda6): last unmount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3 Mar 2 13:17:10.784876 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 2 13:17:10.814924 ignition[1069]: INFO : Ignition 2.19.0 Mar 2 13:17:10.814924 ignition[1069]: INFO : Stage: mount Mar 2 13:17:10.823596 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:17:10.823596 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:17:10.823596 ignition[1069]: INFO : mount: mount passed Mar 2 13:17:10.823596 ignition[1069]: INFO : Ignition finished successfully Mar 2 13:17:10.823302 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 2 13:17:10.847891 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 2 13:17:10.857242 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 2 13:17:10.874367 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 13:17:10.901390 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1080) Mar 2 13:17:10.901435 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3 Mar 2 13:17:10.906614 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 2 13:17:10.910191 kernel: BTRFS info (device sda6): using free space tree Mar 2 13:17:10.917228 kernel: BTRFS info (device sda6): auto enabling async discard Mar 2 13:17:10.919551 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 2 13:17:10.946339 ignition[1098]: INFO : Ignition 2.19.0 Mar 2 13:17:10.946339 ignition[1098]: INFO : Stage: files Mar 2 13:17:10.953072 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:17:10.953072 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:17:10.953072 ignition[1098]: DEBUG : files: compiled without relabeling support, skipping Mar 2 13:17:10.953072 ignition[1098]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 2 13:17:10.953072 ignition[1098]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 2 13:17:10.998034 ignition[1098]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 2 13:17:11.004128 ignition[1098]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 2 13:17:11.004128 ignition[1098]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 2 13:17:11.000732 unknown[1098]: wrote ssh authorized keys file for user: core Mar 2 13:17:11.019773 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 2 13:17:11.019773 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 2 13:17:11.019773 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 2 13:17:11.019773 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 2 13:17:11.068393 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 2 13:17:11.187185 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 2 
13:17:11.187185 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 2 13:17:11.204274 
ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 2 13:17:11.204274 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1 Mar 2 13:17:11.608708 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 2 13:17:11.867526 ignition[1098]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 2 13:17:11.867526 ignition[1098]: INFO : files: op(c): [started] processing unit "containerd.service" Mar 2 13:17:11.882822 ignition[1098]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 2 13:17:11.893526 ignition[1098]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 2 13:17:11.893526 ignition[1098]: INFO : files: op(c): [finished] processing unit "containerd.service" Mar 2 13:17:11.893526 ignition[1098]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Mar 2 13:17:11.893526 ignition[1098]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 13:17:11.893526 ignition[1098]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 13:17:11.893526 ignition[1098]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Mar 2 13:17:11.893526 ignition[1098]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Mar 2 13:17:11.893526 ignition[1098]: INFO : files: op(10): [finished] 
setting preset to enabled for "prepare-helm.service" Mar 2 13:17:11.893526 ignition[1098]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 2 13:17:11.893526 ignition[1098]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 2 13:17:11.893526 ignition[1098]: INFO : files: files passed Mar 2 13:17:11.893526 ignition[1098]: INFO : Ignition finished successfully Mar 2 13:17:11.904269 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 2 13:17:11.938637 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 2 13:17:11.946433 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 2 13:17:12.033075 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:17:12.033075 initrd-setup-root-after-ignition[1124]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:17:11.962432 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 2 13:17:12.052087 initrd-setup-root-after-ignition[1128]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:17:11.962528 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 2 13:17:11.972614 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 13:17:11.982045 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 2 13:17:12.009773 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 2 13:17:12.050712 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 2 13:17:12.050826 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Mar 2 13:17:12.058974 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 2 13:17:12.071207 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 2 13:17:12.080951 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 2 13:17:12.101474 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 2 13:17:12.140054 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:17:12.154467 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 2 13:17:12.172059 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 2 13:17:12.173251 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 2 13:17:12.182146 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:17:12.192926 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:17:12.203302 systemd[1]: Stopped target timers.target - Timer Units.
Mar 2 13:17:12.212056 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 2 13:17:12.212127 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:17:12.226466 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 2 13:17:12.236132 systemd[1]: Stopped target basic.target - Basic System.
Mar 2 13:17:12.244716 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 2 13:17:12.253400 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:17:12.263380 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 2 13:17:12.273202 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 2 13:17:12.282727 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:17:12.293045 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 2 13:17:12.303050 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 2 13:17:12.312117 systemd[1]: Stopped target swap.target - Swaps.
Mar 2 13:17:12.320272 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 2 13:17:12.320344 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:17:12.333023 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:17:12.338019 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:17:12.348125 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 2 13:17:12.352763 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:17:12.358818 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 2 13:17:12.358878 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:17:12.374424 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 2 13:17:12.374485 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:17:12.385847 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 2 13:17:12.385946 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 2 13:17:12.394733 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 2 13:17:12.394791 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 2 13:17:12.419362 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 2 13:17:12.463429 ignition[1151]: INFO : Ignition 2.19.0
Mar 2 13:17:12.463429 ignition[1151]: INFO : Stage: umount
Mar 2 13:17:12.463429 ignition[1151]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:17:12.463429 ignition[1151]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:17:12.434965 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 2 13:17:12.497759 ignition[1151]: INFO : umount: umount passed
Mar 2 13:17:12.497759 ignition[1151]: INFO : Ignition finished successfully
Mar 2 13:17:12.444961 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 2 13:17:12.445033 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:17:12.457270 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 2 13:17:12.457343 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:17:12.469610 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 2 13:17:12.470158 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 2 13:17:12.472322 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 2 13:17:12.479031 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 2 13:17:12.479177 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 2 13:17:12.488274 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 2 13:17:12.488335 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 2 13:17:12.502100 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 2 13:17:12.502154 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 2 13:17:12.509753 systemd[1]: Stopped target network.target - Network.
Mar 2 13:17:12.519985 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 2 13:17:12.520067 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:17:12.529817 systemd[1]: Stopped target paths.target - Path Units.
Mar 2 13:17:12.539661 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 2 13:17:12.549243 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:17:12.557828 systemd[1]: Stopped target slices.target - Slice Units.
Mar 2 13:17:12.567896 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 2 13:17:12.576114 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 2 13:17:12.576164 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:17:12.584900 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 2 13:17:12.584977 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:17:12.593764 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 2 13:17:12.593811 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 2 13:17:12.602142 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 2 13:17:12.602177 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 2 13:17:12.611915 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 2 13:17:12.621365 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 2 13:17:12.644256 systemd-networkd[907]: eth0: DHCPv6 lease lost
Mar 2 13:17:12.645789 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 2 13:17:12.645940 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 2 13:17:12.657012 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 2 13:17:12.657120 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 2 13:17:12.668047 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 2 13:17:12.668098 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:17:12.690455 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 2 13:17:12.698365 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 2 13:17:12.844521 kernel: hv_netvsc 7ced8d87-5586-7ced-8d87-55867ced8d87 eth0: Data path switched from VF: enP47096s1
Mar 2 13:17:12.698431 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 13:17:12.710850 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 13:17:12.710909 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:17:12.718925 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 2 13:17:12.718975 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:17:12.728179 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 2 13:17:12.728232 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:17:12.737919 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:17:12.753396 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 2 13:17:12.753493 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 2 13:17:12.777640 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 2 13:17:12.777763 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:17:12.788024 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 2 13:17:12.788128 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:17:12.796319 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 2 13:17:12.796378 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:17:12.805392 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 2 13:17:12.805448 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:17:12.819779 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 2 13:17:12.819834 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:17:12.844571 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:17:12.844630 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:17:12.855566 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 2 13:17:12.855612 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 2 13:17:12.874392 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 2 13:17:12.888280 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 2 13:17:12.888347 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:17:12.897155 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 2 13:17:12.897197 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:17:12.908327 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 2 13:17:12.908377 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:17:12.919639 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:17:12.919680 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:17:12.929477 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 2 13:17:12.929602 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 2 13:17:12.938779 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 2 13:17:12.940938 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 2 13:17:12.947771 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 2 13:17:12.972483 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 2 13:17:13.237671 systemd[1]: Switching root.
Mar 2 13:17:13.265715 systemd-journald[217]: Journal stopped
Mar 2 13:17:18.343910 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Mar 2 13:17:18.343935 kernel: SELinux: policy capability network_peer_controls=1
Mar 2 13:17:18.343945 kernel: SELinux: policy capability open_perms=1
Mar 2 13:17:18.343954 kernel: SELinux: policy capability extended_socket_class=1
Mar 2 13:17:18.343962 kernel: SELinux: policy capability always_check_network=0
Mar 2 13:17:18.343970 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 2 13:17:18.343978 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 2 13:17:18.343987 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 2 13:17:18.343994 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 2 13:17:18.344003 kernel: audit: type=1403 audit(1772457435.257:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 2 13:17:18.344013 systemd[1]: Successfully loaded SELinux policy in 161.385ms.
Mar 2 13:17:18.344022 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.775ms.
Mar 2 13:17:18.344034 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 13:17:18.344043 systemd[1]: Detected virtualization microsoft.
Mar 2 13:17:18.344053 systemd[1]: Detected architecture arm64.
Mar 2 13:17:18.344063 systemd[1]: Detected first boot.
Mar 2 13:17:18.344073 systemd[1]: Hostname set to .
Mar 2 13:17:18.344082 systemd[1]: Initializing machine ID from random generator.
Mar 2 13:17:18.344091 zram_generator::config[1209]: No configuration found.
Mar 2 13:17:18.344100 systemd[1]: Populated /etc with preset unit settings.
Mar 2 13:17:18.344110 systemd[1]: Queued start job for default target multi-user.target.
Mar 2 13:17:18.344120 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 2 13:17:18.344130 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 2 13:17:18.344139 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 2 13:17:18.344148 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 2 13:17:18.344158 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 2 13:17:18.344167 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 2 13:17:18.344176 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 2 13:17:18.344187 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 2 13:17:18.344196 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 2 13:17:18.344205 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:17:18.344222 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:17:18.344234 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 2 13:17:18.344245 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 2 13:17:18.344255 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 2 13:17:18.344264 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:17:18.344273 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 2 13:17:18.344284 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:17:18.344294 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 2 13:17:18.344303 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:17:18.344315 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 13:17:18.344324 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:17:18.344334 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:17:18.344343 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 2 13:17:18.344354 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 2 13:17:18.344364 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 2 13:17:18.344373 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 2 13:17:18.344382 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:17:18.344392 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:17:18.344401 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:17:18.344411 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 2 13:17:18.344422 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 2 13:17:18.344432 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 2 13:17:18.344441 systemd[1]: Mounting media.mount - External Media Directory...
Mar 2 13:17:18.344452 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 2 13:17:18.344461 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 2 13:17:18.344471 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 2 13:17:18.344482 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 2 13:17:18.344492 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:17:18.344501 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:17:18.344511 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 2 13:17:18.344521 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:17:18.344530 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:17:18.344540 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:17:18.344549 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 2 13:17:18.344558 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:17:18.344570 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 2 13:17:18.344580 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 2 13:17:18.344589 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 2 13:17:18.344599 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:17:18.344609 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:17:18.344618 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 2 13:17:18.344628 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 2 13:17:18.344638 kernel: loop: module loaded
Mar 2 13:17:18.344662 systemd-journald[1305]: Collecting audit messages is disabled.
Mar 2 13:17:18.344682 kernel: fuse: init (API version 7.39)
Mar 2 13:17:18.344691 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 13:17:18.344702 systemd-journald[1305]: Journal started
Mar 2 13:17:18.344723 systemd-journald[1305]: Runtime Journal (/run/log/journal/348418f4a6af41cd93f03ee3dc2faad8) is 8.0M, max 78.5M, 70.5M free.
Mar 2 13:17:18.366930 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:17:18.368193 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 2 13:17:18.382234 kernel: ACPI: bus type drm_connector registered
Mar 2 13:17:18.376677 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 2 13:17:18.387760 systemd[1]: Mounted media.mount - External Media Directory.
Mar 2 13:17:18.392422 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 2 13:17:18.397336 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 2 13:17:18.402266 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 2 13:17:18.406708 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 2 13:17:18.413166 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:17:18.419141 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 2 13:17:18.419450 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 2 13:17:18.425095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:17:18.425526 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:17:18.431027 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:17:18.431178 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:17:18.436138 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:17:18.436343 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:17:18.441959 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 2 13:17:18.442103 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 2 13:17:18.447402 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:17:18.447579 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:17:18.452805 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:17:18.458485 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 2 13:17:18.464839 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 2 13:17:18.471182 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:17:18.485364 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 2 13:17:18.494297 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 2 13:17:18.503369 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 2 13:17:18.508843 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 2 13:17:18.513451 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 2 13:17:18.519727 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 2 13:17:18.524947 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:17:18.526101 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 2 13:17:18.530971 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:17:18.534568 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:17:18.542366 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:17:18.553409 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 2 13:17:18.563406 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 2 13:17:18.568918 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 2 13:17:18.574896 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 2 13:17:18.583694 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 2 13:17:18.589145 udevadm[1370]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 2 13:17:18.617642 systemd-journald[1305]: Time spent on flushing to /var/log/journal/348418f4a6af41cd93f03ee3dc2faad8 is 12.700ms for 888 entries.
Mar 2 13:17:18.617642 systemd-journald[1305]: System Journal (/var/log/journal/348418f4a6af41cd93f03ee3dc2faad8) is 8.0M, max 2.6G, 2.6G free.
Mar 2 13:17:18.648999 systemd-journald[1305]: Received client request to flush runtime journal.
Mar 2 13:17:18.629877 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:17:18.645123 systemd-tmpfiles[1367]: ACLs are not supported, ignoring.
Mar 2 13:17:18.645135 systemd-tmpfiles[1367]: ACLs are not supported, ignoring.
Mar 2 13:17:18.653154 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:17:18.659533 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 2 13:17:18.674412 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 2 13:17:18.791012 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 2 13:17:18.800394 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:17:18.821001 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Mar 2 13:17:18.821019 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Mar 2 13:17:18.824877 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:17:19.304702 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 2 13:17:19.314416 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:17:19.345572 systemd-udevd[1393]: Using default interface naming scheme 'v255'.
Mar 2 13:17:19.507143 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:17:19.525389 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:17:19.578469 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 2 13:17:19.616462 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Mar 2 13:17:19.629241 kernel: mousedev: PS/2 mouse device common for all mice
Mar 2 13:17:19.668465 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#2 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 2 13:17:19.692008 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 2 13:17:19.711820 kernel: hv_vmbus: registering driver hv_balloon
Mar 2 13:17:19.711889 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 2 13:17:19.716820 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 2 13:17:19.752739 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:17:19.768239 kernel: hv_vmbus: registering driver hyperv_fb
Mar 2 13:17:19.768300 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 2 13:17:19.774358 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 2 13:17:19.778731 kernel: Console: switching to colour dummy device 80x25
Mar 2 13:17:19.781368 kernel: Console: switching to colour frame buffer device 128x48
Mar 2 13:17:19.809402 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:17:19.809693 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:17:19.810241 systemd-networkd[1401]: lo: Link UP
Mar 2 13:17:19.810245 systemd-networkd[1401]: lo: Gained carrier
Mar 2 13:17:19.812525 systemd-networkd[1401]: Enumeration completed
Mar 2 13:17:19.812906 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:17:19.812966 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 13:17:19.817548 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:17:19.827442 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 2 13:17:19.861248 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1410)
Mar 2 13:17:19.862401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:17:19.892397 kernel: mlx5_core b7f8:00:02.0 enP47096s1: Link up
Mar 2 13:17:19.923883 systemd-networkd[1401]: enP47096s1: Link UP
Mar 2 13:17:19.924021 systemd-networkd[1401]: eth0: Link UP
Mar 2 13:17:19.924025 systemd-networkd[1401]: eth0: Gained carrier
Mar 2 13:17:19.924039 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:17:19.924694 kernel: hv_netvsc 7ced8d87-5586-7ced-8d87-55867ced8d87 eth0: Data path switched to VF: enP47096s1
Mar 2 13:17:19.932290 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 2 13:17:19.935753 systemd-networkd[1401]: enP47096s1: Gained carrier
Mar 2 13:17:19.941374 systemd-networkd[1401]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 2 13:17:19.995358 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 2 13:17:20.005405 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 2 13:17:20.128828 lvm[1485]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 13:17:20.164739 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 2 13:17:20.171320 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:17:20.182338 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 2 13:17:20.187744 lvm[1488]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 13:17:20.211770 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 2 13:17:20.218949 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 2 13:17:20.224523 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 2 13:17:20.224556 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:17:20.229702 systemd[1]: Reached target machines.target - Containers.
Mar 2 13:17:20.235034 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 2 13:17:20.248355 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 2 13:17:20.255490 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 2 13:17:20.260027 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:17:20.262374 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 2 13:17:20.272143 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 2 13:17:20.279876 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 2 13:17:20.288587 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 2 13:17:20.327712 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 2 13:17:20.329735 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 2 13:17:20.347909 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 2 13:17:20.358288 kernel: loop0: detected capacity change from 0 to 114432
Mar 2 13:17:20.400740 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:17:20.742248 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 2 13:17:20.784236 kernel: loop1: detected capacity change from 0 to 31320
Mar 2 13:17:21.188393 kernel: loop2: detected capacity change from 0 to 209336
Mar 2 13:17:21.210307 systemd-networkd[1401]: eth0: Gained IPv6LL
Mar 2 13:17:21.217179 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 2 13:17:21.247236 kernel: loop3: detected capacity change from 0 to 114328
Mar 2 13:17:21.665241 kernel: loop4: detected capacity change from 0 to 114432
Mar 2 13:17:21.677239 kernel: loop5: detected capacity change from 0 to 31320
Mar 2 13:17:21.689232 kernel: loop6: detected capacity change from 0 to 209336
Mar 2 13:17:21.705234 kernel: loop7: detected capacity change from 0 to 114328
Mar 2 13:17:21.711930 (sd-merge)[1516]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 2 13:17:21.712367 (sd-merge)[1516]: Merged extensions into '/usr'.
Mar 2 13:17:21.717870 systemd[1]: Reloading requested from client PID 1495 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 2 13:17:21.717884 systemd[1]: Reloading...
Mar 2 13:17:21.773254 zram_generator::config[1546]: No configuration found.
Mar 2 13:17:21.905892 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:17:21.975740 systemd[1]: Reloading finished in 257 ms.
Mar 2 13:17:21.991464 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 2 13:17:22.003355 systemd[1]: Starting ensure-sysext.service...
Mar 2 13:17:22.008226 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:17:22.015776 systemd[1]: Reloading requested from client PID 1604 ('systemctl') (unit ensure-sysext.service)...
Mar 2 13:17:22.015789 systemd[1]: Reloading...
Mar 2 13:17:22.032440 systemd-tmpfiles[1605]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 2 13:17:22.033089 systemd-tmpfiles[1605]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 2 13:17:22.034539 systemd-tmpfiles[1605]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 2 13:17:22.034892 systemd-tmpfiles[1605]: ACLs are not supported, ignoring.
Mar 2 13:17:22.035012 systemd-tmpfiles[1605]: ACLs are not supported, ignoring.
Mar 2 13:17:22.057543 systemd-tmpfiles[1605]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:17:22.057709 systemd-tmpfiles[1605]: Skipping /boot
Mar 2 13:17:22.066235 systemd-tmpfiles[1605]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:17:22.066393 systemd-tmpfiles[1605]: Skipping /boot
Mar 2 13:17:22.093311 zram_generator::config[1637]: No configuration found.
Mar 2 13:17:22.214162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:17:22.288845 systemd[1]: Reloading finished in 272 ms.
Mar 2 13:17:22.303280 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:17:22.320393 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 2 13:17:22.355397 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 2 13:17:22.362622 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 2 13:17:22.376423 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:17:22.385059 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 2 13:17:22.393745 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:17:22.395455 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:17:22.401779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:17:22.414477 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:17:22.422394 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:17:22.423155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:17:22.423401 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:17:22.438448 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:17:22.438606 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:17:22.446942 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:17:22.447140 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:17:22.468908 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 2 13:17:22.480914 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 2 13:17:22.494615 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:17:22.498457 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:17:22.507508 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:17:22.513054 augenrules[1733]: No rules
Mar 2 13:17:22.515489 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:17:22.528529 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:17:22.537010 systemd-resolved[1708]: Positive Trust Anchors:
Mar 2 13:17:22.537027 systemd-resolved[1708]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:17:22.537058 systemd-resolved[1708]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:17:22.537634 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:17:22.537956 systemd[1]: Reached target time-set.target - System Time Set.
Mar 2 13:17:22.544523 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 2 13:17:22.550139 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:17:22.550332 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:17:22.556080 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:17:22.556256 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:17:22.562083 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:17:22.562329 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:17:22.568741 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:17:22.568927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:17:22.578209 systemd[1]: Finished ensure-sysext.service.
Mar 2 13:17:22.586026 systemd-resolved[1708]: Using system hostname 'ci-4081.3.101-5317e0e64c'.
Mar 2 13:17:22.586372 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:17:22.586451 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:17:22.587862 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:17:22.593645 systemd[1]: Reached target network.target - Network.
Mar 2 13:17:22.597643 systemd[1]: Reached target network-online.target - Network is Online.
Mar 2 13:17:22.602533 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:17:23.057475 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 2 13:17:23.063732 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 2 13:17:26.117299 ldconfig[1492]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 2 13:17:26.132676 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 2 13:17:26.148486 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 2 13:17:26.162575 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 2 13:17:26.167801 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:17:26.172727 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 2 13:17:26.178696 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 2 13:17:26.184416 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 2 13:17:26.189438 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 2 13:17:26.195162 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 2 13:17:26.200674 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 2 13:17:26.200716 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:17:26.205005 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:17:26.211274 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 2 13:17:26.217926 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 2 13:17:26.223922 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 2 13:17:26.230349 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 2 13:17:26.235375 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:17:26.239825 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:17:26.244157 systemd[1]: System is tainted: cgroupsv1
Mar 2 13:17:26.244200 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:17:26.244235 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:17:26.248330 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 2 13:17:26.253440 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 2 13:17:26.259694 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 2 13:17:26.266440 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 2 13:17:26.278345 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 2 13:17:26.290379 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 2 13:17:26.299509 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 2 13:17:26.299583 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Mar 2 13:17:26.300267 (chronyd)[1764]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Mar 2 13:17:26.305430 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 2 13:17:26.312569 jq[1770]: false
Mar 2 13:17:26.312541 KVP[1773]: KVP starting; pid is:1773
Mar 2 13:17:26.311346 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 2 13:17:26.313377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:17:26.324600 chronyd[1778]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Mar 2 13:17:26.328775 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 2 13:17:26.336432 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 2 13:17:26.346339 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 2 13:17:26.357519 KVP[1773]: KVP LIC Version: 3.1
Mar 2 13:17:26.358238 kernel: hv_utils: KVP IC version 4.0
Mar 2 13:17:26.361404 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 2 13:17:26.360132 chronyd[1778]: Timezone right/UTC failed leap second check, ignoring
Mar 2 13:17:26.366432 chronyd[1778]: Loaded seccomp filter (level 2)
Mar 2 13:17:26.369084 extend-filesystems[1772]: Found loop4
Mar 2 13:17:26.372741 extend-filesystems[1772]: Found loop5
Mar 2 13:17:26.372741 extend-filesystems[1772]: Found loop6
Mar 2 13:17:26.372741 extend-filesystems[1772]: Found loop7
Mar 2 13:17:26.372741 extend-filesystems[1772]: Found sda
Mar 2 13:17:26.372741 extend-filesystems[1772]: Found sda1
Mar 2 13:17:26.372741 extend-filesystems[1772]: Found sda2
Mar 2 13:17:26.372741 extend-filesystems[1772]: Found sda3
Mar 2 13:17:26.372741 extend-filesystems[1772]: Found usr
Mar 2 13:17:26.372741 extend-filesystems[1772]: Found sda4
Mar 2 13:17:26.372741 extend-filesystems[1772]: Found sda6
Mar 2 13:17:26.372741 extend-filesystems[1772]: Found sda7
Mar 2 13:17:26.372741 extend-filesystems[1772]: Found sda9
Mar 2 13:17:26.372741 extend-filesystems[1772]: Checking size of /dev/sda9
Mar 2 13:17:26.512554 extend-filesystems[1772]: Old size kept for /dev/sda9
Mar 2 13:17:26.512554 extend-filesystems[1772]: Found sr0
Mar 2 13:17:26.381619 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 2 13:17:26.445083 dbus-daemon[1767]: [system] SELinux support is enabled
Mar 2 13:17:26.399555 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 2 13:17:26.418312 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 2 13:17:26.431422 systemd[1]: Starting update-engine.service - Update Engine...
Mar 2 13:17:26.443631 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 2 13:17:26.544512 update_engine[1796]: I20260302 13:17:26.490873 1796 main.cc:92] Flatcar Update Engine starting
Mar 2 13:17:26.544512 update_engine[1796]: I20260302 13:17:26.495170 1796 update_check_scheduler.cc:74] Next update check in 4m39s
Mar 2 13:17:26.467022 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 2 13:17:26.544818 jq[1801]: true
Mar 2 13:17:26.488032 systemd[1]: Started chronyd.service - NTP client/server.
Mar 2 13:17:26.511755 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 2 13:17:26.512014 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 2 13:17:26.516476 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 2 13:17:26.516750 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 2 13:17:26.541674 systemd[1]: motdgen.service: Deactivated successfully.
Mar 2 13:17:26.541915 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 2 13:17:26.552152 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 2 13:17:26.567600 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 2 13:17:26.567860 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 2 13:17:26.573793 coreos-metadata[1766]: Mar 02 13:17:26.573 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 2 13:17:26.576746 coreos-metadata[1766]: Mar 02 13:17:26.576 INFO Fetch successful
Mar 2 13:17:26.576746 coreos-metadata[1766]: Mar 02 13:17:26.576 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 2 13:17:26.585073 coreos-metadata[1766]: Mar 02 13:17:26.585 INFO Fetch successful
Mar 2 13:17:26.585073 coreos-metadata[1766]: Mar 02 13:17:26.585 INFO Fetching http://168.63.129.16/machine/84658e62-7ce8-44d5-80a8-04cf57c37ee4/7765fed5%2D97a7%2D4f88%2D9ee1%2D891bac8b1e5a.%5Fci%2D4081.3.101%2D5317e0e64c?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 2 13:17:26.587599 coreos-metadata[1766]: Mar 02 13:17:26.587 INFO Fetch successful
Mar 2 13:17:26.592826 coreos-metadata[1766]: Mar 02 13:17:26.592 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 2 13:17:26.606297 coreos-metadata[1766]: Mar 02 13:17:26.605 INFO Fetch successful
Mar 2 13:17:26.615000 (ntainerd)[1823]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 2 13:17:26.621338 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 2 13:17:26.621385 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 2 13:17:26.628596 systemd-logind[1790]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 2 13:17:26.629908 systemd-logind[1790]: New seat seat0.
Mar 2 13:17:26.632616 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 2 13:17:26.632643 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 2 13:17:26.639532 jq[1822]: true
Mar 2 13:17:26.643864 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 2 13:17:26.670237 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1832)
Mar 2 13:17:26.691070 systemd[1]: Started update-engine.service - Update Engine.
Mar 2 13:17:26.692685 tar[1820]: linux-arm64/LICENSE
Mar 2 13:17:26.692685 tar[1820]: linux-arm64/helm
Mar 2 13:17:26.701725 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 2 13:17:26.704477 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 2 13:17:26.781394 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 2 13:17:26.789346 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 2 13:17:26.948209 bash[1893]: Updated "/home/core/.ssh/authorized_keys"
Mar 2 13:17:26.955679 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 2 13:17:26.969511 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 2 13:17:27.074122 locksmithd[1863]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 2 13:17:27.209022 sshd_keygen[1806]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 2 13:17:27.236934 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 2 13:17:27.254600 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 2 13:17:27.262867 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Mar 2 13:17:27.279867 systemd[1]: issuegen.service: Deactivated successfully.
Mar 2 13:17:27.280133 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 2 13:17:27.297588 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 2 13:17:27.328723 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 2 13:17:27.342524 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 2 13:17:27.359586 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 2 13:17:27.366594 systemd[1]: Reached target getty.target - Login Prompts.
Mar 2 13:17:27.382381 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Mar 2 13:17:27.478913 tar[1820]: linux-arm64/README.md
Mar 2 13:17:27.491380 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 2 13:17:27.531231 containerd[1823]: time="2026-03-02T13:17:27.529895620Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 2 13:17:27.558381 containerd[1823]: time="2026-03-02T13:17:27.558334220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:17:27.559840 containerd[1823]: time="2026-03-02T13:17:27.559808820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:17:27.559940 containerd[1823]: time="2026-03-02T13:17:27.559926100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 2 13:17:27.560002 containerd[1823]: time="2026-03-02T13:17:27.559990220Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 2 13:17:27.560209 containerd[1823]: time="2026-03-02T13:17:27.560192060Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 2 13:17:27.560288 containerd[1823]: time="2026-03-02T13:17:27.560274940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 2 13:17:27.560424 containerd[1823]: time="2026-03-02T13:17:27.560406380Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:17:27.560486 containerd[1823]: time="2026-03-02T13:17:27.560473940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:17:27.560779 containerd[1823]: time="2026-03-02T13:17:27.560758340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:17:27.560851 containerd[1823]: time="2026-03-02T13:17:27.560838260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 2 13:17:27.560906 containerd[1823]: time="2026-03-02T13:17:27.560893540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:17:27.560958 containerd[1823]: time="2026-03-02T13:17:27.560946940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 2 13:17:27.561096 containerd[1823]: time="2026-03-02T13:17:27.561079940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:17:27.561455 containerd[1823]: time="2026-03-02T13:17:27.561436220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:17:27.561670 containerd[1823]: time="2026-03-02T13:17:27.561650500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:17:27.561745 containerd[1823]: time="2026-03-02T13:17:27.561732020Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 2 13:17:27.561872 containerd[1823]: time="2026-03-02T13:17:27.561857180Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 2 13:17:27.561979 containerd[1823]: time="2026-03-02T13:17:27.561964900Z" level=info msg="metadata content store policy set" policy=shared
Mar 2 13:17:27.581314 containerd[1823]: time="2026-03-02T13:17:27.581271300Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 2 13:17:27.581475 containerd[1823]: time="2026-03-02T13:17:27.581462260Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 2 13:17:27.581622 containerd[1823]: time="2026-03-02T13:17:27.581609460Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 2 13:17:27.581702 containerd[1823]: time="2026-03-02T13:17:27.581689740Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 2 13:17:27.583235 containerd[1823]: time="2026-03-02T13:17:27.581769260Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 2 13:17:27.583235 containerd[1823]: time="2026-03-02T13:17:27.581965780Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 2 13:17:27.583235 containerd[1823]: time="2026-03-02T13:17:27.582314100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 2 13:17:27.583235 containerd[1823]: time="2026-03-02T13:17:27.582434620Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 2 13:17:27.583235 containerd[1823]: time="2026-03-02T13:17:27.582451580Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 2 13:17:27.583235 containerd[1823]: time="2026-03-02T13:17:27.582465620Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 2 13:17:27.583235 containerd[1823]: time="2026-03-02T13:17:27.582479260Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 2 13:17:27.583235 containerd[1823]: time="2026-03-02T13:17:27.582493260Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 2 13:17:27.583235 containerd[1823]: time="2026-03-02T13:17:27.582511780Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 2 13:17:27.583235 containerd[1823]: time="2026-03-02T13:17:27.582526260Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 2 13:17:27.583235 containerd[1823]: time="2026-03-02T13:17:27.582541780Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 2 13:17:27.583235 containerd[1823]: time="2026-03-02T13:17:27.582556220Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 2 13:17:27.583235 containerd[1823]: time="2026-03-02T13:17:27.582569100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 2 13:17:27.583235 containerd[1823]: time="2026-03-02T13:17:27.582583620Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 2 13:17:27.583588 containerd[1823]: time="2026-03-02T13:17:27.582605180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 2 13:17:27.583588 containerd[1823]: time="2026-03-02T13:17:27.582619100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 2 13:17:27.583588 containerd[1823]: time="2026-03-02T13:17:27.582632140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 2 13:17:27.583588 containerd[1823]: time="2026-03-02T13:17:27.582645420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 2 13:17:27.583588 containerd[1823]: time="2026-03-02T13:17:27.582657540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 2 13:17:27.583588 containerd[1823]: time="2026-03-02T13:17:27.582672100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 2 13:17:27.583588 containerd[1823]: time="2026-03-02T13:17:27.582684340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 2 13:17:27.583588 containerd[1823]: time="2026-03-02T13:17:27.582697700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 2 13:17:27.583588 containerd[1823]: time="2026-03-02T13:17:27.582710340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 2 13:17:27.583588 containerd[1823]: time="2026-03-02T13:17:27.582724020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 2 13:17:27.583588 containerd[1823]: time="2026-03-02T13:17:27.582749940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 2 13:17:27.583588 containerd[1823]: time="2026-03-02T13:17:27.582763340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 2 13:17:27.583588 containerd[1823]: time="2026-03-02T13:17:27.582775620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 2 13:17:27.583588 containerd[1823]: time="2026-03-02T13:17:27.582796420Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 2 13:17:27.583588 containerd[1823]: time="2026-03-02T13:17:27.582825700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 2 13:17:27.583888 containerd[1823]: time="2026-03-02T13:17:27.582840700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 2 13:17:27.583888 containerd[1823]: time="2026-03-02T13:17:27.582852100Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 2 13:17:27.583888 containerd[1823]: time="2026-03-02T13:17:27.582904100Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 2 13:17:27.583888 containerd[1823]: time="2026-03-02T13:17:27.582926460Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 2 13:17:27.583888 containerd[1823]: time="2026-03-02T13:17:27.582937700Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 2 13:17:27.583888 containerd[1823]: time="2026-03-02T13:17:27.582949100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 2 13:17:27.583888 containerd[1823]: time="2026-03-02T13:17:27.582958580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 2 13:17:27.583888 containerd[1823]: time="2026-03-02T13:17:27.582970460Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 2 13:17:27.583888 containerd[1823]: time="2026-03-02T13:17:27.582982140Z" level=info msg="NRI interface is disabled by configuration."
Mar 2 13:17:27.583888 containerd[1823]: time="2026-03-02T13:17:27.582994580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 2 13:17:27.584245 containerd[1823]: time="2026-03-02T13:17:27.584173820Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 2 13:17:27.584432 containerd[1823]: time="2026-03-02T13:17:27.584414500Z" level=info msg="Connect containerd service" Mar 2 13:17:27.584514 containerd[1823]: time="2026-03-02T13:17:27.584502500Z" level=info msg="using legacy CRI server" Mar 2 13:17:27.584564 containerd[1823]: time="2026-03-02T13:17:27.584553220Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 2 13:17:27.584729 containerd[1823]: time="2026-03-02T13:17:27.584713700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 2 13:17:27.585429 containerd[1823]: time="2026-03-02T13:17:27.585403620Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 2 13:17:27.585681 containerd[1823]: time="2026-03-02T13:17:27.585638060Z" level=info msg="Start subscribing containerd event" Mar 2 13:17:27.585720 containerd[1823]: time="2026-03-02T13:17:27.585698660Z" level=info msg="Start recovering state" Mar 2 13:17:27.585787 containerd[1823]: time="2026-03-02T13:17:27.585773100Z" level=info msg="Start event monitor" Mar 2 13:17:27.585814 containerd[1823]: time="2026-03-02T13:17:27.585787940Z" level=info msg="Start snapshots 
syncer" Mar 2 13:17:27.585814 containerd[1823]: time="2026-03-02T13:17:27.585798100Z" level=info msg="Start cni network conf syncer for default" Mar 2 13:17:27.585814 containerd[1823]: time="2026-03-02T13:17:27.585805740Z" level=info msg="Start streaming server" Mar 2 13:17:27.586014 containerd[1823]: time="2026-03-02T13:17:27.585995540Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 2 13:17:27.586121 containerd[1823]: time="2026-03-02T13:17:27.586108660Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 2 13:17:27.586264 containerd[1823]: time="2026-03-02T13:17:27.586247820Z" level=info msg="containerd successfully booted in 0.057281s" Mar 2 13:17:27.586912 systemd[1]: Started containerd.service - containerd container runtime. Mar 2 13:17:27.644443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:17:27.652167 (kubelet)[1954]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:17:27.654112 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 2 13:17:27.659137 systemd[1]: Startup finished in 13.567s (kernel) + 12.561s (userspace) = 26.128s. Mar 2 13:17:27.981971 login[1932]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:17:27.986937 login[1934]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:17:28.057734 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 2 13:17:28.064530 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 2 13:17:28.067742 systemd-logind[1790]: New session 1 of user core. Mar 2 13:17:28.071694 systemd-logind[1790]: New session 2 of user core. Mar 2 13:17:28.096846 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Mar 2 13:17:28.104905 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 2 13:17:28.110662 kubelet[1954]: E0302 13:17:28.110611 1954 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:17:28.115610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:17:28.115785 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:17:28.133909 (systemd)[1968]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 2 13:17:28.251099 systemd[1968]: Queued start job for default target default.target. Mar 2 13:17:28.251840 systemd[1968]: Created slice app.slice - User Application Slice. Mar 2 13:17:28.252024 systemd[1968]: Reached target paths.target - Paths. Mar 2 13:17:28.252106 systemd[1968]: Reached target timers.target - Timers. Mar 2 13:17:28.262310 systemd[1968]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 2 13:17:28.269492 systemd[1968]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 2 13:17:28.269557 systemd[1968]: Reached target sockets.target - Sockets. Mar 2 13:17:28.269570 systemd[1968]: Reached target basic.target - Basic System. Mar 2 13:17:28.269620 systemd[1968]: Reached target default.target - Main User Target. Mar 2 13:17:28.269645 systemd[1968]: Startup finished in 129ms. Mar 2 13:17:28.269732 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 2 13:17:28.270949 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 2 13:17:28.273686 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 2 13:17:29.183480 waagent[1935]: 2026-03-02T13:17:29.183388Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 2 13:17:29.188318 waagent[1935]: 2026-03-02T13:17:29.188246Z INFO Daemon Daemon OS: flatcar 4081.3.101 Mar 2 13:17:29.192096 waagent[1935]: 2026-03-02T13:17:29.192045Z INFO Daemon Daemon Python: 3.11.9 Mar 2 13:17:29.197235 waagent[1935]: 2026-03-02T13:17:29.196289Z INFO Daemon Daemon Run daemon Mar 2 13:17:29.199619 waagent[1935]: 2026-03-02T13:17:29.199574Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.101' Mar 2 13:17:29.206748 waagent[1935]: 2026-03-02T13:17:29.206693Z INFO Daemon Daemon Using waagent for provisioning Mar 2 13:17:29.211053 waagent[1935]: 2026-03-02T13:17:29.211010Z INFO Daemon Daemon Activate resource disk Mar 2 13:17:29.214728 waagent[1935]: 2026-03-02T13:17:29.214688Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 2 13:17:29.224441 waagent[1935]: 2026-03-02T13:17:29.224377Z INFO Daemon Daemon Found device: None Mar 2 13:17:29.228346 waagent[1935]: 2026-03-02T13:17:29.228300Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 2 13:17:29.235715 waagent[1935]: 2026-03-02T13:17:29.235671Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 2 13:17:29.247382 waagent[1935]: 2026-03-02T13:17:29.247321Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 2 13:17:29.252568 waagent[1935]: 2026-03-02T13:17:29.252520Z INFO Daemon Daemon Running default provisioning handler Mar 2 13:17:29.264415 waagent[1935]: 2026-03-02T13:17:29.264350Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Mar 2 13:17:29.276742 waagent[1935]: 2026-03-02T13:17:29.276677Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 2 13:17:29.285158 waagent[1935]: 2026-03-02T13:17:29.285104Z INFO Daemon Daemon cloud-init is enabled: False Mar 2 13:17:29.289439 waagent[1935]: 2026-03-02T13:17:29.289395Z INFO Daemon Daemon Copying ovf-env.xml Mar 2 13:17:29.386540 waagent[1935]: 2026-03-02T13:17:29.386436Z INFO Daemon Daemon Successfully mounted dvd Mar 2 13:17:29.401140 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 2 13:17:29.403099 waagent[1935]: 2026-03-02T13:17:29.403014Z INFO Daemon Daemon Detect protocol endpoint Mar 2 13:17:29.407428 waagent[1935]: 2026-03-02T13:17:29.407370Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 2 13:17:29.412482 waagent[1935]: 2026-03-02T13:17:29.412436Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Mar 2 13:17:29.418359 waagent[1935]: 2026-03-02T13:17:29.418318Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 2 13:17:29.422971 waagent[1935]: 2026-03-02T13:17:29.422928Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 2 13:17:29.427219 waagent[1935]: 2026-03-02T13:17:29.427175Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 2 13:17:29.459280 waagent[1935]: 2026-03-02T13:17:29.459192Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 2 13:17:29.464947 waagent[1935]: 2026-03-02T13:17:29.464918Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 2 13:17:29.469287 waagent[1935]: 2026-03-02T13:17:29.469234Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 2 13:17:29.674768 waagent[1935]: 2026-03-02T13:17:29.674659Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 2 13:17:29.680281 waagent[1935]: 2026-03-02T13:17:29.680226Z INFO Daemon Daemon Forcing an update of the goal state. 
Mar 2 13:17:29.688647 waagent[1935]: 2026-03-02T13:17:29.688598Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 2 13:17:29.708069 waagent[1935]: 2026-03-02T13:17:29.708030Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 2 13:17:29.712895 waagent[1935]: 2026-03-02T13:17:29.712825Z INFO Daemon Mar 2 13:17:29.715157 waagent[1935]: 2026-03-02T13:17:29.715118Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 49cfb29b-5ee1-4cd3-bcdc-abc708edd8eb eTag: 2323345806440075647 source: Fabric] Mar 2 13:17:29.724397 waagent[1935]: 2026-03-02T13:17:29.724355Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 2 13:17:29.730088 waagent[1935]: 2026-03-02T13:17:29.730046Z INFO Daemon Mar 2 13:17:29.732364 waagent[1935]: 2026-03-02T13:17:29.732327Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 2 13:17:29.742159 waagent[1935]: 2026-03-02T13:17:29.742125Z INFO Daemon Daemon Downloading artifacts profile blob Mar 2 13:17:29.877897 waagent[1935]: 2026-03-02T13:17:29.877804Z INFO Daemon Downloaded certificate {'thumbprint': '76111F100B6F104AD318502B00367D8657BFA26D', 'hasPrivateKey': True} Mar 2 13:17:29.886626 waagent[1935]: 2026-03-02T13:17:29.886574Z INFO Daemon Fetch goal state completed Mar 2 13:17:29.932550 waagent[1935]: 2026-03-02T13:17:29.932492Z INFO Daemon Daemon Starting provisioning Mar 2 13:17:29.936683 waagent[1935]: 2026-03-02T13:17:29.936626Z INFO Daemon Daemon Handle ovf-env.xml. 
Mar 2 13:17:29.940617 waagent[1935]: 2026-03-02T13:17:29.940574Z INFO Daemon Daemon Set hostname [ci-4081.3.101-5317e0e64c] Mar 2 13:17:29.947465 waagent[1935]: 2026-03-02T13:17:29.947407Z INFO Daemon Daemon Publish hostname [ci-4081.3.101-5317e0e64c] Mar 2 13:17:29.952750 waagent[1935]: 2026-03-02T13:17:29.952696Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 2 13:17:29.958039 waagent[1935]: 2026-03-02T13:17:29.957991Z INFO Daemon Daemon Primary interface is [eth0] Mar 2 13:17:30.001915 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:17:30.001921 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 13:17:30.001968 systemd-networkd[1401]: eth0: DHCP lease lost Mar 2 13:17:30.003403 waagent[1935]: 2026-03-02T13:17:30.003318Z INFO Daemon Daemon Create user account if not exists Mar 2 13:17:30.008139 waagent[1935]: 2026-03-02T13:17:30.008081Z INFO Daemon Daemon User core already exists, skip useradd Mar 2 13:17:30.012712 waagent[1935]: 2026-03-02T13:17:30.012661Z INFO Daemon Daemon Configure sudoer Mar 2 13:17:30.016032 systemd-networkd[1401]: eth0: DHCPv6 lease lost Mar 2 13:17:30.016950 waagent[1935]: 2026-03-02T13:17:30.016897Z INFO Daemon Daemon Configure sshd Mar 2 13:17:30.020759 waagent[1935]: 2026-03-02T13:17:30.020710Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 2 13:17:30.030740 waagent[1935]: 2026-03-02T13:17:30.030683Z INFO Daemon Daemon Deploy ssh public key. 
Mar 2 13:17:30.045264 systemd-networkd[1401]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 2 13:17:31.145998 waagent[1935]: 2026-03-02T13:17:31.145946Z INFO Daemon Daemon Provisioning complete Mar 2 13:17:31.163346 waagent[1935]: 2026-03-02T13:17:31.163294Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 2 13:17:31.168861 waagent[1935]: 2026-03-02T13:17:31.168808Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 2 13:17:31.177436 waagent[1935]: 2026-03-02T13:17:31.177382Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 2 13:17:31.317793 waagent[2024]: 2026-03-02T13:17:31.317060Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 2 13:17:31.317793 waagent[2024]: 2026-03-02T13:17:31.317252Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.101 Mar 2 13:17:31.317793 waagent[2024]: 2026-03-02T13:17:31.317317Z INFO ExtHandler ExtHandler Python: 3.11.9 Mar 2 13:17:31.366453 waagent[2024]: 2026-03-02T13:17:31.366355Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.101; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 2 13:17:31.366816 waagent[2024]: 2026-03-02T13:17:31.366772Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 2 13:17:31.366958 waagent[2024]: 2026-03-02T13:17:31.366925Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 2 13:17:31.376009 waagent[2024]: 2026-03-02T13:17:31.375911Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 2 13:17:31.382652 waagent[2024]: 2026-03-02T13:17:31.382596Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 2 13:17:31.383401 waagent[2024]: 2026-03-02T13:17:31.383354Z INFO ExtHandler Mar 2 13:17:31.384244 waagent[2024]: 2026-03-02T13:17:31.383553Z 
INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e01f7b0d-f75f-483f-8ec9-17084ba771d5 eTag: 2323345806440075647 source: Fabric] Mar 2 13:17:31.384244 waagent[2024]: 2026-03-02T13:17:31.383914Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 2 13:17:31.384737 waagent[2024]: 2026-03-02T13:17:31.384686Z INFO ExtHandler Mar 2 13:17:31.384885 waagent[2024]: 2026-03-02T13:17:31.384853Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 2 13:17:31.389637 waagent[2024]: 2026-03-02T13:17:31.389588Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 2 13:17:31.465103 waagent[2024]: 2026-03-02T13:17:31.464970Z INFO ExtHandler Downloaded certificate {'thumbprint': '76111F100B6F104AD318502B00367D8657BFA26D', 'hasPrivateKey': True} Mar 2 13:17:31.465843 waagent[2024]: 2026-03-02T13:17:31.465793Z INFO ExtHandler Fetch goal state completed Mar 2 13:17:31.482205 waagent[2024]: 2026-03-02T13:17:31.482118Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2024 Mar 2 13:17:31.483242 waagent[2024]: 2026-03-02T13:17:31.482530Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 2 13:17:31.484483 waagent[2024]: 2026-03-02T13:17:31.484423Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.101', '', 'Flatcar Container Linux by Kinvolk'] Mar 2 13:17:31.484976 waagent[2024]: 2026-03-02T13:17:31.484933Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 2 13:17:31.508024 waagent[2024]: 2026-03-02T13:17:31.507983Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 2 13:17:31.508416 waagent[2024]: 2026-03-02T13:17:31.508363Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 2 13:17:31.514873 waagent[2024]: 
2026-03-02T13:17:31.514814Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 2 13:17:31.522505 systemd[1]: Reloading requested from client PID 2037 ('systemctl') (unit waagent.service)... Mar 2 13:17:31.522519 systemd[1]: Reloading... Mar 2 13:17:31.600252 zram_generator::config[2074]: No configuration found. Mar 2 13:17:31.720653 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:17:31.801748 systemd[1]: Reloading finished in 278 ms. Mar 2 13:17:31.821148 waagent[2024]: 2026-03-02T13:17:31.820998Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 2 13:17:31.829061 systemd[1]: Reloading requested from client PID 2130 ('systemctl') (unit waagent.service)... Mar 2 13:17:31.829077 systemd[1]: Reloading... Mar 2 13:17:31.920247 zram_generator::config[2167]: No configuration found. Mar 2 13:17:32.034923 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:17:32.110074 systemd[1]: Reloading finished in 280 ms. Mar 2 13:17:32.129283 waagent[2024]: 2026-03-02T13:17:32.128429Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 2 13:17:32.129283 waagent[2024]: 2026-03-02T13:17:32.128602Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 2 13:17:32.550716 waagent[2024]: 2026-03-02T13:17:32.550622Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Mar 2 13:17:32.551301 waagent[2024]: 2026-03-02T13:17:32.551253Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 2 13:17:32.552099 waagent[2024]: 2026-03-02T13:17:32.552043Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 2 13:17:32.552527 waagent[2024]: 2026-03-02T13:17:32.552408Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 2 13:17:32.552860 waagent[2024]: 2026-03-02T13:17:32.552805Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 2 13:17:32.553796 waagent[2024]: 2026-03-02T13:17:32.552928Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 2 13:17:32.553796 waagent[2024]: 2026-03-02T13:17:32.553015Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 2 13:17:32.553796 waagent[2024]: 2026-03-02T13:17:32.553155Z INFO EnvHandler ExtHandler Configure routes Mar 2 13:17:32.553796 waagent[2024]: 2026-03-02T13:17:32.553214Z INFO EnvHandler ExtHandler Gateway:None Mar 2 13:17:32.553796 waagent[2024]: 2026-03-02T13:17:32.553291Z INFO EnvHandler ExtHandler Routes:None Mar 2 13:17:32.554073 waagent[2024]: 2026-03-02T13:17:32.554030Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 2 13:17:32.554384 waagent[2024]: 2026-03-02T13:17:32.554341Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Mar 2 13:17:32.554547 waagent[2024]: 2026-03-02T13:17:32.554509Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 2 13:17:32.554808 waagent[2024]: 2026-03-02T13:17:32.554766Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 2 13:17:32.554808 waagent[2024]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 2 13:17:32.554808 waagent[2024]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 2 13:17:32.554808 waagent[2024]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 2 13:17:32.554808 waagent[2024]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 2 13:17:32.554808 waagent[2024]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 2 13:17:32.554808 waagent[2024]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 2 13:17:32.555331 waagent[2024]: 2026-03-02T13:17:32.555264Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 2 13:17:32.555915 waagent[2024]: 2026-03-02T13:17:32.555864Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 2 13:17:32.556111 waagent[2024]: 2026-03-02T13:17:32.556068Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 2 13:17:32.556195 waagent[2024]: 2026-03-02T13:17:32.556134Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Mar 2 13:17:32.567663 waagent[2024]: 2026-03-02T13:17:32.567607Z INFO ExtHandler ExtHandler Mar 2 13:17:32.567924 waagent[2024]: 2026-03-02T13:17:32.567885Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 4cc307ac-8a97-43ba-b600-fd0be51177f9 correlation 0c97c474-6445-4701-a692-ec5035dca37f created: 2026-03-02T13:16:24.476868Z] Mar 2 13:17:32.568448 waagent[2024]: 2026-03-02T13:17:32.568400Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 2 13:17:32.569786 waagent[2024]: 2026-03-02T13:17:32.569051Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Mar 2 13:17:32.598915 waagent[2024]: 2026-03-02T13:17:32.598473Z INFO MonitorHandler ExtHandler Network interfaces: Mar 2 13:17:32.598915 waagent[2024]: Executing ['ip', '-a', '-o', 'link']: Mar 2 13:17:32.598915 waagent[2024]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 2 13:17:32.598915 waagent[2024]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:87:55:86 brd ff:ff:ff:ff:ff:ff Mar 2 13:17:32.598915 waagent[2024]: 3: enP47096s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:87:55:86 brd ff:ff:ff:ff:ff:ff\ altname enP47096p0s2 Mar 2 13:17:32.598915 waagent[2024]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 2 13:17:32.598915 waagent[2024]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 2 13:17:32.598915 waagent[2024]: 2: eth0 inet 10.200.20.18/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 2 13:17:32.598915 waagent[2024]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 2 13:17:32.598915 waagent[2024]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 2 
13:17:32.598915 waagent[2024]: 2: eth0 inet6 fe80::7eed:8dff:fe87:5586/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 2 13:17:32.620050 waagent[2024]: 2026-03-02T13:17:32.619909Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 0A7E6443-ECFE-465A-8227-2D690EEAA4D4;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 2 13:17:32.668387 waagent[2024]: 2026-03-02T13:17:32.667895Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Mar 2 13:17:32.668387 waagent[2024]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:17:32.668387 waagent[2024]: pkts bytes target prot opt in out source destination Mar 2 13:17:32.668387 waagent[2024]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:17:32.668387 waagent[2024]: pkts bytes target prot opt in out source destination Mar 2 13:17:32.668387 waagent[2024]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:17:32.668387 waagent[2024]: pkts bytes target prot opt in out source destination Mar 2 13:17:32.668387 waagent[2024]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 2 13:17:32.668387 waagent[2024]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 2 13:17:32.668387 waagent[2024]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 2 13:17:32.671105 waagent[2024]: 2026-03-02T13:17:32.671036Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 2 13:17:32.671105 waagent[2024]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:17:32.671105 waagent[2024]: pkts bytes target prot opt in out source destination Mar 2 13:17:32.671105 waagent[2024]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:17:32.671105 waagent[2024]: pkts bytes target prot opt in out source destination Mar 2 13:17:32.671105 waagent[2024]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:17:32.671105 
waagent[2024]: pkts bytes target prot opt in out source destination Mar 2 13:17:32.671105 waagent[2024]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 2 13:17:32.671105 waagent[2024]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 2 13:17:32.671105 waagent[2024]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 2 13:17:32.671389 waagent[2024]: 2026-03-02T13:17:32.671356Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 2 13:17:38.364519 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 2 13:17:38.371709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:17:38.488479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:17:38.490578 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:17:38.573493 kubelet[2266]: E0302 13:17:38.573421 2266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:17:38.579451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:17:38.579646 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:17:40.567575 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 2 13:17:40.578661 systemd[1]: Started sshd@0-10.200.20.18:22-10.200.16.10:47242.service - OpenSSH per-connection server daemon (10.200.16.10:47242). 
Mar 2 13:17:41.180012 sshd[2274]: Accepted publickey for core from 10.200.16.10 port 47242 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:17:41.181427 sshd[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:17:41.186367 systemd-logind[1790]: New session 3 of user core. Mar 2 13:17:41.192536 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 2 13:17:41.614452 systemd[1]: Started sshd@1-10.200.20.18:22-10.200.16.10:47254.service - OpenSSH per-connection server daemon (10.200.16.10:47254). Mar 2 13:17:42.098482 sshd[2279]: Accepted publickey for core from 10.200.16.10 port 47254 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:17:42.099512 sshd[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:17:42.103429 systemd-logind[1790]: New session 4 of user core. Mar 2 13:17:42.109537 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 2 13:17:42.445429 sshd[2279]: pam_unix(sshd:session): session closed for user core Mar 2 13:17:42.448906 systemd[1]: sshd@1-10.200.20.18:22-10.200.16.10:47254.service: Deactivated successfully. Mar 2 13:17:42.452034 systemd[1]: session-4.scope: Deactivated successfully. Mar 2 13:17:42.452997 systemd-logind[1790]: Session 4 logged out. Waiting for processes to exit. Mar 2 13:17:42.454087 systemd-logind[1790]: Removed session 4. Mar 2 13:17:42.530440 systemd[1]: Started sshd@2-10.200.20.18:22-10.200.16.10:47268.service - OpenSSH per-connection server daemon (10.200.16.10:47268). Mar 2 13:17:43.009763 sshd[2287]: Accepted publickey for core from 10.200.16.10 port 47268 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:17:43.010695 sshd[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:17:43.014407 systemd-logind[1790]: New session 5 of user core. Mar 2 13:17:43.021543 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 2 13:17:43.354642 sshd[2287]: pam_unix(sshd:session): session closed for user core Mar 2 13:17:43.358289 systemd[1]: sshd@2-10.200.20.18:22-10.200.16.10:47268.service: Deactivated successfully. Mar 2 13:17:43.361007 systemd[1]: session-5.scope: Deactivated successfully. Mar 2 13:17:43.361974 systemd-logind[1790]: Session 5 logged out. Waiting for processes to exit. Mar 2 13:17:43.362733 systemd-logind[1790]: Removed session 5. Mar 2 13:17:43.439425 systemd[1]: Started sshd@3-10.200.20.18:22-10.200.16.10:47284.service - OpenSSH per-connection server daemon (10.200.16.10:47284). Mar 2 13:17:43.923099 sshd[2295]: Accepted publickey for core from 10.200.16.10 port 47284 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:17:43.923953 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:17:43.928033 systemd-logind[1790]: New session 6 of user core. Mar 2 13:17:43.939441 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 2 13:17:44.274444 sshd[2295]: pam_unix(sshd:session): session closed for user core Mar 2 13:17:44.277872 systemd[1]: sshd@3-10.200.20.18:22-10.200.16.10:47284.service: Deactivated successfully. Mar 2 13:17:44.281744 systemd[1]: session-6.scope: Deactivated successfully. Mar 2 13:17:44.282911 systemd-logind[1790]: Session 6 logged out. Waiting for processes to exit. Mar 2 13:17:44.284030 systemd-logind[1790]: Removed session 6. Mar 2 13:17:44.360442 systemd[1]: Started sshd@4-10.200.20.18:22-10.200.16.10:47288.service - OpenSSH per-connection server daemon (10.200.16.10:47288). Mar 2 13:17:44.846825 sshd[2303]: Accepted publickey for core from 10.200.16.10 port 47288 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:17:44.847631 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:17:44.852010 systemd-logind[1790]: New session 7 of user core. 
Mar 2 13:17:44.858592 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 2 13:17:45.268911 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 2 13:17:45.269191 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:17:45.281119 sudo[2307]: pam_unix(sudo:session): session closed for user root Mar 2 13:17:45.358065 sshd[2303]: pam_unix(sshd:session): session closed for user core Mar 2 13:17:45.362713 systemd[1]: sshd@4-10.200.20.18:22-10.200.16.10:47288.service: Deactivated successfully. Mar 2 13:17:45.366195 systemd[1]: session-7.scope: Deactivated successfully. Mar 2 13:17:45.367106 systemd-logind[1790]: Session 7 logged out. Waiting for processes to exit. Mar 2 13:17:45.368153 systemd-logind[1790]: Removed session 7. Mar 2 13:17:45.451715 systemd[1]: Started sshd@5-10.200.20.18:22-10.200.16.10:47296.service - OpenSSH per-connection server daemon (10.200.16.10:47296). Mar 2 13:17:45.932324 sshd[2312]: Accepted publickey for core from 10.200.16.10 port 47296 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:17:45.933770 sshd[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:17:45.937409 systemd-logind[1790]: New session 8 of user core. Mar 2 13:17:45.945462 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 2 13:17:46.206680 sudo[2317]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 2 13:17:46.206948 sudo[2317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:17:46.210418 sudo[2317]: pam_unix(sudo:session): session closed for user root Mar 2 13:17:46.215666 sudo[2316]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 2 13:17:46.215942 sudo[2316]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:17:46.235682 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 2 13:17:46.237420 auditctl[2320]: No rules Mar 2 13:17:46.238590 systemd[1]: audit-rules.service: Deactivated successfully. Mar 2 13:17:46.238863 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 2 13:17:46.242733 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 2 13:17:46.265591 augenrules[2339]: No rules Mar 2 13:17:46.267075 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 2 13:17:46.269438 sudo[2316]: pam_unix(sudo:session): session closed for user root Mar 2 13:17:46.346441 sshd[2312]: pam_unix(sshd:session): session closed for user core Mar 2 13:17:46.350006 systemd-logind[1790]: Session 8 logged out. Waiting for processes to exit. Mar 2 13:17:46.350630 systemd[1]: sshd@5-10.200.20.18:22-10.200.16.10:47296.service: Deactivated successfully. Mar 2 13:17:46.353116 systemd[1]: session-8.scope: Deactivated successfully. Mar 2 13:17:46.354481 systemd-logind[1790]: Removed session 8. Mar 2 13:17:46.435459 systemd[1]: Started sshd@6-10.200.20.18:22-10.200.16.10:47300.service - OpenSSH per-connection server daemon (10.200.16.10:47300). 
Mar 2 13:17:46.923028 sshd[2348]: Accepted publickey for core from 10.200.16.10 port 47300 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:17:46.924436 sshd[2348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:17:46.928367 systemd-logind[1790]: New session 9 of user core. Mar 2 13:17:46.939542 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 2 13:17:47.201006 sudo[2352]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 2 13:17:47.201565 sudo[2352]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:17:48.298661 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 2 13:17:48.298671 (dockerd)[2368]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 2 13:17:48.614620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 2 13:17:48.620628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:17:49.145403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:17:49.154537 (kubelet)[2384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:17:49.193442 kubelet[2384]: E0302 13:17:49.193394 2384 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:17:49.198529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:17:49.198692 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 2 13:17:49.207946 dockerd[2368]: time="2026-03-02T13:17:49.207673100Z" level=info msg="Starting up" Mar 2 13:17:49.963206 dockerd[2368]: time="2026-03-02T13:17:49.963151820Z" level=info msg="Loading containers: start." Mar 2 13:17:50.162619 chronyd[1778]: Selected source PHC0 Mar 2 13:17:50.163230 kernel: Initializing XFRM netlink socket Mar 2 13:17:50.317719 systemd-networkd[1401]: docker0: Link UP Mar 2 13:17:50.353605 dockerd[2368]: time="2026-03-02T13:17:50.353438985Z" level=info msg="Loading containers: done." Mar 2 13:17:50.365143 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck910547257-merged.mount: Deactivated successfully. Mar 2 13:17:50.380824 dockerd[2368]: time="2026-03-02T13:17:50.380784977Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 2 13:17:50.380922 dockerd[2368]: time="2026-03-02T13:17:50.380903853Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 2 13:17:50.381054 dockerd[2368]: time="2026-03-02T13:17:50.381036488Z" level=info msg="Daemon has completed initialization" Mar 2 13:17:50.445410 dockerd[2368]: time="2026-03-02T13:17:50.445355091Z" level=info msg="API listen on /run/docker.sock" Mar 2 13:17:50.445883 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 2 13:17:50.866890 containerd[1823]: time="2026-03-02T13:17:50.866582900Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 2 13:17:51.726805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2286068212.mount: Deactivated successfully. 
Mar 2 13:17:53.010258 containerd[1823]: time="2026-03-02T13:17:53.009996017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:17:53.012887 containerd[1823]: time="2026-03-02T13:17:53.012619855Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390174" Mar 2 13:17:53.017616 containerd[1823]: time="2026-03-02T13:17:53.016316493Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:17:53.021991 containerd[1823]: time="2026-03-02T13:17:53.021572090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:17:53.023018 containerd[1823]: time="2026-03-02T13:17:53.022768809Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 2.15613331s" Mar 2 13:17:53.023018 containerd[1823]: time="2026-03-02T13:17:53.022806369Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\"" Mar 2 13:17:53.023720 containerd[1823]: time="2026-03-02T13:17:53.023688609Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 2 13:17:54.452031 containerd[1823]: time="2026-03-02T13:17:54.451981819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:17:54.459129 containerd[1823]: time="2026-03-02T13:17:54.458622615Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552106" Mar 2 13:17:54.459129 containerd[1823]: time="2026-03-02T13:17:54.458722255Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:17:54.464131 containerd[1823]: time="2026-03-02T13:17:54.464088332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:17:54.465273 containerd[1823]: time="2026-03-02T13:17:54.465240051Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.441515522s" Mar 2 13:17:54.465362 containerd[1823]: time="2026-03-02T13:17:54.465347291Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\"" Mar 2 13:17:54.465934 containerd[1823]: time="2026-03-02T13:17:54.465900291Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 2 13:17:55.915659 containerd[1823]: time="2026-03-02T13:17:55.915602449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:17:55.918950 containerd[1823]: time="2026-03-02T13:17:55.918917207Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301305" Mar 2 13:17:55.922890 containerd[1823]: time="2026-03-02T13:17:55.922864404Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:17:55.927461 containerd[1823]: time="2026-03-02T13:17:55.927412642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:17:55.928489 containerd[1823]: time="2026-03-02T13:17:55.928462601Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 1.46237639s" Mar 2 13:17:55.928659 containerd[1823]: time="2026-03-02T13:17:55.928577721Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\"" Mar 2 13:17:55.929434 containerd[1823]: time="2026-03-02T13:17:55.929113520Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 2 13:17:57.312775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1002237311.mount: Deactivated successfully. 
Mar 2 13:17:57.640324 containerd[1823]: time="2026-03-02T13:17:57.640266679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:17:57.643023 containerd[1823]: time="2026-03-02T13:17:57.642986397Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148870" Mar 2 13:17:57.646183 containerd[1823]: time="2026-03-02T13:17:57.646134036Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:17:57.651022 containerd[1823]: time="2026-03-02T13:17:57.650984233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:17:57.652104 containerd[1823]: time="2026-03-02T13:17:57.651740992Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.722596032s" Mar 2 13:17:57.652104 containerd[1823]: time="2026-03-02T13:17:57.651771632Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\"" Mar 2 13:17:57.652194 containerd[1823]: time="2026-03-02T13:17:57.652158592Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 2 13:17:58.394808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2731118249.mount: Deactivated successfully. Mar 2 13:17:59.364508 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Mar 2 13:17:59.370417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:17:59.498430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:17:59.501374 (kubelet)[2655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:17:59.569227 kubelet[2655]: E0302 13:17:59.569135 2655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:17:59.574406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:17:59.574575 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:18:00.284653 containerd[1823]: time="2026-03-02T13:18:00.284597127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:00.288267 containerd[1823]: time="2026-03-02T13:18:00.287997006Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Mar 2 13:18:00.291882 containerd[1823]: time="2026-03-02T13:18:00.291398486Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:00.296963 containerd[1823]: time="2026-03-02T13:18:00.296920245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:00.298239 containerd[1823]: time="2026-03-02T13:18:00.298199925Z" level=info msg="Pulled image 
\"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.646016373s" Mar 2 13:18:00.298344 containerd[1823]: time="2026-03-02T13:18:00.298327685Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Mar 2 13:18:00.298805 containerd[1823]: time="2026-03-02T13:18:00.298778124Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 2 13:18:00.884274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1181909003.mount: Deactivated successfully. Mar 2 13:18:00.905252 containerd[1823]: time="2026-03-02T13:18:00.904885053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:00.908611 containerd[1823]: time="2026-03-02T13:18:00.908435332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 2 13:18:00.912243 containerd[1823]: time="2026-03-02T13:18:00.911477572Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:00.916144 containerd[1823]: time="2026-03-02T13:18:00.916096251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:00.917235 containerd[1823]: time="2026-03-02T13:18:00.916804691Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", 
repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 617.991247ms" Mar 2 13:18:00.917235 containerd[1823]: time="2026-03-02T13:18:00.916839691Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 2 13:18:00.917893 containerd[1823]: time="2026-03-02T13:18:00.917861210Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 2 13:18:01.661995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3262185840.mount: Deactivated successfully. Mar 2 13:18:04.518967 containerd[1823]: time="2026-03-02T13:18:04.518915303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:04.522257 containerd[1823]: time="2026-03-02T13:18:04.522211141Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780" Mar 2 13:18:04.527033 containerd[1823]: time="2026-03-02T13:18:04.526983059Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:04.533803 containerd[1823]: time="2026-03-02T13:18:04.533284415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:04.536772 containerd[1823]: time="2026-03-02T13:18:04.534466454Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size 
\"21882972\" in 3.616565404s" Mar 2 13:18:04.536772 containerd[1823]: time="2026-03-02T13:18:04.534500974Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Mar 2 13:18:07.849235 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Mar 2 13:18:09.499602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:18:09.517638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:18:09.555390 systemd[1]: Reloading requested from client PID 2768 ('systemctl') (unit session-9.scope)... Mar 2 13:18:09.555572 systemd[1]: Reloading... Mar 2 13:18:09.642245 zram_generator::config[2805]: No configuration found. Mar 2 13:18:09.767153 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:18:09.845051 systemd[1]: Reloading finished in 289 ms. Mar 2 13:18:09.895009 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:18:09.898401 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 13:18:09.898656 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:18:09.905487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:18:10.074443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:18:10.080729 (kubelet)[2890]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 13:18:10.113242 kubelet[2890]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 2 13:18:10.113242 kubelet[2890]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 2 13:18:10.113242 kubelet[2890]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:18:10.113242 kubelet[2890]: I0302 13:18:10.112919 2890 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 13:18:11.355059 kubelet[2890]: I0302 13:18:11.355015 2890 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 2 13:18:11.355059 kubelet[2890]: I0302 13:18:11.355048 2890 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 13:18:11.355518 kubelet[2890]: I0302 13:18:11.355293 2890 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 13:18:11.379367 kubelet[2890]: E0302 13:18:11.379316 2890 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:18:11.380410 kubelet[2890]: I0302 13:18:11.380253 2890 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:18:11.385916 kubelet[2890]: E0302 13:18:11.385882 2890 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 13:18:11.385916 kubelet[2890]: I0302 13:18:11.385914 2890 server.go:1423] "CRI implementation should be 
updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 2 13:18:11.389946 kubelet[2890]: I0302 13:18:11.389918 2890 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 2 13:18:11.392158 kubelet[2890]: I0302 13:18:11.391485 2890 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 13:18:11.392158 kubelet[2890]: I0302 13:18:11.391538 2890 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.101-5317e0e64c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPid
sLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 2 13:18:11.392158 kubelet[2890]: I0302 13:18:11.391822 2890 topology_manager.go:138] "Creating topology manager with none policy" Mar 2 13:18:11.392158 kubelet[2890]: I0302 13:18:11.391833 2890 container_manager_linux.go:303] "Creating device plugin manager" Mar 2 13:18:11.392158 kubelet[2890]: I0302 13:18:11.391983 2890 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:18:11.397676 kubelet[2890]: I0302 13:18:11.397649 2890 kubelet.go:480] "Attempting to sync node with API server" Mar 2 13:18:11.397811 kubelet[2890]: I0302 13:18:11.397800 2890 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 13:18:11.397885 kubelet[2890]: I0302 13:18:11.397878 2890 kubelet.go:386] "Adding apiserver pod source" Mar 2 13:18:11.397941 kubelet[2890]: I0302 13:18:11.397933 2890 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 13:18:11.400108 kubelet[2890]: I0302 13:18:11.400075 2890 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 13:18:11.400673 kubelet[2890]: I0302 13:18:11.400649 2890 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 13:18:11.400727 kubelet[2890]: W0302 13:18:11.400719 2890 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 2 13:18:11.402958 kubelet[2890]: I0302 13:18:11.402937 2890 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 2 13:18:11.403041 kubelet[2890]: I0302 13:18:11.402985 2890 server.go:1289] "Started kubelet" Mar 2 13:18:11.403204 kubelet[2890]: E0302 13:18:11.403181 2890 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.101-5317e0e64c&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 13:18:11.407467 kubelet[2890]: E0302 13:18:11.407443 2890 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 13:18:11.408960 kubelet[2890]: I0302 13:18:11.408927 2890 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 2 13:18:11.410245 kubelet[2890]: I0302 13:18:11.409942 2890 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 13:18:11.410724 kubelet[2890]: I0302 13:18:11.410694 2890 server.go:317] "Adding debug handlers to kubelet server" Mar 2 13:18:11.411547 kubelet[2890]: I0302 13:18:11.411497 2890 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 13:18:11.411728 kubelet[2890]: I0302 13:18:11.411709 2890 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 13:18:11.413136 kubelet[2890]: I0302 13:18:11.413105 2890 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 13:18:11.415828 
kubelet[2890]: E0302 13:18:11.414599 2890 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.18:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.101-5317e0e64c.189908b2928d9c16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.101-5317e0e64c,UID:ci-4081.3.101-5317e0e64c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.101-5317e0e64c,},FirstTimestamp:2026-03-02 13:18:11.402955798 +0000 UTC m=+1.318926498,LastTimestamp:2026-03-02 13:18:11.402955798 +0000 UTC m=+1.318926498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.101-5317e0e64c,}" Mar 2 13:18:11.417598 kubelet[2890]: I0302 13:18:11.417416 2890 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 2 13:18:11.417598 kubelet[2890]: I0302 13:18:11.417534 2890 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 2 13:18:11.417598 kubelet[2890]: I0302 13:18:11.417579 2890 reconciler.go:26] "Reconciler: start to sync state" Mar 2 13:18:11.418749 kubelet[2890]: E0302 13:18:11.418061 2890 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 13:18:11.418749 kubelet[2890]: E0302 13:18:11.418298 2890 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 13:18:11.418749 kubelet[2890]: I0302 13:18:11.418442 2890 factory.go:223] Registration of the systemd container factory successfully Mar 2 13:18:11.418749 kubelet[2890]: I0302 13:18:11.418525 2890 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 13:18:11.419328 kubelet[2890]: E0302 13:18:11.419211 2890 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.101-5317e0e64c\" not found" Mar 2 13:18:11.419421 kubelet[2890]: E0302 13:18:11.419399 2890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-5317e0e64c?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="200ms" Mar 2 13:18:11.419895 kubelet[2890]: I0302 13:18:11.419872 2890 factory.go:223] Registration of the containerd container factory successfully Mar 2 13:18:11.452764 kubelet[2890]: I0302 13:18:11.452725 2890 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 2 13:18:11.454235 kubelet[2890]: I0302 13:18:11.454202 2890 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 2 13:18:11.454309 kubelet[2890]: I0302 13:18:11.454245 2890 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 2 13:18:11.454309 kubelet[2890]: I0302 13:18:11.454266 2890 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 2 13:18:11.454309 kubelet[2890]: I0302 13:18:11.454274 2890 kubelet.go:2436] "Starting kubelet main sync loop" Mar 2 13:18:11.454389 kubelet[2890]: E0302 13:18:11.454314 2890 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 13:18:11.455953 kubelet[2890]: E0302 13:18:11.455927 2890 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 13:18:11.519749 kubelet[2890]: E0302 13:18:11.519717 2890 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.101-5317e0e64c\" not found" Mar 2 13:18:11.542333 kubelet[2890]: I0302 13:18:11.542305 2890 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 2 13:18:11.542768 kubelet[2890]: I0302 13:18:11.542405 2890 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 2 13:18:11.542768 kubelet[2890]: I0302 13:18:11.542426 2890 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:18:11.548563 kubelet[2890]: I0302 13:18:11.548546 2890 policy_none.go:49] "None policy: Start" Mar 2 13:18:11.548668 kubelet[2890]: I0302 13:18:11.548658 2890 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 2 13:18:11.548723 kubelet[2890]: I0302 13:18:11.548716 2890 state_mem.go:35] "Initializing new in-memory state store" Mar 2 13:18:11.552289 update_engine[1796]: I20260302 13:18:11.552241 1796 update_attempter.cc:509] Updating boot flags... 
Mar 2 13:18:11.554890 kubelet[2890]: E0302 13:18:11.554410 2890 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 2 13:18:11.558255 kubelet[2890]: E0302 13:18:11.557670 2890 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 13:18:11.558255 kubelet[2890]: I0302 13:18:11.557863 2890 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 2 13:18:11.558255 kubelet[2890]: I0302 13:18:11.557872 2890 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 13:18:11.560442 kubelet[2890]: I0302 13:18:11.560426 2890 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 2 13:18:11.562442 kubelet[2890]: E0302 13:18:11.562422 2890 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 2 13:18:11.562663 kubelet[2890]: E0302 13:18:11.562626 2890 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.101-5317e0e64c\" not found" Mar 2 13:18:11.601255 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2937) Mar 2 13:18:11.620039 kubelet[2890]: E0302 13:18:11.620007 2890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-5317e0e64c?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="400ms" Mar 2 13:18:11.660877 kubelet[2890]: I0302 13:18:11.660853 2890 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:11.661757 kubelet[2890]: E0302 13:18:11.661734 2890 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:11.766417 kubelet[2890]: E0302 13:18:11.765383 2890 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-5317e0e64c\" not found" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:11.772758 kubelet[2890]: E0302 13:18:11.772711 2890 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-5317e0e64c\" not found" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:11.780285 kubelet[2890]: E0302 13:18:11.780259 2890 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-5317e0e64c\" not found" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:11.822293 kubelet[2890]: I0302 13:18:11.822260 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/330e5a30f21e52e64c40cdbac9487974-ca-certs\") pod \"kube-apiserver-ci-4081.3.101-5317e0e64c\" (UID: \"330e5a30f21e52e64c40cdbac9487974\") " pod="kube-system/kube-apiserver-ci-4081.3.101-5317e0e64c" Mar 2 13:18:11.822490 kubelet[2890]: I0302 13:18:11.822475 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/330e5a30f21e52e64c40cdbac9487974-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.101-5317e0e64c\" (UID: \"330e5a30f21e52e64c40cdbac9487974\") " pod="kube-system/kube-apiserver-ci-4081.3.101-5317e0e64c" Mar 2 13:18:11.822591 kubelet[2890]: I0302 13:18:11.822580 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b94324375943905871686c6b3a5487f2-ca-certs\") pod 
\"kube-controller-manager-ci-4081.3.101-5317e0e64c\" (UID: \"b94324375943905871686c6b3a5487f2\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-5317e0e64c" Mar 2 13:18:11.822701 kubelet[2890]: I0302 13:18:11.822689 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b94324375943905871686c6b3a5487f2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.101-5317e0e64c\" (UID: \"b94324375943905871686c6b3a5487f2\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-5317e0e64c" Mar 2 13:18:11.822794 kubelet[2890]: I0302 13:18:11.822784 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b94324375943905871686c6b3a5487f2-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.101-5317e0e64c\" (UID: \"b94324375943905871686c6b3a5487f2\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-5317e0e64c" Mar 2 13:18:11.822887 kubelet[2890]: I0302 13:18:11.822875 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b94324375943905871686c6b3a5487f2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.101-5317e0e64c\" (UID: \"b94324375943905871686c6b3a5487f2\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-5317e0e64c" Mar 2 13:18:11.822983 kubelet[2890]: I0302 13:18:11.822971 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e84ea18895f18d2b87de73358f8b1ce8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.101-5317e0e64c\" (UID: \"e84ea18895f18d2b87de73358f8b1ce8\") " pod="kube-system/kube-scheduler-ci-4081.3.101-5317e0e64c" Mar 2 13:18:11.823070 kubelet[2890]: I0302 13:18:11.823059 2890 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/330e5a30f21e52e64c40cdbac9487974-k8s-certs\") pod \"kube-apiserver-ci-4081.3.101-5317e0e64c\" (UID: \"330e5a30f21e52e64c40cdbac9487974\") " pod="kube-system/kube-apiserver-ci-4081.3.101-5317e0e64c" Mar 2 13:18:11.823168 kubelet[2890]: I0302 13:18:11.823155 2890 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b94324375943905871686c6b3a5487f2-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.101-5317e0e64c\" (UID: \"b94324375943905871686c6b3a5487f2\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-5317e0e64c" Mar 2 13:18:11.863796 kubelet[2890]: I0302 13:18:11.863778 2890 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:11.864340 kubelet[2890]: E0302 13:18:11.864317 2890 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:12.021665 kubelet[2890]: E0302 13:18:12.021559 2890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-5317e0e64c?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="800ms" Mar 2 13:18:12.066483 containerd[1823]: time="2026-03-02T13:18:12.066439752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.101-5317e0e64c,Uid:330e5a30f21e52e64c40cdbac9487974,Namespace:kube-system,Attempt:0,}" Mar 2 13:18:12.076645 containerd[1823]: time="2026-03-02T13:18:12.076607427Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.101-5317e0e64c,Uid:b94324375943905871686c6b3a5487f2,Namespace:kube-system,Attempt:0,}" Mar 2 13:18:12.081800 containerd[1823]: time="2026-03-02T13:18:12.081769305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.101-5317e0e64c,Uid:e84ea18895f18d2b87de73358f8b1ce8,Namespace:kube-system,Attempt:0,}" Mar 2 13:18:12.268009 kubelet[2890]: I0302 13:18:12.267984 2890 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:12.268600 kubelet[2890]: E0302 13:18:12.268572 2890 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:12.314051 kubelet[2890]: E0302 13:18:12.313948 2890 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.101-5317e0e64c&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 13:18:12.443996 kubelet[2890]: E0302 13:18:12.443958 2890 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 13:18:12.504986 kubelet[2890]: E0302 13:18:12.504947 2890 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 13:18:12.510679 kubelet[2890]: E0302 13:18:12.510631 2890 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 13:18:12.755516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3234815259.mount: Deactivated successfully. Mar 2 13:18:12.791439 containerd[1823]: time="2026-03-02T13:18:12.790580876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:18:12.803075 containerd[1823]: time="2026-03-02T13:18:12.802840510Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 2 13:18:12.809183 containerd[1823]: time="2026-03-02T13:18:12.808458508Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:18:12.811865 containerd[1823]: time="2026-03-02T13:18:12.811835786Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:18:12.815596 containerd[1823]: time="2026-03-02T13:18:12.815563344Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:18:12.818481 containerd[1823]: time="2026-03-02T13:18:12.818451383Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 13:18:12.821703 containerd[1823]: time="2026-03-02T13:18:12.821625901Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 13:18:12.823243 kubelet[2890]: E0302 13:18:12.823106 2890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-5317e0e64c?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="1.6s" Mar 2 13:18:12.825541 containerd[1823]: time="2026-03-02T13:18:12.825475979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:18:12.826837 containerd[1823]: time="2026-03-02T13:18:12.826337019Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 759.810587ms" Mar 2 13:18:12.831295 containerd[1823]: time="2026-03-02T13:18:12.831252256Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 754.565909ms" Mar 2 13:18:12.835083 containerd[1823]: time="2026-03-02T13:18:12.834937535Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", 
repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 753.09115ms" Mar 2 13:18:13.070688 kubelet[2890]: I0302 13:18:13.070583 2890 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:13.071141 kubelet[2890]: E0302 13:18:13.070925 2890 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:13.446563 kubelet[2890]: E0302 13:18:13.446526 2890 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:18:13.643869 containerd[1823]: time="2026-03-02T13:18:13.643291257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:18:13.643869 containerd[1823]: time="2026-03-02T13:18:13.643456897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:18:13.643869 containerd[1823]: time="2026-03-02T13:18:13.643479497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:18:13.644518 containerd[1823]: time="2026-03-02T13:18:13.644426937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:18:13.647008 containerd[1823]: time="2026-03-02T13:18:13.646670496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:18:13.647008 containerd[1823]: time="2026-03-02T13:18:13.646723296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:18:13.647008 containerd[1823]: time="2026-03-02T13:18:13.646749376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:18:13.647008 containerd[1823]: time="2026-03-02T13:18:13.646835496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:18:13.649709 containerd[1823]: time="2026-03-02T13:18:13.649639294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:18:13.651436 containerd[1823]: time="2026-03-02T13:18:13.651240574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:18:13.651436 containerd[1823]: time="2026-03-02T13:18:13.651259213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:18:13.651436 containerd[1823]: time="2026-03-02T13:18:13.651366653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:18:13.716596 containerd[1823]: time="2026-03-02T13:18:13.716037182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.101-5317e0e64c,Uid:e84ea18895f18d2b87de73358f8b1ce8,Namespace:kube-system,Attempt:0,} returns sandbox id \"596d475aa0e122e7c619cf15f3e84a23a10ec5bfdb3364c9b7989fe4b407039c\"" Mar 2 13:18:13.726341 containerd[1823]: time="2026-03-02T13:18:13.726288337Z" level=info msg="CreateContainer within sandbox \"596d475aa0e122e7c619cf15f3e84a23a10ec5bfdb3364c9b7989fe4b407039c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 2 13:18:13.727077 containerd[1823]: time="2026-03-02T13:18:13.726734376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.101-5317e0e64c,Uid:b94324375943905871686c6b3a5487f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"97390887bc35076451fe782c00d96dab4193b6df22cc28f208653f8970ad8945\"" Mar 2 13:18:13.727316 containerd[1823]: time="2026-03-02T13:18:13.726909136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.101-5317e0e64c,Uid:330e5a30f21e52e64c40cdbac9487974,Namespace:kube-system,Attempt:0,} returns sandbox id \"06d8af501a5a0180adf6c7b075fa8303b989d0e76c8323a52f7cb9931e8fd5d4\"" Mar 2 13:18:13.739603 containerd[1823]: time="2026-03-02T13:18:13.739493290Z" level=info msg="CreateContainer within sandbox \"97390887bc35076451fe782c00d96dab4193b6df22cc28f208653f8970ad8945\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 2 13:18:13.746801 containerd[1823]: time="2026-03-02T13:18:13.746731327Z" level=info msg="CreateContainer within sandbox \"06d8af501a5a0180adf6c7b075fa8303b989d0e76c8323a52f7cb9931e8fd5d4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 2 13:18:13.790468 containerd[1823]: time="2026-03-02T13:18:13.790302465Z" level=info msg="CreateContainer within sandbox 
\"596d475aa0e122e7c619cf15f3e84a23a10ec5bfdb3364c9b7989fe4b407039c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0ab262fe99ef41cfa1ef4334fc6d6d40f770fde59bc0a2c7119b8c6e553afe76\"" Mar 2 13:18:13.791003 containerd[1823]: time="2026-03-02T13:18:13.790980425Z" level=info msg="StartContainer for \"0ab262fe99ef41cfa1ef4334fc6d6d40f770fde59bc0a2c7119b8c6e553afe76\"" Mar 2 13:18:13.802003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3373700310.mount: Deactivated successfully. Mar 2 13:18:13.830662 containerd[1823]: time="2026-03-02T13:18:13.830614845Z" level=info msg="CreateContainer within sandbox \"06d8af501a5a0180adf6c7b075fa8303b989d0e76c8323a52f7cb9931e8fd5d4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1703cfeccdf493765087b849745f8dd881cc79993865d36a705ce9c09fee1df1\"" Mar 2 13:18:13.831722 containerd[1823]: time="2026-03-02T13:18:13.831255165Z" level=info msg="StartContainer for \"1703cfeccdf493765087b849745f8dd881cc79993865d36a705ce9c09fee1df1\"" Mar 2 13:18:13.835858 containerd[1823]: time="2026-03-02T13:18:13.835761683Z" level=info msg="CreateContainer within sandbox \"97390887bc35076451fe782c00d96dab4193b6df22cc28f208653f8970ad8945\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9ff5a9930df378e9bf3ce7872296d06802e80fa1b21726980286afbc8bd6901f\"" Mar 2 13:18:13.837183 containerd[1823]: time="2026-03-02T13:18:13.836324083Z" level=info msg="StartContainer for \"9ff5a9930df378e9bf3ce7872296d06802e80fa1b21726980286afbc8bd6901f\"" Mar 2 13:18:13.867393 containerd[1823]: time="2026-03-02T13:18:13.867247107Z" level=info msg="StartContainer for \"0ab262fe99ef41cfa1ef4334fc6d6d40f770fde59bc0a2c7119b8c6e553afe76\" returns successfully" Mar 2 13:18:13.951360 containerd[1823]: time="2026-03-02T13:18:13.951315626Z" level=info msg="StartContainer for \"9ff5a9930df378e9bf3ce7872296d06802e80fa1b21726980286afbc8bd6901f\" returns successfully" Mar 2 
13:18:13.953680 containerd[1823]: time="2026-03-02T13:18:13.953640905Z" level=info msg="StartContainer for \"1703cfeccdf493765087b849745f8dd881cc79993865d36a705ce9c09fee1df1\" returns successfully" Mar 2 13:18:14.475310 kubelet[2890]: E0302 13:18:14.475273 2890 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-5317e0e64c\" not found" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:14.475680 kubelet[2890]: E0302 13:18:14.475614 2890 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-5317e0e64c\" not found" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:14.482594 kubelet[2890]: E0302 13:18:14.482550 2890 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-5317e0e64c\" not found" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:14.674609 kubelet[2890]: I0302 13:18:14.674579 2890 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:15.485710 kubelet[2890]: E0302 13:18:15.485673 2890 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-5317e0e64c\" not found" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:15.487239 kubelet[2890]: E0302 13:18:15.486090 2890 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-5317e0e64c\" not found" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:15.487239 kubelet[2890]: E0302 13:18:15.486427 2890 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-5317e0e64c\" not found" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:15.647249 kubelet[2890]: E0302 13:18:15.646886 2890 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"ci-4081.3.101-5317e0e64c\" not found" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:15.827910 kubelet[2890]: I0302 13:18:15.827587 2890 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.101-5317e0e64c" Mar 2 13:18:15.920315 kubelet[2890]: I0302 13:18:15.920276 2890 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.101-5317e0e64c" Mar 2 13:18:15.939541 kubelet[2890]: E0302 13:18:15.939344 2890 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.101-5317e0e64c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.101-5317e0e64c" Mar 2 13:18:15.939541 kubelet[2890]: I0302 13:18:15.939386 2890 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-5317e0e64c" Mar 2 13:18:15.944330 kubelet[2890]: E0302 13:18:15.944100 2890 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.101-5317e0e64c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.101-5317e0e64c" Mar 2 13:18:15.944330 kubelet[2890]: I0302 13:18:15.944128 2890 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.101-5317e0e64c" Mar 2 13:18:15.954119 kubelet[2890]: E0302 13:18:15.953401 2890 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.101-5317e0e64c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.101-5317e0e64c" Mar 2 13:18:16.409834 kubelet[2890]: I0302 13:18:16.409404 2890 apiserver.go:52] "Watching apiserver" Mar 2 13:18:16.417815 kubelet[2890]: I0302 13:18:16.417787 2890 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 2 13:18:16.581685 kubelet[2890]: I0302 13:18:16.580931 
2890 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-5317e0e64c" Mar 2 13:18:16.591168 kubelet[2890]: I0302 13:18:16.590253 2890 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 2 13:18:17.439716 kubelet[2890]: I0302 13:18:17.439681 2890 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.101-5317e0e64c" Mar 2 13:18:17.451060 kubelet[2890]: I0302 13:18:17.451029 2890 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 2 13:18:18.102375 systemd[1]: Reloading requested from client PID 3217 ('systemctl') (unit session-9.scope)... Mar 2 13:18:18.102391 systemd[1]: Reloading... Mar 2 13:18:18.197256 zram_generator::config[3257]: No configuration found. Mar 2 13:18:18.318616 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:18:18.406086 systemd[1]: Reloading finished in 302 ms. Mar 2 13:18:18.435761 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:18:18.454166 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 13:18:18.454520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:18:18.460714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:18:18.650367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 2 13:18:18.650643 (kubelet)[3331]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 13:18:18.693569 kubelet[3331]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:18:18.693569 kubelet[3331]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 2 13:18:18.693569 kubelet[3331]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:18:18.693569 kubelet[3331]: I0302 13:18:18.692074 3331 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 13:18:18.702657 kubelet[3331]: I0302 13:18:18.702555 3331 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 2 13:18:18.702791 kubelet[3331]: I0302 13:18:18.702782 3331 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 13:18:18.703305 kubelet[3331]: I0302 13:18:18.703289 3331 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 13:18:18.705196 kubelet[3331]: I0302 13:18:18.705172 3331 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 2 13:18:18.708823 kubelet[3331]: I0302 13:18:18.708799 3331 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:18:18.714403 kubelet[3331]: E0302 13:18:18.714358 3331 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 2 13:18:18.714519 kubelet[3331]: I0302 13:18:18.714412 3331 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 2 13:18:18.718165 kubelet[3331]: I0302 13:18:18.718141 3331 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 2 13:18:18.718591 kubelet[3331]: I0302 13:18:18.718562 3331 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 2 13:18:18.718736 kubelet[3331]: I0302 13:18:18.718590 3331 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.101-5317e0e64c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 2 13:18:18.718849 kubelet[3331]: I0302 13:18:18.718741 3331 topology_manager.go:138] "Creating topology manager with none policy"
Mar 2 13:18:18.718849 kubelet[3331]: I0302 13:18:18.718751 3331 container_manager_linux.go:303] "Creating device plugin manager"
Mar 2 13:18:18.718849 kubelet[3331]: I0302 13:18:18.718805 3331 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 13:18:18.718933 kubelet[3331]: I0302 13:18:18.718917 3331 kubelet.go:480] "Attempting to sync node with API server"
Mar 2 13:18:18.718933 kubelet[3331]: I0302 13:18:18.718931 3331 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 2 13:18:18.718989 kubelet[3331]: I0302 13:18:18.718956 3331 kubelet.go:386] "Adding apiserver pod source"
Mar 2 13:18:18.718989 kubelet[3331]: I0302 13:18:18.718969 3331 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 2 13:18:18.722924 kubelet[3331]: I0302 13:18:18.722880 3331 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 2 13:18:18.723827 kubelet[3331]: I0302 13:18:18.723810 3331 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 2 13:18:18.728250 kubelet[3331]: I0302 13:18:18.726635 3331 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 2 13:18:18.728250 kubelet[3331]: I0302 13:18:18.726671 3331 server.go:1289] "Started kubelet"
Mar 2 13:18:18.728428 kubelet[3331]: I0302 13:18:18.728412 3331 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 2 13:18:18.739876 kubelet[3331]: I0302 13:18:18.739819 3331 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 2 13:18:18.740702 kubelet[3331]: I0302 13:18:18.740675 3331 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 2 13:18:18.742874 kubelet[3331]: I0302 13:18:18.742771 3331 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 2 13:18:18.743227 kubelet[3331]: I0302 13:18:18.743157 3331 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 2 13:18:18.743486 kubelet[3331]: I0302 13:18:18.743418 3331 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 2 13:18:18.743563 kubelet[3331]: I0302 13:18:18.743553 3331 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 2 13:18:18.743672 kubelet[3331]: E0302 13:18:18.743655 3331 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 2 13:18:18.745337 kubelet[3331]: I0302 13:18:18.745270 3331 server.go:317] "Adding debug handlers to kubelet server"
Mar 2 13:18:18.750854 kubelet[3331]: I0302 13:18:18.750797 3331 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 2 13:18:18.751038 kubelet[3331]: I0302 13:18:18.751018 3331 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 2 13:18:18.751545 kubelet[3331]: I0302 13:18:18.751515 3331 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 2 13:18:18.752532 kubelet[3331]: I0302 13:18:18.752512 3331 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 2 13:18:18.752730 kubelet[3331]: E0302 13:18:18.752707 3331 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.101-5317e0e64c\" not found"
Mar 2 13:18:18.762011 kubelet[3331]: I0302 13:18:18.761983 3331 factory.go:223] Registration of the systemd container factory successfully
Mar 2 13:18:18.762284 kubelet[3331]: I0302 13:18:18.762265 3331 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 2 13:18:18.764766 kubelet[3331]: I0302 13:18:18.764744 3331 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 2 13:18:18.765047 kubelet[3331]: I0302 13:18:18.765029 3331 reconciler.go:26] "Reconciler: start to sync state"
Mar 2 13:18:18.783871 kubelet[3331]: E0302 13:18:18.783844 3331 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 2 13:18:18.784267 kubelet[3331]: I0302 13:18:18.784252 3331 factory.go:223] Registration of the containerd container factory successfully
Mar 2 13:18:18.843826 kubelet[3331]: E0302 13:18:18.843795 3331 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 2 13:18:18.845198 kubelet[3331]: I0302 13:18:18.845178 3331 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 2 13:18:18.845478 kubelet[3331]: I0302 13:18:18.845460 3331 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 2 13:18:18.845594 kubelet[3331]: I0302 13:18:18.845585 3331 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 13:18:18.845793 kubelet[3331]: I0302 13:18:18.845781 3331 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 2 13:18:18.845862 kubelet[3331]: I0302 13:18:18.845841 3331 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 2 13:18:18.846088 kubelet[3331]: I0302 13:18:18.845904 3331 policy_none.go:49] "None policy: Start"
Mar 2 13:18:18.846088 kubelet[3331]: I0302 13:18:18.845918 3331 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 2 13:18:18.846088 kubelet[3331]: I0302 13:18:18.845928 3331 state_mem.go:35] "Initializing new in-memory state store"
Mar 2 13:18:18.846088 kubelet[3331]: I0302 13:18:18.846024 3331 state_mem.go:75] "Updated machine memory state"
Mar 2 13:18:18.847338 kubelet[3331]: E0302 13:18:18.847322 3331 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 2 13:18:18.847604 kubelet[3331]: I0302 13:18:18.847589 3331 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 2 13:18:18.847711 kubelet[3331]: I0302 13:18:18.847679 3331 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 2 13:18:18.849020 kubelet[3331]: I0302 13:18:18.848479 3331 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 2 13:18:18.849589 kubelet[3331]: E0302 13:18:18.849569 3331 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 2 13:18:18.950517 kubelet[3331]: I0302 13:18:18.950419 3331 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-5317e0e64c"
Mar 2 13:18:18.962267 kubelet[3331]: I0302 13:18:18.962234 3331 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.101-5317e0e64c"
Mar 2 13:18:18.962425 kubelet[3331]: I0302 13:18:18.962319 3331 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.045202 kubelet[3331]: I0302 13:18:19.044605 3331 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.045202 kubelet[3331]: I0302 13:18:19.044657 3331 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.045202 kubelet[3331]: I0302 13:18:19.044894 3331 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.058786 kubelet[3331]: I0302 13:18:19.058601 3331 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 2 13:18:19.059702 kubelet[3331]: I0302 13:18:19.059573 3331 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 2 13:18:19.059702 kubelet[3331]: E0302 13:18:19.059631 3331 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.101-5317e0e64c\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.060008 kubelet[3331]: I0302 13:18:19.059852 3331 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 2 13:18:19.060008 kubelet[3331]: E0302 13:18:19.059879 3331 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.101-5317e0e64c\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.066378 kubelet[3331]: I0302 13:18:19.066357 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/330e5a30f21e52e64c40cdbac9487974-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.101-5317e0e64c\" (UID: \"330e5a30f21e52e64c40cdbac9487974\") " pod="kube-system/kube-apiserver-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.066639 kubelet[3331]: I0302 13:18:19.066456 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b94324375943905871686c6b3a5487f2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.101-5317e0e64c\" (UID: \"b94324375943905871686c6b3a5487f2\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.066639 kubelet[3331]: I0302 13:18:19.066479 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b94324375943905871686c6b3a5487f2-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.101-5317e0e64c\" (UID: \"b94324375943905871686c6b3a5487f2\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.066639 kubelet[3331]: I0302 13:18:19.066498 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b94324375943905871686c6b3a5487f2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.101-5317e0e64c\" (UID: \"b94324375943905871686c6b3a5487f2\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.066639 kubelet[3331]: I0302 13:18:19.066515 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e84ea18895f18d2b87de73358f8b1ce8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.101-5317e0e64c\" (UID: \"e84ea18895f18d2b87de73358f8b1ce8\") " pod="kube-system/kube-scheduler-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.066639 kubelet[3331]: I0302 13:18:19.066531 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/330e5a30f21e52e64c40cdbac9487974-k8s-certs\") pod \"kube-apiserver-ci-4081.3.101-5317e0e64c\" (UID: \"330e5a30f21e52e64c40cdbac9487974\") " pod="kube-system/kube-apiserver-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.066781 kubelet[3331]: I0302 13:18:19.066545 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b94324375943905871686c6b3a5487f2-ca-certs\") pod \"kube-controller-manager-ci-4081.3.101-5317e0e64c\" (UID: \"b94324375943905871686c6b3a5487f2\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.066781 kubelet[3331]: I0302 13:18:19.066561 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b94324375943905871686c6b3a5487f2-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.101-5317e0e64c\" (UID: \"b94324375943905871686c6b3a5487f2\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.066781 kubelet[3331]: I0302 13:18:19.066577 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/330e5a30f21e52e64c40cdbac9487974-ca-certs\") pod \"kube-apiserver-ci-4081.3.101-5317e0e64c\" (UID: \"330e5a30f21e52e64c40cdbac9487974\") " pod="kube-system/kube-apiserver-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.721977 kubelet[3331]: I0302 13:18:19.721887 3331 apiserver.go:52] "Watching apiserver"
Mar 2 13:18:19.766472 kubelet[3331]: I0302 13:18:19.765104 3331 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 2 13:18:19.817235 kubelet[3331]: I0302 13:18:19.816601 3331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.101-5317e0e64c" podStartSLOduration=2.816583529 podStartE2EDuration="2.816583529s" podCreationTimestamp="2026-03-02 13:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:18:19.800014855 +0000 UTC m=+1.146303965" watchObservedRunningTime="2026-03-02 13:18:19.816583529 +0000 UTC m=+1.162872639"
Mar 2 13:18:19.818077 kubelet[3331]: I0302 13:18:19.817624 3331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.101-5317e0e64c" podStartSLOduration=3.8170796080000002 podStartE2EDuration="3.817079608s" podCreationTimestamp="2026-03-02 13:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:18:19.816025049 +0000 UTC m=+1.162314159" watchObservedRunningTime="2026-03-02 13:18:19.817079608 +0000 UTC m=+1.163368718"
Mar 2 13:18:19.818456 kubelet[3331]: I0302 13:18:19.818435 3331 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.819578 kubelet[3331]: I0302 13:18:19.819542 3331 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.830843 kubelet[3331]: I0302 13:18:19.830576 3331 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 2 13:18:19.830843 kubelet[3331]: E0302 13:18:19.830645 3331 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.101-5317e0e64c\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.838823 kubelet[3331]: I0302 13:18:19.837362 3331 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 2 13:18:19.839329 kubelet[3331]: E0302 13:18:19.839071 3331 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.101-5317e0e64c\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.101-5317e0e64c"
Mar 2 13:18:19.851795 kubelet[3331]: I0302 13:18:19.851746 3331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.101-5317e0e64c" podStartSLOduration=0.851721675 podStartE2EDuration="851.721675ms" podCreationTimestamp="2026-03-02 13:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:18:19.837759761 +0000 UTC m=+1.184048831" watchObservedRunningTime="2026-03-02 13:18:19.851721675 +0000 UTC m=+1.198010785"
Mar 2 13:18:24.324371 kubelet[3331]: I0302 13:18:24.324338 3331 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 2 13:18:24.325415 kubelet[3331]: I0302 13:18:24.324779 3331 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 2 13:18:24.325482 containerd[1823]: time="2026-03-02T13:18:24.324610175Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 2 13:18:25.408617 kubelet[3331]: I0302 13:18:25.408567 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4f92ff72-d0bc-40ae-946c-43c439f51a9f-kube-proxy\") pod \"kube-proxy-bbcpb\" (UID: \"4f92ff72-d0bc-40ae-946c-43c439f51a9f\") " pod="kube-system/kube-proxy-bbcpb"
Mar 2 13:18:25.409376 kubelet[3331]: I0302 13:18:25.408695 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f92ff72-d0bc-40ae-946c-43c439f51a9f-xtables-lock\") pod \"kube-proxy-bbcpb\" (UID: \"4f92ff72-d0bc-40ae-946c-43c439f51a9f\") " pod="kube-system/kube-proxy-bbcpb"
Mar 2 13:18:25.409376 kubelet[3331]: I0302 13:18:25.408722 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f92ff72-d0bc-40ae-946c-43c439f51a9f-lib-modules\") pod \"kube-proxy-bbcpb\" (UID: \"4f92ff72-d0bc-40ae-946c-43c439f51a9f\") " pod="kube-system/kube-proxy-bbcpb"
Mar 2 13:18:25.409376 kubelet[3331]: I0302 13:18:25.408741 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvj8v\" (UniqueName: \"kubernetes.io/projected/4f92ff72-d0bc-40ae-946c-43c439f51a9f-kube-api-access-qvj8v\") pod \"kube-proxy-bbcpb\" (UID: \"4f92ff72-d0bc-40ae-946c-43c439f51a9f\") " pod="kube-system/kube-proxy-bbcpb"
Mar 2 13:18:25.610539 kubelet[3331]: I0302 13:18:25.610495 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7047a4f2-1507-490e-bf2f-d19e3d86a420-var-lib-calico\") pod \"tigera-operator-7d4578d8d-z9zxn\" (UID: \"7047a4f2-1507-490e-bf2f-d19e3d86a420\") " pod="tigera-operator/tigera-operator-7d4578d8d-z9zxn"
Mar 2 13:18:25.610539 kubelet[3331]: I0302 13:18:25.610544 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbgch\" (UniqueName: \"kubernetes.io/projected/7047a4f2-1507-490e-bf2f-d19e3d86a420-kube-api-access-fbgch\") pod \"tigera-operator-7d4578d8d-z9zxn\" (UID: \"7047a4f2-1507-490e-bf2f-d19e3d86a420\") " pod="tigera-operator/tigera-operator-7d4578d8d-z9zxn"
Mar 2 13:18:25.704474 containerd[1823]: time="2026-03-02T13:18:25.703881373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bbcpb,Uid:4f92ff72-d0bc-40ae-946c-43c439f51a9f,Namespace:kube-system,Attempt:0,}"
Mar 2 13:18:25.746873 containerd[1823]: time="2026-03-02T13:18:25.746469074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:18:25.746873 containerd[1823]: time="2026-03-02T13:18:25.746699194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:18:25.746873 containerd[1823]: time="2026-03-02T13:18:25.746722874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:18:25.746873 containerd[1823]: time="2026-03-02T13:18:25.746828314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:18:25.786270 containerd[1823]: time="2026-03-02T13:18:25.786102697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bbcpb,Uid:4f92ff72-d0bc-40ae-946c-43c439f51a9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7455dab4d9ddf20a4f7592991ecf83f1c431e94d5dc2095226722bef6e78011f\""
Mar 2 13:18:25.798062 containerd[1823]: time="2026-03-02T13:18:25.798019731Z" level=info msg="CreateContainer within sandbox \"7455dab4d9ddf20a4f7592991ecf83f1c431e94d5dc2095226722bef6e78011f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 2 13:18:25.834347 containerd[1823]: time="2026-03-02T13:18:25.834210716Z" level=info msg="CreateContainer within sandbox \"7455dab4d9ddf20a4f7592991ecf83f1c431e94d5dc2095226722bef6e78011f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b0685a277c88ff757227cf57695166ed4da4bc8c8ad39a95badf6c09de4c830b\""
Mar 2 13:18:25.836689 containerd[1823]: time="2026-03-02T13:18:25.835201795Z" level=info msg="StartContainer for \"b0685a277c88ff757227cf57695166ed4da4bc8c8ad39a95badf6c09de4c830b\""
Mar 2 13:18:25.887819 containerd[1823]: time="2026-03-02T13:18:25.887768092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d4578d8d-z9zxn,Uid:7047a4f2-1507-490e-bf2f-d19e3d86a420,Namespace:tigera-operator,Attempt:0,}"
Mar 2 13:18:25.891126 containerd[1823]: time="2026-03-02T13:18:25.890908131Z" level=info msg="StartContainer for \"b0685a277c88ff757227cf57695166ed4da4bc8c8ad39a95badf6c09de4c830b\" returns successfully"
Mar 2 13:18:25.938323 containerd[1823]: time="2026-03-02T13:18:25.938047670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:18:25.938323 containerd[1823]: time="2026-03-02T13:18:25.938106270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:18:25.938323 containerd[1823]: time="2026-03-02T13:18:25.938121870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:18:25.938323 containerd[1823]: time="2026-03-02T13:18:25.938225630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:18:25.987271 containerd[1823]: time="2026-03-02T13:18:25.986415849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d4578d8d-z9zxn,Uid:7047a4f2-1507-490e-bf2f-d19e3d86a420,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fa28ea54147090b8b9a60d6b286db26fe94edaea730840d1277b19ab4d82c647\""
Mar 2 13:18:25.988944 containerd[1823]: time="2026-03-02T13:18:25.988883088Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\""
Mar 2 13:18:27.657320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2529746892.mount: Deactivated successfully.
Mar 2 13:18:29.532317 kubelet[3331]: I0302 13:18:29.531189 3331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bbcpb" podStartSLOduration=4.531171942 podStartE2EDuration="4.531171942s" podCreationTimestamp="2026-03-02 13:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:18:26.848603113 +0000 UTC m=+8.194892223" watchObservedRunningTime="2026-03-02 13:18:29.531171942 +0000 UTC m=+10.877461052"
Mar 2 13:18:32.507365 containerd[1823]: time="2026-03-02T13:18:32.506547896Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:18:32.513940 containerd[1823]: time="2026-03-02T13:18:32.513626852Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.3: active requests=0, bytes read=25060789"
Mar 2 13:18:32.518574 containerd[1823]: time="2026-03-02T13:18:32.518512009Z" level=info msg="ImageCreate event name:\"sha256:a94b0dfe779f8dc351e02e8988fd60aecb466000f13b6f00042ab83ebb237d87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:18:32.523984 containerd[1823]: time="2026-03-02T13:18:32.523879046Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:18:32.525537 containerd[1823]: time="2026-03-02T13:18:32.525483205Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.3\" with image id \"sha256:a94b0dfe779f8dc351e02e8988fd60aecb466000f13b6f00042ab83ebb237d87\", repo tag \"quay.io/tigera/operator:v1.40.3\", repo digest \"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\", size \"25056784\" in 6.536560877s"
Mar 2 13:18:32.525537 containerd[1823]: time="2026-03-02T13:18:32.525533005Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\" returns image reference \"sha256:a94b0dfe779f8dc351e02e8988fd60aecb466000f13b6f00042ab83ebb237d87\""
Mar 2 13:18:32.535000 containerd[1823]: time="2026-03-02T13:18:32.534953360Z" level=info msg="CreateContainer within sandbox \"fa28ea54147090b8b9a60d6b286db26fe94edaea730840d1277b19ab4d82c647\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 2 13:18:32.557716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2073993806.mount: Deactivated successfully.
Mar 2 13:18:32.569847 containerd[1823]: time="2026-03-02T13:18:32.569796301Z" level=info msg="CreateContainer within sandbox \"fa28ea54147090b8b9a60d6b286db26fe94edaea730840d1277b19ab4d82c647\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a0c23595ba67d9745026a1fbd71f5e1676ea1c3a0f50b9f434bc1a62336ab4b6\""
Mar 2 13:18:32.570561 containerd[1823]: time="2026-03-02T13:18:32.570530340Z" level=info msg="StartContainer for \"a0c23595ba67d9745026a1fbd71f5e1676ea1c3a0f50b9f434bc1a62336ab4b6\""
Mar 2 13:18:32.623582 containerd[1823]: time="2026-03-02T13:18:32.623529071Z" level=info msg="StartContainer for \"a0c23595ba67d9745026a1fbd71f5e1676ea1c3a0f50b9f434bc1a62336ab4b6\" returns successfully"
Mar 2 13:18:38.671574 sudo[2352]: pam_unix(sudo:session): session closed for user root
Mar 2 13:18:38.749561 sshd[2348]: pam_unix(sshd:session): session closed for user core
Mar 2 13:18:38.760938 systemd[1]: sshd@6-10.200.20.18:22-10.200.16.10:47300.service: Deactivated successfully.
Mar 2 13:18:38.762414 systemd-logind[1790]: Session 9 logged out. Waiting for processes to exit.
Mar 2 13:18:38.766377 systemd[1]: session-9.scope: Deactivated successfully.
Mar 2 13:18:38.767760 systemd-logind[1790]: Removed session 9.
Mar 2 13:18:47.638086 kubelet[3331]: I0302 13:18:47.637399 3331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d4578d8d-z9zxn" podStartSLOduration=16.098240088 podStartE2EDuration="22.637329644s" podCreationTimestamp="2026-03-02 13:18:25 +0000 UTC" firstStartedPulling="2026-03-02 13:18:25.988341968 +0000 UTC m=+7.334631038" lastFinishedPulling="2026-03-02 13:18:32.527431484 +0000 UTC m=+13.873720594" observedRunningTime="2026-03-02 13:18:32.853362664 +0000 UTC m=+14.199651774" watchObservedRunningTime="2026-03-02 13:18:47.637329644 +0000 UTC m=+28.983618714"
Mar 2 13:18:47.755051 kubelet[3331]: I0302 13:18:47.755005 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d42034a-6767-4301-b694-a2bcb981993d-tigera-ca-bundle\") pod \"calico-typha-59b849b855-xfp8h\" (UID: \"5d42034a-6767-4301-b694-a2bcb981993d\") " pod="calico-system/calico-typha-59b849b855-xfp8h"
Mar 2 13:18:47.755051 kubelet[3331]: I0302 13:18:47.755049 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5d42034a-6767-4301-b694-a2bcb981993d-typha-certs\") pod \"calico-typha-59b849b855-xfp8h\" (UID: \"5d42034a-6767-4301-b694-a2bcb981993d\") " pod="calico-system/calico-typha-59b849b855-xfp8h"
Mar 2 13:18:47.757278 kubelet[3331]: I0302 13:18:47.755072 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xksd\" (UniqueName: \"kubernetes.io/projected/5d42034a-6767-4301-b694-a2bcb981993d-kube-api-access-8xksd\") pod \"calico-typha-59b849b855-xfp8h\" (UID: \"5d42034a-6767-4301-b694-a2bcb981993d\") " pod="calico-system/calico-typha-59b849b855-xfp8h"
Mar 2 13:18:47.856239 kubelet[3331]: I0302 13:18:47.855497 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebda9537-1da7-42e0-afa9-3c72f9d9cd33-tigera-ca-bundle\") pod \"calico-node-9wr9t\" (UID: \"ebda9537-1da7-42e0-afa9-3c72f9d9cd33\") " pod="calico-system/calico-node-9wr9t"
Mar 2 13:18:47.856239 kubelet[3331]: I0302 13:18:47.855543 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxbm8\" (UniqueName: \"kubernetes.io/projected/ebda9537-1da7-42e0-afa9-3c72f9d9cd33-kube-api-access-nxbm8\") pod \"calico-node-9wr9t\" (UID: \"ebda9537-1da7-42e0-afa9-3c72f9d9cd33\") " pod="calico-system/calico-node-9wr9t"
Mar 2 13:18:47.856239 kubelet[3331]: I0302 13:18:47.855563 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ebda9537-1da7-42e0-afa9-3c72f9d9cd33-flexvol-driver-host\") pod \"calico-node-9wr9t\" (UID: \"ebda9537-1da7-42e0-afa9-3c72f9d9cd33\") " pod="calico-system/calico-node-9wr9t"
Mar 2 13:18:47.856239 kubelet[3331]: I0302 13:18:47.855603 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ebda9537-1da7-42e0-afa9-3c72f9d9cd33-policysync\") pod \"calico-node-9wr9t\" (UID: \"ebda9537-1da7-42e0-afa9-3c72f9d9cd33\") " pod="calico-system/calico-node-9wr9t"
Mar 2 13:18:47.856239 kubelet[3331]: I0302 13:18:47.855640 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/ebda9537-1da7-42e0-afa9-3c72f9d9cd33-sys-fs\") pod \"calico-node-9wr9t\" (UID: \"ebda9537-1da7-42e0-afa9-3c72f9d9cd33\") " pod="calico-system/calico-node-9wr9t"
Mar 2 13:18:47.856476 kubelet[3331]: I0302 13:18:47.855657 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ebda9537-1da7-42e0-afa9-3c72f9d9cd33-cni-net-dir\") pod \"calico-node-9wr9t\" (UID: \"ebda9537-1da7-42e0-afa9-3c72f9d9cd33\") " pod="calico-system/calico-node-9wr9t"
Mar 2 13:18:47.856476 kubelet[3331]: I0302 13:18:47.855673 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ebda9537-1da7-42e0-afa9-3c72f9d9cd33-cni-log-dir\") pod \"calico-node-9wr9t\" (UID: \"ebda9537-1da7-42e0-afa9-3c72f9d9cd33\") " pod="calico-system/calico-node-9wr9t"
Mar 2 13:18:47.856476 kubelet[3331]: I0302 13:18:47.855691 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ebda9537-1da7-42e0-afa9-3c72f9d9cd33-cni-bin-dir\") pod \"calico-node-9wr9t\" (UID: \"ebda9537-1da7-42e0-afa9-3c72f9d9cd33\") " pod="calico-system/calico-node-9wr9t"
Mar 2 13:18:47.856476 kubelet[3331]: I0302 13:18:47.855705 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ebda9537-1da7-42e0-afa9-3c72f9d9cd33-var-run-calico\") pod \"calico-node-9wr9t\" (UID: \"ebda9537-1da7-42e0-afa9-3c72f9d9cd33\") " pod="calico-system/calico-node-9wr9t"
Mar 2 13:18:47.856476 kubelet[3331]: I0302 13:18:47.855719 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebda9537-1da7-42e0-afa9-3c72f9d9cd33-xtables-lock\") pod \"calico-node-9wr9t\" (UID: \"ebda9537-1da7-42e0-afa9-3c72f9d9cd33\") " pod="calico-system/calico-node-9wr9t"
Mar 2 13:18:47.856583 kubelet[3331]: I0302 13:18:47.855735 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebda9537-1da7-42e0-afa9-3c72f9d9cd33-lib-modules\") pod \"calico-node-9wr9t\" (UID: \"ebda9537-1da7-42e0-afa9-3c72f9d9cd33\") " pod="calico-system/calico-node-9wr9t"
Mar 2 13:18:47.856583 kubelet[3331]: I0302 13:18:47.855749 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ebda9537-1da7-42e0-afa9-3c72f9d9cd33-node-certs\") pod \"calico-node-9wr9t\" (UID: \"ebda9537-1da7-42e0-afa9-3c72f9d9cd33\") " pod="calico-system/calico-node-9wr9t"
Mar 2 13:18:47.856583 kubelet[3331]: I0302 13:18:47.855763 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ebda9537-1da7-42e0-afa9-3c72f9d9cd33-var-lib-calico\") pod \"calico-node-9wr9t\" (UID: \"ebda9537-1da7-42e0-afa9-3c72f9d9cd33\") " pod="calico-system/calico-node-9wr9t"
Mar 2 13:18:47.856583 kubelet[3331]: I0302 13:18:47.855793 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/ebda9537-1da7-42e0-afa9-3c72f9d9cd33-bpffs\") pod \"calico-node-9wr9t\" (UID: \"ebda9537-1da7-42e0-afa9-3c72f9d9cd33\") " pod="calico-system/calico-node-9wr9t"
Mar 2 13:18:47.856583 kubelet[3331]: I0302 13:18:47.855830 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/ebda9537-1da7-42e0-afa9-3c72f9d9cd33-nodeproc\") pod \"calico-node-9wr9t\" (UID: \"ebda9537-1da7-42e0-afa9-3c72f9d9cd33\") " pod="calico-system/calico-node-9wr9t"
Mar 2 13:18:47.913830 kubelet[3331]: E0302 13:18:47.912757 3331 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7z7n" podUID="d18edb38-1e4f-4532-a511-77bd2348b866"
Mar 2 13:18:47.956201 containerd[1823]: time="2026-03-02T13:18:47.956153477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59b849b855-xfp8h,Uid:5d42034a-6767-4301-b694-a2bcb981993d,Namespace:calico-system,Attempt:0,}"
Mar 2 13:18:47.958126 kubelet[3331]: I0302 13:18:47.957482 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d18edb38-1e4f-4532-a511-77bd2348b866-socket-dir\") pod \"csi-node-driver-j7z7n\" (UID: \"d18edb38-1e4f-4532-a511-77bd2348b866\") " pod="calico-system/csi-node-driver-j7z7n"
Mar 2 13:18:47.958126 kubelet[3331]: I0302 13:18:47.957535 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d18edb38-1e4f-4532-a511-77bd2348b866-registration-dir\") pod \"csi-node-driver-j7z7n\" (UID: \"d18edb38-1e4f-4532-a511-77bd2348b866\") " pod="calico-system/csi-node-driver-j7z7n"
Mar 2 13:18:47.958126 kubelet[3331]: I0302 13:18:47.957577 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d18edb38-1e4f-4532-a511-77bd2348b866-kubelet-dir\") pod \"csi-node-driver-j7z7n\" (UID: \"d18edb38-1e4f-4532-a511-77bd2348b866\") " pod="calico-system/csi-node-driver-j7z7n"
Mar 2 13:18:47.958126 kubelet[3331]: I0302 13:18:47.957592 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d18edb38-1e4f-4532-a511-77bd2348b866-varrun\") pod \"csi-node-driver-j7z7n\" (UID: \"d18edb38-1e4f-4532-a511-77bd2348b866\") " pod="calico-system/csi-node-driver-j7z7n"
Mar 2 13:18:47.958126 kubelet[3331]: I0302 13:18:47.957606 3331 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-462nm\" (UniqueName: \"kubernetes.io/projected/d18edb38-1e4f-4532-a511-77bd2348b866-kube-api-access-462nm\") pod \"csi-node-driver-j7z7n\" (UID: \"d18edb38-1e4f-4532-a511-77bd2348b866\") " pod="calico-system/csi-node-driver-j7z7n" Mar 2 13:18:47.958745 kubelet[3331]: E0302 13:18:47.958611 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:47.958745 kubelet[3331]: W0302 13:18:47.958632 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:47.958745 kubelet[3331]: E0302 13:18:47.958668 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:47.959097 kubelet[3331]: E0302 13:18:47.959003 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:47.959097 kubelet[3331]: W0302 13:18:47.959024 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:47.959097 kubelet[3331]: E0302 13:18:47.959038 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:47.959431 kubelet[3331]: E0302 13:18:47.959385 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:47.959431 kubelet[3331]: W0302 13:18:47.959399 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:47.959431 kubelet[3331]: E0302 13:18:47.959411 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:47.959825 kubelet[3331]: E0302 13:18:47.959720 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:47.959825 kubelet[3331]: W0302 13:18:47.959743 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:47.959825 kubelet[3331]: E0302 13:18:47.959755 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:47.960163 kubelet[3331]: E0302 13:18:47.960099 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:47.960163 kubelet[3331]: W0302 13:18:47.960113 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:47.960163 kubelet[3331]: E0302 13:18:47.960124 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:47.962236 kubelet[3331]: E0302 13:18:47.960594 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:47.962236 kubelet[3331]: W0302 13:18:47.960608 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:47.962236 kubelet[3331]: E0302 13:18:47.960619 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:47.963101 kubelet[3331]: E0302 13:18:47.963085 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:47.963198 kubelet[3331]: W0302 13:18:47.963185 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:47.963275 kubelet[3331]: E0302 13:18:47.963264 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:47.965009 kubelet[3331]: E0302 13:18:47.964990 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:47.965109 kubelet[3331]: W0302 13:18:47.965097 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:47.965180 kubelet[3331]: E0302 13:18:47.965170 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:47.966520 kubelet[3331]: E0302 13:18:47.966503 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:47.966627 kubelet[3331]: W0302 13:18:47.966615 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:47.966695 kubelet[3331]: E0302 13:18:47.966684 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:47.967681 kubelet[3331]: E0302 13:18:47.967664 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:47.969702 kubelet[3331]: W0302 13:18:47.968337 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:47.969702 kubelet[3331]: E0302 13:18:47.968361 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:47.969702 kubelet[3331]: E0302 13:18:47.968747 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:47.969702 kubelet[3331]: W0302 13:18:47.968760 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:47.969702 kubelet[3331]: E0302 13:18:47.968772 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:47.971425 kubelet[3331]: E0302 13:18:47.971409 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:47.971531 kubelet[3331]: W0302 13:18:47.971518 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:47.971588 kubelet[3331]: E0302 13:18:47.971573 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:47.992285 kubelet[3331]: E0302 13:18:47.992183 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:47.992859 kubelet[3331]: W0302 13:18:47.992697 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:47.992859 kubelet[3331]: E0302 13:18:47.992732 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:48.015660 containerd[1823]: time="2026-03-02T13:18:48.015408373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:18:48.015660 containerd[1823]: time="2026-03-02T13:18:48.015457253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:18:48.015660 containerd[1823]: time="2026-03-02T13:18:48.015472453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:18:48.015660 containerd[1823]: time="2026-03-02T13:18:48.015550773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:18:48.053615 containerd[1823]: time="2026-03-02T13:18:48.053208158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9wr9t,Uid:ebda9537-1da7-42e0-afa9-3c72f9d9cd33,Namespace:calico-system,Attempt:0,}" Mar 2 13:18:48.054545 containerd[1823]: time="2026-03-02T13:18:48.054517637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59b849b855-xfp8h,Uid:5d42034a-6767-4301-b694-a2bcb981993d,Namespace:calico-system,Attempt:0,} returns sandbox id \"c70e78a6ee5913739ed6427a79d7f59256ee3e6c0fe24544913ba814bfa806ed\"" Mar 2 13:18:48.057087 containerd[1823]: time="2026-03-02T13:18:48.056606356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\"" Mar 2 13:18:48.060209 kubelet[3331]: E0302 13:18:48.060189 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.060209 kubelet[3331]: W0302 13:18:48.060235 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.060209 kubelet[3331]: E0302 13:18:48.060255 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:48.061376 kubelet[3331]: E0302 13:18:48.061246 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.061376 kubelet[3331]: W0302 13:18:48.061261 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.061376 kubelet[3331]: E0302 13:18:48.061273 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:48.061656 kubelet[3331]: E0302 13:18:48.061635 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.061656 kubelet[3331]: W0302 13:18:48.061649 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.061722 kubelet[3331]: E0302 13:18:48.061660 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:48.061880 kubelet[3331]: E0302 13:18:48.061862 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.061880 kubelet[3331]: W0302 13:18:48.061875 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.061945 kubelet[3331]: E0302 13:18:48.061885 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:48.062151 kubelet[3331]: E0302 13:18:48.062134 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.062151 kubelet[3331]: W0302 13:18:48.062147 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.062232 kubelet[3331]: E0302 13:18:48.062161 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:48.062431 kubelet[3331]: E0302 13:18:48.062414 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.062431 kubelet[3331]: W0302 13:18:48.062427 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.062500 kubelet[3331]: E0302 13:18:48.062438 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:48.062675 kubelet[3331]: E0302 13:18:48.062658 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.062675 kubelet[3331]: W0302 13:18:48.062671 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.062735 kubelet[3331]: E0302 13:18:48.062681 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:48.062899 kubelet[3331]: E0302 13:18:48.062883 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.062899 kubelet[3331]: W0302 13:18:48.062896 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.062965 kubelet[3331]: E0302 13:18:48.062906 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:48.063179 kubelet[3331]: E0302 13:18:48.063162 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.063179 kubelet[3331]: W0302 13:18:48.063175 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.063284 kubelet[3331]: E0302 13:18:48.063185 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:48.063704 kubelet[3331]: E0302 13:18:48.063657 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.063704 kubelet[3331]: W0302 13:18:48.063672 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.063704 kubelet[3331]: E0302 13:18:48.063700 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:48.064421 kubelet[3331]: E0302 13:18:48.064401 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.064528 kubelet[3331]: W0302 13:18:48.064424 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.064528 kubelet[3331]: E0302 13:18:48.064437 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:48.064669 kubelet[3331]: E0302 13:18:48.064650 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.064669 kubelet[3331]: W0302 13:18:48.064666 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.064744 kubelet[3331]: E0302 13:18:48.064679 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:48.064895 kubelet[3331]: E0302 13:18:48.064879 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.064895 kubelet[3331]: W0302 13:18:48.064891 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.064960 kubelet[3331]: E0302 13:18:48.064901 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:48.065078 kubelet[3331]: E0302 13:18:48.065064 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.065078 kubelet[3331]: W0302 13:18:48.065076 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.065131 kubelet[3331]: E0302 13:18:48.065085 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:48.065338 kubelet[3331]: E0302 13:18:48.065321 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.065338 kubelet[3331]: W0302 13:18:48.065335 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.065410 kubelet[3331]: E0302 13:18:48.065345 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:48.065620 kubelet[3331]: E0302 13:18:48.065599 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.065620 kubelet[3331]: W0302 13:18:48.065615 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.065694 kubelet[3331]: E0302 13:18:48.065628 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:48.065877 kubelet[3331]: E0302 13:18:48.065859 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.065877 kubelet[3331]: W0302 13:18:48.065872 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.065957 kubelet[3331]: E0302 13:18:48.065881 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:48.066089 kubelet[3331]: E0302 13:18:48.066072 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.066089 kubelet[3331]: W0302 13:18:48.066083 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.066151 kubelet[3331]: E0302 13:18:48.066091 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:48.066302 kubelet[3331]: E0302 13:18:48.066286 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.066302 kubelet[3331]: W0302 13:18:48.066297 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.068482 kubelet[3331]: E0302 13:18:48.066305 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:48.068482 kubelet[3331]: E0302 13:18:48.066458 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.068482 kubelet[3331]: W0302 13:18:48.066465 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.068482 kubelet[3331]: E0302 13:18:48.066472 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:48.068482 kubelet[3331]: E0302 13:18:48.066613 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.068482 kubelet[3331]: W0302 13:18:48.066620 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.068482 kubelet[3331]: E0302 13:18:48.066628 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:48.068482 kubelet[3331]: E0302 13:18:48.066798 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.068482 kubelet[3331]: W0302 13:18:48.066807 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.068482 kubelet[3331]: E0302 13:18:48.066814 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:48.068726 kubelet[3331]: E0302 13:18:48.067301 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.068726 kubelet[3331]: W0302 13:18:48.067314 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.068726 kubelet[3331]: E0302 13:18:48.067326 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:48.069157 kubelet[3331]: E0302 13:18:48.069130 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.069338 kubelet[3331]: W0302 13:18:48.069208 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.069338 kubelet[3331]: E0302 13:18:48.069236 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:48.069338 kubelet[3331]: E0302 13:18:48.069530 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.069338 kubelet[3331]: W0302 13:18:48.069540 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.069338 kubelet[3331]: E0302 13:18:48.069551 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:48.077822 kubelet[3331]: E0302 13:18:48.077789 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:48.077822 kubelet[3331]: W0302 13:18:48.077814 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:48.077822 kubelet[3331]: E0302 13:18:48.077829 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:48.110635 containerd[1823]: time="2026-03-02T13:18:48.110554615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:18:48.110635 containerd[1823]: time="2026-03-02T13:18:48.110602775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:18:48.110635 containerd[1823]: time="2026-03-02T13:18:48.110613815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:18:48.110869 containerd[1823]: time="2026-03-02T13:18:48.110686535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:18:48.141105 containerd[1823]: time="2026-03-02T13:18:48.140994323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9wr9t,Uid:ebda9537-1da7-42e0-afa9-3c72f9d9cd33,Namespace:calico-system,Attempt:0,} returns sandbox id \"afaa36dcc1b8f58a809c08eda30e20b41c3e8637ea5716b0219789ee57d84b78\"" Mar 2 13:18:49.658152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2590673595.mount: Deactivated successfully. 
Mar 2 13:18:49.744474 kubelet[3331]: E0302 13:18:49.744413 3331 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7z7n" podUID="d18edb38-1e4f-4532-a511-77bd2348b866" Mar 2 13:18:50.453160 containerd[1823]: time="2026-03-02T13:18:50.452418358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:50.455657 containerd[1823]: time="2026-03-02T13:18:50.455620517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.3: active requests=0, bytes read=33841852" Mar 2 13:18:50.459805 containerd[1823]: time="2026-03-02T13:18:50.459728716Z" level=info msg="ImageCreate event name:\"sha256:d28a261c14ff1c1c526940695055ffc414471b39d275a706eac99ccbbd5fdc62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:50.466507 containerd[1823]: time="2026-03-02T13:18:50.466470353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:50.467711 containerd[1823]: time="2026-03-02T13:18:50.467280952Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.3\" with image id \"sha256:d28a261c14ff1c1c526940695055ffc414471b39d275a706eac99ccbbd5fdc62\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\", size \"33841706\" in 2.410552276s" Mar 2 13:18:50.467711 containerd[1823]: time="2026-03-02T13:18:50.467316472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\" returns image reference 
\"sha256:d28a261c14ff1c1c526940695055ffc414471b39d275a706eac99ccbbd5fdc62\"" Mar 2 13:18:50.468732 containerd[1823]: time="2026-03-02T13:18:50.468284992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\"" Mar 2 13:18:50.484047 containerd[1823]: time="2026-03-02T13:18:50.484006586Z" level=info msg="CreateContainer within sandbox \"c70e78a6ee5913739ed6427a79d7f59256ee3e6c0fe24544913ba814bfa806ed\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 2 13:18:50.527890 containerd[1823]: time="2026-03-02T13:18:50.527847168Z" level=info msg="CreateContainer within sandbox \"c70e78a6ee5913739ed6427a79d7f59256ee3e6c0fe24544913ba814bfa806ed\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e4ca386ce0a7b421b5b98850561ade0a4cbbf18ff81ab0548e6ce921690847fe\"" Mar 2 13:18:50.528720 containerd[1823]: time="2026-03-02T13:18:50.528685528Z" level=info msg="StartContainer for \"e4ca386ce0a7b421b5b98850561ade0a4cbbf18ff81ab0548e6ce921690847fe\"" Mar 2 13:18:50.589446 containerd[1823]: time="2026-03-02T13:18:50.589286064Z" level=info msg="StartContainer for \"e4ca386ce0a7b421b5b98850561ade0a4cbbf18ff81ab0548e6ce921690847fe\" returns successfully" Mar 2 13:18:50.949356 kubelet[3331]: E0302 13:18:50.949248 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.949356 kubelet[3331]: W0302 13:18:50.949275 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.949356 kubelet[3331]: E0302 13:18:50.949297 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:50.949988 kubelet[3331]: E0302 13:18:50.949492 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.949988 kubelet[3331]: W0302 13:18:50.949502 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.949988 kubelet[3331]: E0302 13:18:50.949539 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:50.949988 kubelet[3331]: E0302 13:18:50.949690 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.949988 kubelet[3331]: W0302 13:18:50.949699 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.949988 kubelet[3331]: E0302 13:18:50.949708 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:50.950164 kubelet[3331]: E0302 13:18:50.950151 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.950205 kubelet[3331]: W0302 13:18:50.950165 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.950205 kubelet[3331]: E0302 13:18:50.950174 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:50.950416 kubelet[3331]: E0302 13:18:50.950377 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.950416 kubelet[3331]: W0302 13:18:50.950388 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.950416 kubelet[3331]: E0302 13:18:50.950399 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:50.950652 kubelet[3331]: E0302 13:18:50.950640 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.950652 kubelet[3331]: W0302 13:18:50.950650 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.950719 kubelet[3331]: E0302 13:18:50.950658 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:50.950810 kubelet[3331]: E0302 13:18:50.950799 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.950810 kubelet[3331]: W0302 13:18:50.950809 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.950965 kubelet[3331]: E0302 13:18:50.950817 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:50.951044 kubelet[3331]: E0302 13:18:50.951032 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.951044 kubelet[3331]: W0302 13:18:50.951042 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.951124 kubelet[3331]: E0302 13:18:50.951050 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:50.951209 kubelet[3331]: E0302 13:18:50.951198 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.951209 kubelet[3331]: W0302 13:18:50.951208 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.951297 kubelet[3331]: E0302 13:18:50.951234 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:50.951377 kubelet[3331]: E0302 13:18:50.951366 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.951377 kubelet[3331]: W0302 13:18:50.951376 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.951465 kubelet[3331]: E0302 13:18:50.951385 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:50.951602 kubelet[3331]: E0302 13:18:50.951590 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.951641 kubelet[3331]: W0302 13:18:50.951600 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.951641 kubelet[3331]: E0302 13:18:50.951626 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:50.951788 kubelet[3331]: E0302 13:18:50.951776 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.951826 kubelet[3331]: W0302 13:18:50.951787 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.951826 kubelet[3331]: E0302 13:18:50.951798 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:50.951997 kubelet[3331]: E0302 13:18:50.951985 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.952047 kubelet[3331]: W0302 13:18:50.952000 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.952047 kubelet[3331]: E0302 13:18:50.952008 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:50.952158 kubelet[3331]: E0302 13:18:50.952147 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.952158 kubelet[3331]: W0302 13:18:50.952157 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.952241 kubelet[3331]: E0302 13:18:50.952166 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:50.952336 kubelet[3331]: E0302 13:18:50.952325 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.952336 kubelet[3331]: W0302 13:18:50.952336 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.952403 kubelet[3331]: E0302 13:18:50.952345 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:50.984951 kubelet[3331]: E0302 13:18:50.984814 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.984951 kubelet[3331]: W0302 13:18:50.984834 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.984951 kubelet[3331]: E0302 13:18:50.984852 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:50.985244 kubelet[3331]: E0302 13:18:50.985134 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.985244 kubelet[3331]: W0302 13:18:50.985146 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.985244 kubelet[3331]: E0302 13:18:50.985156 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:50.986014 kubelet[3331]: E0302 13:18:50.985717 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.986014 kubelet[3331]: W0302 13:18:50.985731 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.986014 kubelet[3331]: E0302 13:18:50.985743 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:50.986014 kubelet[3331]: E0302 13:18:50.986000 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.986014 kubelet[3331]: W0302 13:18:50.986016 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.986177 kubelet[3331]: E0302 13:18:50.986028 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:50.986712 kubelet[3331]: E0302 13:18:50.986232 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.986712 kubelet[3331]: W0302 13:18:50.986245 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.986712 kubelet[3331]: E0302 13:18:50.986257 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:50.986712 kubelet[3331]: E0302 13:18:50.986467 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.986712 kubelet[3331]: W0302 13:18:50.986478 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.986712 kubelet[3331]: E0302 13:18:50.986489 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:50.986712 kubelet[3331]: E0302 13:18:50.986656 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.987370 kubelet[3331]: W0302 13:18:50.986664 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.987968 kubelet[3331]: E0302 13:18:50.987809 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:50.988850 kubelet[3331]: E0302 13:18:50.988830 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.989293 kubelet[3331]: W0302 13:18:50.989163 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.989293 kubelet[3331]: E0302 13:18:50.989188 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:50.989803 kubelet[3331]: E0302 13:18:50.989696 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.989803 kubelet[3331]: W0302 13:18:50.989708 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.989803 kubelet[3331]: E0302 13:18:50.989733 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:50.990208 kubelet[3331]: E0302 13:18:50.990119 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.990208 kubelet[3331]: W0302 13:18:50.990131 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.990208 kubelet[3331]: E0302 13:18:50.990158 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:50.990633 kubelet[3331]: E0302 13:18:50.990585 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.990633 kubelet[3331]: W0302 13:18:50.990597 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.990633 kubelet[3331]: E0302 13:18:50.990609 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:50.991254 kubelet[3331]: E0302 13:18:50.991043 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.991254 kubelet[3331]: W0302 13:18:50.991058 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.991254 kubelet[3331]: E0302 13:18:50.991075 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:50.991422 kubelet[3331]: E0302 13:18:50.991285 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.991422 kubelet[3331]: W0302 13:18:50.991298 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.991422 kubelet[3331]: E0302 13:18:50.991312 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:50.991498 kubelet[3331]: E0302 13:18:50.991468 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.991498 kubelet[3331]: W0302 13:18:50.991476 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.991498 kubelet[3331]: E0302 13:18:50.991485 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:50.991690 kubelet[3331]: E0302 13:18:50.991676 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.991690 kubelet[3331]: W0302 13:18:50.991689 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.991747 kubelet[3331]: E0302 13:18:50.991698 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:50.992073 kubelet[3331]: E0302 13:18:50.991971 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.992073 kubelet[3331]: W0302 13:18:50.991985 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.992073 kubelet[3331]: E0302 13:18:50.991997 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:50.992479 kubelet[3331]: E0302 13:18:50.992467 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.992658 kubelet[3331]: W0302 13:18:50.992530 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.992658 kubelet[3331]: E0302 13:18:50.992547 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:50.992796 kubelet[3331]: E0302 13:18:50.992785 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:50.992876 kubelet[3331]: W0302 13:18:50.992840 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:50.992876 kubelet[3331]: E0302 13:18:50.992855 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:51.744835 kubelet[3331]: E0302 13:18:51.744791 3331 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7z7n" podUID="d18edb38-1e4f-4532-a511-77bd2348b866" Mar 2 13:18:51.893746 kubelet[3331]: I0302 13:18:51.893340 3331 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:18:51.959587 kubelet[3331]: E0302 13:18:51.959552 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:51.959587 kubelet[3331]: W0302 13:18:51.959578 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:51.960001 kubelet[3331]: E0302 13:18:51.959598 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:51.960001 kubelet[3331]: E0302 13:18:51.959781 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:51.960001 kubelet[3331]: W0302 13:18:51.959789 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:51.960001 kubelet[3331]: E0302 13:18:51.959799 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:51.960342 kubelet[3331]: E0302 13:18:51.960326 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:51.960342 kubelet[3331]: W0302 13:18:51.960341 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:51.960407 kubelet[3331]: E0302 13:18:51.960353 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:51.960578 kubelet[3331]: E0302 13:18:51.960564 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:51.960578 kubelet[3331]: W0302 13:18:51.960576 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:51.960641 kubelet[3331]: E0302 13:18:51.960588 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:51.960808 kubelet[3331]: E0302 13:18:51.960794 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:51.960808 kubelet[3331]: W0302 13:18:51.960805 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:51.960867 kubelet[3331]: E0302 13:18:51.960814 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:51.960993 kubelet[3331]: E0302 13:18:51.960979 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:51.960993 kubelet[3331]: W0302 13:18:51.960990 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:51.961037 kubelet[3331]: E0302 13:18:51.961000 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:51.961174 kubelet[3331]: E0302 13:18:51.961161 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:51.961174 kubelet[3331]: W0302 13:18:51.961171 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:51.962148 kubelet[3331]: E0302 13:18:51.961180 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:51.962148 kubelet[3331]: E0302 13:18:51.961354 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:51.962148 kubelet[3331]: W0302 13:18:51.961384 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:51.962148 kubelet[3331]: E0302 13:18:51.961395 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:18:52.001545 kubelet[3331]: E0302 13:18:52.001151 3331 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:18:52.001545 kubelet[3331]: W0302 13:18:52.001159 3331 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:18:52.001545 kubelet[3331]: E0302 13:18:52.001169 3331 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:18:52.053590 containerd[1823]: time="2026-03-02T13:18:52.053523958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:52.056343 containerd[1823]: time="2026-03-02T13:18:52.056311117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3: active requests=0, bytes read=4456989" Mar 2 13:18:52.060188 containerd[1823]: time="2026-03-02T13:18:52.060156036Z" level=info msg="ImageCreate event name:\"sha256:3c477f840adeca332cbee81ef65da50ec7be99ded887a8de75d5cf25b896d6a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:52.065581 containerd[1823]: time="2026-03-02T13:18:52.065548593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:52.066345 containerd[1823]: time="2026-03-02T13:18:52.066316193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" with image id \"sha256:3c477f840adeca332cbee81ef65da50ec7be99ded887a8de75d5cf25b896d6a9\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\", size \"5854474\" in 1.597999841s" Mar 2 13:18:52.066397 containerd[1823]: time="2026-03-02T13:18:52.066350273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" returns image reference \"sha256:3c477f840adeca332cbee81ef65da50ec7be99ded887a8de75d5cf25b896d6a9\"" Mar 2 13:18:52.073874 containerd[1823]: time="2026-03-02T13:18:52.073720510Z" level=info msg="CreateContainer within sandbox \"afaa36dcc1b8f58a809c08eda30e20b41c3e8637ea5716b0219789ee57d84b78\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 2 13:18:52.120382 containerd[1823]: time="2026-03-02T13:18:52.120335212Z" level=info msg="CreateContainer within sandbox \"afaa36dcc1b8f58a809c08eda30e20b41c3e8637ea5716b0219789ee57d84b78\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cd047fc5b4588fe6cb3f7785e31d9e6684096873e1809e9f85de27bc8feeb187\"" Mar 2 13:18:52.121451 containerd[1823]: time="2026-03-02T13:18:52.121212291Z" level=info msg="StartContainer for \"cd047fc5b4588fe6cb3f7785e31d9e6684096873e1809e9f85de27bc8feeb187\"" Mar 2 13:18:52.177684 containerd[1823]: time="2026-03-02T13:18:52.177573269Z" level=info msg="StartContainer for \"cd047fc5b4588fe6cb3f7785e31d9e6684096873e1809e9f85de27bc8feeb187\" returns successfully" Mar 2 13:18:52.204736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd047fc5b4588fe6cb3f7785e31d9e6684096873e1809e9f85de27bc8feeb187-rootfs.mount: Deactivated successfully. 
Mar 2 13:18:52.917536 kubelet[3331]: I0302 13:18:52.917463 3331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59b849b855-xfp8h" podStartSLOduration=3.505216178 podStartE2EDuration="5.917439293s" podCreationTimestamp="2026-03-02 13:18:47 +0000 UTC" firstStartedPulling="2026-03-02 13:18:48.055953597 +0000 UTC m=+29.402242707" lastFinishedPulling="2026-03-02 13:18:50.468176752 +0000 UTC m=+31.814465822" observedRunningTime="2026-03-02 13:18:50.905784737 +0000 UTC m=+32.252073847" watchObservedRunningTime="2026-03-02 13:18:52.917439293 +0000 UTC m=+34.263728403" Mar 2 13:18:53.331492 containerd[1823]: time="2026-03-02T13:18:53.331180567Z" level=info msg="shim disconnected" id=cd047fc5b4588fe6cb3f7785e31d9e6684096873e1809e9f85de27bc8feeb187 namespace=k8s.io Mar 2 13:18:53.331492 containerd[1823]: time="2026-03-02T13:18:53.331252367Z" level=warning msg="cleaning up after shim disconnected" id=cd047fc5b4588fe6cb3f7785e31d9e6684096873e1809e9f85de27bc8feeb187 namespace=k8s.io Mar 2 13:18:53.331492 containerd[1823]: time="2026-03-02T13:18:53.331261847Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:18:53.745250 kubelet[3331]: E0302 13:18:53.744516 3331 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7z7n" podUID="d18edb38-1e4f-4532-a511-77bd2348b866" Mar 2 13:18:53.902738 containerd[1823]: time="2026-03-02T13:18:53.902275459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\"" Mar 2 13:18:55.744688 kubelet[3331]: E0302 13:18:55.744269 3331 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-j7z7n" podUID="d18edb38-1e4f-4532-a511-77bd2348b866" Mar 2 13:18:57.745060 kubelet[3331]: E0302 13:18:57.744985 3331 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7z7n" podUID="d18edb38-1e4f-4532-a511-77bd2348b866" Mar 2 13:18:59.586134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount832535238.mount: Deactivated successfully. Mar 2 13:18:59.696655 containerd[1823]: time="2026-03-02T13:18:59.696600882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:59.700548 containerd[1823]: time="2026-03-02T13:18:59.700346121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.3: active requests=0, bytes read=153583198" Mar 2 13:18:59.703756 containerd[1823]: time="2026-03-02T13:18:59.703522560Z" level=info msg="ImageCreate event name:\"sha256:98788f64d6cabef718c2551eb8b42ec11d1bfaa912cfeb4f6bf240f79159575d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:59.708940 containerd[1823]: time="2026-03-02T13:18:59.708888837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:18:59.709787 containerd[1823]: time="2026-03-02T13:18:59.709620717Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.3\" with image id \"sha256:98788f64d6cabef718c2551eb8b42ec11d1bfaa912cfeb4f6bf240f79159575d\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\", size \"153583060\" in 5.807302778s" Mar 2 13:18:59.709787 
containerd[1823]: time="2026-03-02T13:18:59.709664597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\" returns image reference \"sha256:98788f64d6cabef718c2551eb8b42ec11d1bfaa912cfeb4f6bf240f79159575d\"" Mar 2 13:18:59.718344 containerd[1823]: time="2026-03-02T13:18:59.718170794Z" level=info msg="CreateContainer within sandbox \"afaa36dcc1b8f58a809c08eda30e20b41c3e8637ea5716b0219789ee57d84b78\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 2 13:18:59.744588 kubelet[3331]: E0302 13:18:59.744140 3331 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7z7n" podUID="d18edb38-1e4f-4532-a511-77bd2348b866" Mar 2 13:18:59.761353 containerd[1823]: time="2026-03-02T13:18:59.761304137Z" level=info msg="CreateContainer within sandbox \"afaa36dcc1b8f58a809c08eda30e20b41c3e8637ea5716b0219789ee57d84b78\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"026505c263d13b9d41d791c09449424b8e837627088215f06f06b15b673009ee\"" Mar 2 13:18:59.762954 containerd[1823]: time="2026-03-02T13:18:59.762085536Z" level=info msg="StartContainer for \"026505c263d13b9d41d791c09449424b8e837627088215f06f06b15b673009ee\"" Mar 2 13:18:59.820730 containerd[1823]: time="2026-03-02T13:18:59.820604553Z" level=info msg="StartContainer for \"026505c263d13b9d41d791c09449424b8e837627088215f06f06b15b673009ee\" returns successfully" Mar 2 13:19:00.586661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-026505c263d13b9d41d791c09449424b8e837627088215f06f06b15b673009ee-rootfs.mount: Deactivated successfully. 
Mar 2 13:19:01.504379 containerd[1823]: time="2026-03-02T13:19:01.504265046Z" level=info msg="shim disconnected" id=026505c263d13b9d41d791c09449424b8e837627088215f06f06b15b673009ee namespace=k8s.io Mar 2 13:19:01.504379 containerd[1823]: time="2026-03-02T13:19:01.504369526Z" level=warning msg="cleaning up after shim disconnected" id=026505c263d13b9d41d791c09449424b8e837627088215f06f06b15b673009ee namespace=k8s.io Mar 2 13:19:01.504379 containerd[1823]: time="2026-03-02T13:19:01.504379366Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:19:01.744950 kubelet[3331]: E0302 13:19:01.744893 3331 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7z7n" podUID="d18edb38-1e4f-4532-a511-77bd2348b866" Mar 2 13:19:01.918350 containerd[1823]: time="2026-03-02T13:19:01.918307242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\"" Mar 2 13:19:03.744254 kubelet[3331]: E0302 13:19:03.744190 3331 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7z7n" podUID="d18edb38-1e4f-4532-a511-77bd2348b866" Mar 2 13:19:04.473100 kubelet[3331]: I0302 13:19:04.473053 3331 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:19:05.152541 containerd[1823]: time="2026-03-02T13:19:05.152483778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:05.156113 containerd[1823]: time="2026-03-02T13:19:05.156072296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.3: active requests=0, bytes read=65998037" 
Mar 2 13:19:05.160245 containerd[1823]: time="2026-03-02T13:19:05.159717935Z" level=info msg="ImageCreate event name:\"sha256:2aba526dc0b0f95b83ab38a811f41d3daf3ec5ae8876bf273b65b9f142277231\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:05.165509 containerd[1823]: time="2026-03-02T13:19:05.165472692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:05.166317 containerd[1823]: time="2026-03-02T13:19:05.166286692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.3\" with image id \"sha256:2aba526dc0b0f95b83ab38a811f41d3daf3ec5ae8876bf273b65b9f142277231\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\", size \"67395562\" in 3.24793765s" Mar 2 13:19:05.166357 containerd[1823]: time="2026-03-02T13:19:05.166319692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\" returns image reference \"sha256:2aba526dc0b0f95b83ab38a811f41d3daf3ec5ae8876bf273b65b9f142277231\"" Mar 2 13:19:05.174830 containerd[1823]: time="2026-03-02T13:19:05.174793489Z" level=info msg="CreateContainer within sandbox \"afaa36dcc1b8f58a809c08eda30e20b41c3e8637ea5716b0219789ee57d84b78\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 2 13:19:05.202160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3467030722.mount: Deactivated successfully. 
Mar 2 13:19:05.215905 containerd[1823]: time="2026-03-02T13:19:05.215780273Z" level=info msg="CreateContainer within sandbox \"afaa36dcc1b8f58a809c08eda30e20b41c3e8637ea5716b0219789ee57d84b78\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"366ce88d5a936344791f6d81d73b5518ac6001f8379ea40b16f3edf360137377\"" Mar 2 13:19:05.217916 containerd[1823]: time="2026-03-02T13:19:05.217437432Z" level=info msg="StartContainer for \"366ce88d5a936344791f6d81d73b5518ac6001f8379ea40b16f3edf360137377\"" Mar 2 13:19:05.276903 containerd[1823]: time="2026-03-02T13:19:05.276858529Z" level=info msg="StartContainer for \"366ce88d5a936344791f6d81d73b5518ac6001f8379ea40b16f3edf360137377\" returns successfully" Mar 2 13:19:05.744400 kubelet[3331]: E0302 13:19:05.744349 3331 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7z7n" podUID="d18edb38-1e4f-4532-a511-77bd2348b866" Mar 2 13:19:07.536084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-366ce88d5a936344791f6d81d73b5518ac6001f8379ea40b16f3edf360137377-rootfs.mount: Deactivated successfully. 
Mar 2 13:19:07.542745 containerd[1823]: time="2026-03-02T13:19:07.542511205Z" level=info msg="shim disconnected" id=366ce88d5a936344791f6d81d73b5518ac6001f8379ea40b16f3edf360137377 namespace=k8s.io Mar 2 13:19:07.542745 containerd[1823]: time="2026-03-02T13:19:07.542577125Z" level=warning msg="cleaning up after shim disconnected" id=366ce88d5a936344791f6d81d73b5518ac6001f8379ea40b16f3edf360137377 namespace=k8s.io Mar 2 13:19:07.542745 containerd[1823]: time="2026-03-02T13:19:07.542588885Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:19:07.597287 kubelet[3331]: I0302 13:19:07.595727 3331 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 2 13:19:07.695613 kubelet[3331]: I0302 13:19:07.695570 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlmhx\" (UniqueName: \"kubernetes.io/projected/50fcad6a-99ef-4efe-a1fd-9a8fdec4c894-kube-api-access-rlmhx\") pod \"coredns-674b8bbfcf-wpl7z\" (UID: \"50fcad6a-99ef-4efe-a1fd-9a8fdec4c894\") " pod="kube-system/coredns-674b8bbfcf-wpl7z" Mar 2 13:19:07.695613 kubelet[3331]: I0302 13:19:07.695612 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fa5ad6f-f895-405a-a8a9-e8a2755d3922-tigera-ca-bundle\") pod \"calico-kube-controllers-79f5b9f4dd-rxkxx\" (UID: \"9fa5ad6f-f895-405a-a8a9-e8a2755d3922\") " pod="calico-system/calico-kube-controllers-79f5b9f4dd-rxkxx" Mar 2 13:19:07.695775 kubelet[3331]: I0302 13:19:07.695632 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50fcad6a-99ef-4efe-a1fd-9a8fdec4c894-config-volume\") pod \"coredns-674b8bbfcf-wpl7z\" (UID: \"50fcad6a-99ef-4efe-a1fd-9a8fdec4c894\") " pod="kube-system/coredns-674b8bbfcf-wpl7z" Mar 2 13:19:07.695775 kubelet[3331]: I0302 13:19:07.695655 
3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4m4c\" (UniqueName: \"kubernetes.io/projected/9fa5ad6f-f895-405a-a8a9-e8a2755d3922-kube-api-access-x4m4c\") pod \"calico-kube-controllers-79f5b9f4dd-rxkxx\" (UID: \"9fa5ad6f-f895-405a-a8a9-e8a2755d3922\") " pod="calico-system/calico-kube-controllers-79f5b9f4dd-rxkxx" Mar 2 13:19:07.747204 containerd[1823]: time="2026-03-02T13:19:07.747130285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j7z7n,Uid:d18edb38-1e4f-4532-a511-77bd2348b866,Namespace:calico-system,Attempt:0,}" Mar 2 13:19:07.796868 kubelet[3331]: I0302 13:19:07.796613 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7de56beb-b944-464f-b0da-30cd4d29203d-goldmane-ca-bundle\") pod \"goldmane-9566f57b5-gmf2s\" (UID: \"7de56beb-b944-464f-b0da-30cd4d29203d\") " pod="calico-system/goldmane-9566f57b5-gmf2s" Mar 2 13:19:07.796868 kubelet[3331]: I0302 13:19:07.796659 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c3f5c3e-eee1-49dd-bb69-50cbab1da62e-config-volume\") pod \"coredns-674b8bbfcf-8bwzw\" (UID: \"8c3f5c3e-eee1-49dd-bb69-50cbab1da62e\") " pod="kube-system/coredns-674b8bbfcf-8bwzw" Mar 2 13:19:07.796868 kubelet[3331]: I0302 13:19:07.796677 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc9zm\" (UniqueName: \"kubernetes.io/projected/8c3f5c3e-eee1-49dd-bb69-50cbab1da62e-kube-api-access-jc9zm\") pod \"coredns-674b8bbfcf-8bwzw\" (UID: \"8c3f5c3e-eee1-49dd-bb69-50cbab1da62e\") " pod="kube-system/coredns-674b8bbfcf-8bwzw" Mar 2 13:19:07.796868 kubelet[3331]: I0302 13:19:07.796693 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e78720ad-3c55-4873-8609-03d4fed7bb97-calico-apiserver-certs\") pod \"calico-apiserver-5868557fd9-rd986\" (UID: \"e78720ad-3c55-4873-8609-03d4fed7bb97\") " pod="calico-system/calico-apiserver-5868557fd9-rd986" Mar 2 13:19:07.796868 kubelet[3331]: I0302 13:19:07.796723 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9hvr\" (UniqueName: \"kubernetes.io/projected/cfba4bfd-025c-49ed-8a95-a9de57a117e4-kube-api-access-q9hvr\") pod \"calico-apiserver-5868557fd9-5p6xj\" (UID: \"cfba4bfd-025c-49ed-8a95-a9de57a117e4\") " pod="calico-system/calico-apiserver-5868557fd9-5p6xj" Mar 2 13:19:07.797094 kubelet[3331]: I0302 13:19:07.796739 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mptx\" (UniqueName: \"kubernetes.io/projected/e78720ad-3c55-4873-8609-03d4fed7bb97-kube-api-access-6mptx\") pod \"calico-apiserver-5868557fd9-rd986\" (UID: \"e78720ad-3c55-4873-8609-03d4fed7bb97\") " pod="calico-system/calico-apiserver-5868557fd9-rd986" Mar 2 13:19:07.799118 kubelet[3331]: I0302 13:19:07.796766 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlw4h\" (UniqueName: \"kubernetes.io/projected/4184bf88-6be3-4dd6-80a8-377006297c7a-kube-api-access-hlw4h\") pod \"whisker-8db54fd4c-5qjmq\" (UID: \"4184bf88-6be3-4dd6-80a8-377006297c7a\") " pod="calico-system/whisker-8db54fd4c-5qjmq" Mar 2 13:19:07.799118 kubelet[3331]: I0302 13:19:07.797525 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7de56beb-b944-464f-b0da-30cd4d29203d-goldmane-key-pair\") pod \"goldmane-9566f57b5-gmf2s\" (UID: \"7de56beb-b944-464f-b0da-30cd4d29203d\") " pod="calico-system/goldmane-9566f57b5-gmf2s" Mar 2 13:19:07.799118 kubelet[3331]: I0302 
13:19:07.797581 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7de56beb-b944-464f-b0da-30cd4d29203d-config\") pod \"goldmane-9566f57b5-gmf2s\" (UID: \"7de56beb-b944-464f-b0da-30cd4d29203d\") " pod="calico-system/goldmane-9566f57b5-gmf2s" Mar 2 13:19:07.799118 kubelet[3331]: I0302 13:19:07.798289 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4184bf88-6be3-4dd6-80a8-377006297c7a-nginx-config\") pod \"whisker-8db54fd4c-5qjmq\" (UID: \"4184bf88-6be3-4dd6-80a8-377006297c7a\") " pod="calico-system/whisker-8db54fd4c-5qjmq" Mar 2 13:19:07.799118 kubelet[3331]: I0302 13:19:07.798464 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4184bf88-6be3-4dd6-80a8-377006297c7a-whisker-backend-key-pair\") pod \"whisker-8db54fd4c-5qjmq\" (UID: \"4184bf88-6be3-4dd6-80a8-377006297c7a\") " pod="calico-system/whisker-8db54fd4c-5qjmq" Mar 2 13:19:07.799363 kubelet[3331]: I0302 13:19:07.798508 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cfba4bfd-025c-49ed-8a95-a9de57a117e4-calico-apiserver-certs\") pod \"calico-apiserver-5868557fd9-5p6xj\" (UID: \"cfba4bfd-025c-49ed-8a95-a9de57a117e4\") " pod="calico-system/calico-apiserver-5868557fd9-5p6xj" Mar 2 13:19:07.799363 kubelet[3331]: I0302 13:19:07.798704 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4184bf88-6be3-4dd6-80a8-377006297c7a-whisker-ca-bundle\") pod \"whisker-8db54fd4c-5qjmq\" (UID: \"4184bf88-6be3-4dd6-80a8-377006297c7a\") " pod="calico-system/whisker-8db54fd4c-5qjmq" Mar 2 
13:19:07.799363 kubelet[3331]: I0302 13:19:07.798732 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkh8m\" (UniqueName: \"kubernetes.io/projected/7de56beb-b944-464f-b0da-30cd4d29203d-kube-api-access-lkh8m\") pod \"goldmane-9566f57b5-gmf2s\" (UID: \"7de56beb-b944-464f-b0da-30cd4d29203d\") " pod="calico-system/goldmane-9566f57b5-gmf2s" Mar 2 13:19:07.864929 containerd[1823]: time="2026-03-02T13:19:07.864883159Z" level=error msg="Failed to destroy network for sandbox \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:19:07.865501 containerd[1823]: time="2026-03-02T13:19:07.865361278Z" level=error msg="encountered an error cleaning up failed sandbox \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:19:07.865501 containerd[1823]: time="2026-03-02T13:19:07.865410798Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j7z7n,Uid:d18edb38-1e4f-4532-a511-77bd2348b866,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:19:07.865694 kubelet[3331]: E0302 13:19:07.865639 3331 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:19:07.865743 kubelet[3331]: E0302 13:19:07.865718 3331 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j7z7n" Mar 2 13:19:07.865790 kubelet[3331]: E0302 13:19:07.865739 3331 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j7z7n" Mar 2 13:19:07.865819 kubelet[3331]: E0302 13:19:07.865796 3331 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j7z7n_calico-system(d18edb38-1e4f-4532-a511-77bd2348b866)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j7z7n_calico-system(d18edb38-1e4f-4532-a511-77bd2348b866)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j7z7n" 
podUID="d18edb38-1e4f-4532-a511-77bd2348b866" Mar 2 13:19:07.931598 kubelet[3331]: I0302 13:19:07.931559 3331 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Mar 2 13:19:07.934618 containerd[1823]: time="2026-03-02T13:19:07.933747572Z" level=info msg="StopPodSandbox for \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\"" Mar 2 13:19:07.934618 containerd[1823]: time="2026-03-02T13:19:07.933922772Z" level=info msg="Ensure that sandbox ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43 in task-service has been cleanup successfully" Mar 2 13:19:07.955317 containerd[1823]: time="2026-03-02T13:19:07.954789204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79f5b9f4dd-rxkxx,Uid:9fa5ad6f-f895-405a-a8a9-e8a2755d3922,Namespace:calico-system,Attempt:0,}" Mar 2 13:19:07.956059 containerd[1823]: time="2026-03-02T13:19:07.956028763Z" level=info msg="CreateContainer within sandbox \"afaa36dcc1b8f58a809c08eda30e20b41c3e8637ea5716b0219789ee57d84b78\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 2 13:19:07.965002 containerd[1823]: time="2026-03-02T13:19:07.964945200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wpl7z,Uid:50fcad6a-99ef-4efe-a1fd-9a8fdec4c894,Namespace:kube-system,Attempt:0,}" Mar 2 13:19:07.965539 containerd[1823]: time="2026-03-02T13:19:07.965187640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8bwzw,Uid:8c3f5c3e-eee1-49dd-bb69-50cbab1da62e,Namespace:kube-system,Attempt:0,}" Mar 2 13:19:07.975807 containerd[1823]: time="2026-03-02T13:19:07.975538995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5868557fd9-rd986,Uid:e78720ad-3c55-4873-8609-03d4fed7bb97,Namespace:calico-system,Attempt:0,}" Mar 2 13:19:07.978078 containerd[1823]: time="2026-03-02T13:19:07.977011195Z" level=error 
msg="StopPodSandbox for \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\" failed" error="failed to destroy network for sandbox \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:19:07.978143 kubelet[3331]: E0302 13:19:07.977195 3331 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Mar 2 13:19:07.978143 kubelet[3331]: E0302 13:19:07.977270 3331 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43"} Mar 2 13:19:07.978143 kubelet[3331]: E0302 13:19:07.977318 3331 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d18edb38-1e4f-4532-a511-77bd2348b866\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 13:19:07.978143 kubelet[3331]: E0302 13:19:07.977341 3331 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d18edb38-1e4f-4532-a511-77bd2348b866\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j7z7n" podUID="d18edb38-1e4f-4532-a511-77bd2348b866" Mar 2 13:19:07.981185 containerd[1823]: time="2026-03-02T13:19:07.980871753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8db54fd4c-5qjmq,Uid:4184bf88-6be3-4dd6-80a8-377006297c7a,Namespace:calico-system,Attempt:0,}" Mar 2 13:19:07.996315 containerd[1823]: time="2026-03-02T13:19:07.995974308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9566f57b5-gmf2s,Uid:7de56beb-b944-464f-b0da-30cd4d29203d,Namespace:calico-system,Attempt:0,}" Mar 2 13:19:07.998227 containerd[1823]: time="2026-03-02T13:19:07.998185987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5868557fd9-5p6xj,Uid:cfba4bfd-025c-49ed-8a95-a9de57a117e4,Namespace:calico-system,Attempt:0,}" Mar 2 13:19:08.121960 containerd[1823]: time="2026-03-02T13:19:08.121912778Z" level=info msg="CreateContainer within sandbox \"afaa36dcc1b8f58a809c08eda30e20b41c3e8637ea5716b0219789ee57d84b78\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c75793ac000b38d2e98fd2c52d8914af00b081ea860301b04a0895a9815b7012\"" Mar 2 13:19:08.122747 containerd[1823]: time="2026-03-02T13:19:08.122619218Z" level=info msg="StartContainer for \"c75793ac000b38d2e98fd2c52d8914af00b081ea860301b04a0895a9815b7012\"" Mar 2 13:19:08.186483 containerd[1823]: time="2026-03-02T13:19:08.186340513Z" level=info msg="StartContainer for \"c75793ac000b38d2e98fd2c52d8914af00b081ea860301b04a0895a9815b7012\" returns successfully" Mar 2 13:19:08.327008 containerd[1823]: time="2026-03-02T13:19:08.326801498Z" level=error msg="Failed to destroy network for sandbox 
\"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:19:08.330771 containerd[1823]: time="2026-03-02T13:19:08.329722337Z" level=error msg="encountered an error cleaning up failed sandbox \"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:19:08.330771 containerd[1823]: time="2026-03-02T13:19:08.329810337Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79f5b9f4dd-rxkxx,Uid:9fa5ad6f-f895-405a-a8a9-e8a2755d3922,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:19:08.330962 kubelet[3331]: E0302 13:19:08.330058 3331 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:19:08.330962 kubelet[3331]: E0302 13:19:08.330114 3331 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79f5b9f4dd-rxkxx" Mar 2 13:19:08.330962 kubelet[3331]: E0302 13:19:08.330137 3331 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79f5b9f4dd-rxkxx" Mar 2 13:19:08.331423 kubelet[3331]: E0302 13:19:08.330188 3331 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79f5b9f4dd-rxkxx_calico-system(9fa5ad6f-f895-405a-a8a9-e8a2755d3922)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79f5b9f4dd-rxkxx_calico-system(9fa5ad6f-f895-405a-a8a9-e8a2755d3922)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79f5b9f4dd-rxkxx" podUID="9fa5ad6f-f895-405a-a8a9-e8a2755d3922" Mar 2 13:19:08.571890 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43-shm.mount: Deactivated successfully. 
Mar 2 13:19:08.802947 systemd-networkd[1401]: calif52d6cf9e3f: Link UP Mar 2 13:19:08.803847 systemd-networkd[1401]: calif52d6cf9e3f: Gained carrier Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.427 [ERROR][4321] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.473 [INFO][4321] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--rd986-eth0 calico-apiserver-5868557fd9- calico-system e78720ad-3c55-4873-8609-03d4fed7bb97 900 0 2026-03-02 13:18:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5868557fd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.101-5317e0e64c calico-apiserver-5868557fd9-rd986 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif52d6cf9e3f [] [] }} ContainerID="aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" Namespace="calico-system" Pod="calico-apiserver-5868557fd9-rd986" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--rd986-" Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.473 [INFO][4321] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" Namespace="calico-system" Pod="calico-apiserver-5868557fd9-rd986" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--rd986-eth0" Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.635 [INFO][4394] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" HandleID="k8s-pod-network.aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--rd986-eth0" Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.703 [INFO][4394] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" HandleID="k8s-pod-network.aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--rd986-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005dc360), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-5317e0e64c", "pod":"calico-apiserver-5868557fd9-rd986", "timestamp":"2026-03-02 13:19:08.635014338 +0000 UTC"}, Hostname:"ci-4081.3.101-5317e0e64c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001de160)} Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.703 [INFO][4394] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.703 [INFO][4394] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.703 [INFO][4394] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-5317e0e64c' Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.723 [INFO][4394] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.730 [INFO][4394] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.748 [INFO][4394] ipam/ipam.go 526: Trying affinity for 192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.753 [INFO][4394] ipam/ipam.go 160: Attempting to load block cidr=192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.755 [INFO][4394] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.755 [INFO][4394] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.757 [INFO][4394] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001 Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.761 [INFO][4394] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.771 [INFO][4394] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.54.129/26] block=192.168.54.128/26 handle="k8s-pod-network.aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.771 [INFO][4394] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.54.129/26] handle="k8s-pod-network.aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.771 [INFO][4394] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:19:08.833118 containerd[1823]: 2026-03-02 13:19:08.771 [INFO][4394] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.54.129/26] IPv6=[] ContainerID="aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" HandleID="k8s-pod-network.aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--rd986-eth0" Mar 2 13:19:08.836109 containerd[1823]: 2026-03-02 13:19:08.775 [INFO][4321] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" Namespace="calico-system" Pod="calico-apiserver-5868557fd9-rd986" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--rd986-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--rd986-eth0", GenerateName:"calico-apiserver-5868557fd9-", Namespace:"calico-system", SelfLink:"", UID:"e78720ad-3c55-4873-8609-03d4fed7bb97", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5868557fd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"", Pod:"calico-apiserver-5868557fd9-rd986", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif52d6cf9e3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:19:08.836109 containerd[1823]: 2026-03-02 13:19:08.776 [INFO][4321] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.129/32] ContainerID="aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" Namespace="calico-system" Pod="calico-apiserver-5868557fd9-rd986" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--rd986-eth0" Mar 2 13:19:08.836109 containerd[1823]: 2026-03-02 13:19:08.776 [INFO][4321] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif52d6cf9e3f ContainerID="aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" Namespace="calico-system" Pod="calico-apiserver-5868557fd9-rd986" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--rd986-eth0" Mar 2 13:19:08.836109 containerd[1823]: 2026-03-02 13:19:08.805 [INFO][4321] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" Namespace="calico-system" Pod="calico-apiserver-5868557fd9-rd986" 
WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--rd986-eth0" Mar 2 13:19:08.836109 containerd[1823]: 2026-03-02 13:19:08.805 [INFO][4321] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" Namespace="calico-system" Pod="calico-apiserver-5868557fd9-rd986" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--rd986-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--rd986-eth0", GenerateName:"calico-apiserver-5868557fd9-", Namespace:"calico-system", SelfLink:"", UID:"e78720ad-3c55-4873-8609-03d4fed7bb97", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5868557fd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001", Pod:"calico-apiserver-5868557fd9-rd986", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif52d6cf9e3f", MAC:"c2:ce:3e:0b:ab:e0", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:19:08.836109 containerd[1823]: 2026-03-02 13:19:08.829 [INFO][4321] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001" Namespace="calico-system" Pod="calico-apiserver-5868557fd9-rd986" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--rd986-eth0" Mar 2 13:19:08.870632 containerd[1823]: time="2026-03-02T13:19:08.870509766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:19:08.870632 containerd[1823]: time="2026-03-02T13:19:08.870602166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:19:08.871117 containerd[1823]: time="2026-03-02T13:19:08.870882446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:19:08.871117 containerd[1823]: time="2026-03-02T13:19:08.871041406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:19:08.899588 systemd[1]: run-containerd-runc-k8s.io-aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001-runc.9WJGlK.mount: Deactivated successfully. 
Mar 2 13:19:08.905182 systemd-networkd[1401]: cali37c81025048: Link UP Mar 2 13:19:08.905966 systemd-networkd[1401]: cali37c81025048: Gained carrier Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.448 [ERROR][4332] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.471 [INFO][4332] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--5317e0e64c-k8s-goldmane--9566f57b5--gmf2s-eth0 goldmane-9566f57b5- calico-system 7de56beb-b944-464f-b0da-30cd4d29203d 901 0 2026-03-02 13:18:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9566f57b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.101-5317e0e64c goldmane-9566f57b5-gmf2s eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali37c81025048 [] [] }} ContainerID="9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" Namespace="calico-system" Pod="goldmane-9566f57b5-gmf2s" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-goldmane--9566f57b5--gmf2s-" Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.471 [INFO][4332] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" Namespace="calico-system" Pod="goldmane-9566f57b5-gmf2s" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-goldmane--9566f57b5--gmf2s-eth0" Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.676 [INFO][4395] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" 
HandleID="k8s-pod-network.9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" Workload="ci--4081.3.101--5317e0e64c-k8s-goldmane--9566f57b5--gmf2s-eth0" Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.717 [INFO][4395] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" HandleID="k8s-pod-network.9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" Workload="ci--4081.3.101--5317e0e64c-k8s-goldmane--9566f57b5--gmf2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ec4e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-5317e0e64c", "pod":"goldmane-9566f57b5-gmf2s", "timestamp":"2026-03-02 13:19:08.676415282 +0000 UTC"}, Hostname:"ci-4081.3.101-5317e0e64c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000204f20)} Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.717 [INFO][4395] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.771 [INFO][4395] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.771 [INFO][4395] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-5317e0e64c' Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.825 [INFO][4395] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.836 [INFO][4395] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.843 [INFO][4395] ipam/ipam.go 526: Trying affinity for 192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.845 [INFO][4395] ipam/ipam.go 160: Attempting to load block cidr=192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.848 [INFO][4395] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.848 [INFO][4395] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.849 [INFO][4395] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62 Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.864 [INFO][4395] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.880 [INFO][4395] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.54.130/26] block=192.168.54.128/26 handle="k8s-pod-network.9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.880 [INFO][4395] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.54.130/26] handle="k8s-pod-network.9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.881 [INFO][4395] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:19:08.945301 containerd[1823]: 2026-03-02 13:19:08.881 [INFO][4395] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.54.130/26] IPv6=[] ContainerID="9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" HandleID="k8s-pod-network.9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" Workload="ci--4081.3.101--5317e0e64c-k8s-goldmane--9566f57b5--gmf2s-eth0" Mar 2 13:19:08.947179 containerd[1823]: 2026-03-02 13:19:08.885 [INFO][4332] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" Namespace="calico-system" Pod="goldmane-9566f57b5-gmf2s" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-goldmane--9566f57b5--gmf2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-goldmane--9566f57b5--gmf2s-eth0", GenerateName:"goldmane-9566f57b5-", Namespace:"calico-system", SelfLink:"", UID:"7de56beb-b944-464f-b0da-30cd4d29203d", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9566f57b5", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"", Pod:"goldmane-9566f57b5-gmf2s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.54.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali37c81025048", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:19:08.947179 containerd[1823]: 2026-03-02 13:19:08.885 [INFO][4332] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.130/32] ContainerID="9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" Namespace="calico-system" Pod="goldmane-9566f57b5-gmf2s" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-goldmane--9566f57b5--gmf2s-eth0" Mar 2 13:19:08.947179 containerd[1823]: 2026-03-02 13:19:08.885 [INFO][4332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37c81025048 ContainerID="9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" Namespace="calico-system" Pod="goldmane-9566f57b5-gmf2s" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-goldmane--9566f57b5--gmf2s-eth0" Mar 2 13:19:08.947179 containerd[1823]: 2026-03-02 13:19:08.908 [INFO][4332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" Namespace="calico-system" Pod="goldmane-9566f57b5-gmf2s" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-goldmane--9566f57b5--gmf2s-eth0" Mar 2 13:19:08.947179 containerd[1823]: 2026-03-02 13:19:08.912 [INFO][4332] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" Namespace="calico-system" Pod="goldmane-9566f57b5-gmf2s" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-goldmane--9566f57b5--gmf2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-goldmane--9566f57b5--gmf2s-eth0", GenerateName:"goldmane-9566f57b5-", Namespace:"calico-system", SelfLink:"", UID:"7de56beb-b944-464f-b0da-30cd4d29203d", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9566f57b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62", Pod:"goldmane-9566f57b5-gmf2s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.54.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali37c81025048", MAC:"7e:85:a1:b4:c7:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:19:08.947179 containerd[1823]: 2026-03-02 13:19:08.938 [INFO][4332] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62" Namespace="calico-system" 
Pod="goldmane-9566f57b5-gmf2s" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-goldmane--9566f57b5--gmf2s-eth0" Mar 2 13:19:08.949388 kubelet[3331]: I0302 13:19:08.948514 3331 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Mar 2 13:19:08.954275 containerd[1823]: time="2026-03-02T13:19:08.954201653Z" level=info msg="StopPodSandbox for \"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\"" Mar 2 13:19:08.964689 containerd[1823]: time="2026-03-02T13:19:08.964642929Z" level=info msg="Ensure that sandbox 40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c in task-service has been cleanup successfully" Mar 2 13:19:08.969403 containerd[1823]: time="2026-03-02T13:19:08.969286248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5868557fd9-rd986,Uid:e78720ad-3c55-4873-8609-03d4fed7bb97,Namespace:calico-system,Attempt:0,} returns sandbox id \"aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001\"" Mar 2 13:19:08.976478 containerd[1823]: time="2026-03-02T13:19:08.976441485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 13:19:08.998769 kubelet[3331]: I0302 13:19:08.998627 3331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9wr9t" podStartSLOduration=4.973453666 podStartE2EDuration="21.998608116s" podCreationTimestamp="2026-03-02 13:18:47 +0000 UTC" firstStartedPulling="2026-03-02 13:18:48.142132362 +0000 UTC m=+29.488421472" lastFinishedPulling="2026-03-02 13:19:05.167286812 +0000 UTC m=+46.513575922" observedRunningTime="2026-03-02 13:19:08.998063076 +0000 UTC m=+50.344352186" watchObservedRunningTime="2026-03-02 13:19:08.998608116 +0000 UTC m=+50.344897226" Mar 2 13:19:09.008076 containerd[1823]: time="2026-03-02T13:19:09.007773913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:19:09.008076 containerd[1823]: time="2026-03-02T13:19:09.007850032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:19:09.008076 containerd[1823]: time="2026-03-02T13:19:09.007860992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:19:09.008406 containerd[1823]: time="2026-03-02T13:19:09.008084592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:19:09.035913 systemd-networkd[1401]: cali418a81f6b2a: Link UP Mar 2 13:19:09.041447 systemd-networkd[1401]: cali418a81f6b2a: Gained carrier Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:08.415 [ERROR][4299] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:08.479 [INFO][4299] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--wpl7z-eth0 coredns-674b8bbfcf- kube-system 50fcad6a-99ef-4efe-a1fd-9a8fdec4c894 899 0 2026-03-02 13:18:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.101-5317e0e64c coredns-674b8bbfcf-wpl7z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali418a81f6b2a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" Namespace="kube-system" Pod="coredns-674b8bbfcf-wpl7z" 
WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--wpl7z-" Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:08.479 [INFO][4299] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" Namespace="kube-system" Pod="coredns-674b8bbfcf-wpl7z" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--wpl7z-eth0" Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:08.688 [INFO][4406] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" HandleID="k8s-pod-network.a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" Workload="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--wpl7z-eth0" Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:08.718 [INFO][4406] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" HandleID="k8s-pod-network.a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" Workload="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--wpl7z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400036a290), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.101-5317e0e64c", "pod":"coredns-674b8bbfcf-wpl7z", "timestamp":"2026-03-02 13:19:08.688667517 +0000 UTC"}, Hostname:"ci-4081.3.101-5317e0e64c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003c8420)} Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:08.718 [INFO][4406] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:08.881 [INFO][4406] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:08.882 [INFO][4406] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-5317e0e64c' Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:08.921 [INFO][4406] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:08.945 [INFO][4406] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:08.955 [INFO][4406] ipam/ipam.go 526: Trying affinity for 192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:08.959 [INFO][4406] ipam/ipam.go 160: Attempting to load block cidr=192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:08.965 [INFO][4406] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:08.966 [INFO][4406] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:08.968 [INFO][4406] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4 Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:08.986 [INFO][4406] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:09.007 [INFO][4406] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.54.131/26] block=192.168.54.128/26 handle="k8s-pod-network.a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:09.007 [INFO][4406] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.54.131/26] handle="k8s-pod-network.a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:09.007 [INFO][4406] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:19:09.071228 containerd[1823]: 2026-03-02 13:19:09.007 [INFO][4406] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.54.131/26] IPv6=[] ContainerID="a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" HandleID="k8s-pod-network.a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" Workload="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--wpl7z-eth0" Mar 2 13:19:09.071856 containerd[1823]: 2026-03-02 13:19:09.030 [INFO][4299] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" Namespace="kube-system" Pod="coredns-674b8bbfcf-wpl7z" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--wpl7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--wpl7z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"50fcad6a-99ef-4efe-a1fd-9a8fdec4c894", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"", Pod:"coredns-674b8bbfcf-wpl7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali418a81f6b2a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:19:09.071856 containerd[1823]: 2026-03-02 13:19:09.030 [INFO][4299] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.131/32] ContainerID="a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" Namespace="kube-system" Pod="coredns-674b8bbfcf-wpl7z" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--wpl7z-eth0" Mar 2 13:19:09.071856 containerd[1823]: 2026-03-02 13:19:09.030 [INFO][4299] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali418a81f6b2a ContainerID="a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" Namespace="kube-system" Pod="coredns-674b8bbfcf-wpl7z" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--wpl7z-eth0" Mar 2 13:19:09.071856 containerd[1823]: 2026-03-02 13:19:09.035 [INFO][4299] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" Namespace="kube-system" Pod="coredns-674b8bbfcf-wpl7z" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--wpl7z-eth0" Mar 2 13:19:09.071856 containerd[1823]: 2026-03-02 13:19:09.035 [INFO][4299] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" Namespace="kube-system" Pod="coredns-674b8bbfcf-wpl7z" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--wpl7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--wpl7z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"50fcad6a-99ef-4efe-a1fd-9a8fdec4c894", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4", Pod:"coredns-674b8bbfcf-wpl7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali418a81f6b2a", MAC:"ba:3b:f2:8c:45:05", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:19:09.071856 containerd[1823]: 2026-03-02 13:19:09.056 [INFO][4299] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4" Namespace="kube-system" Pod="coredns-674b8bbfcf-wpl7z" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--wpl7z-eth0" Mar 2 13:19:09.109792 containerd[1823]: time="2026-03-02T13:19:09.109658673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9566f57b5-gmf2s,Uid:7de56beb-b944-464f-b0da-30cd4d29203d,Namespace:calico-system,Attempt:0,} returns sandbox id \"9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62\"" Mar 2 13:19:09.142412 systemd-networkd[1401]: calif6a78848fb1: Link UP Mar 2 13:19:09.154073 systemd-networkd[1401]: calif6a78848fb1: Gained carrier Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:08.477 [ERROR][4309] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:08.540 [INFO][4309] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--8bwzw-eth0 coredns-674b8bbfcf- kube-system 8c3f5c3e-eee1-49dd-bb69-50cbab1da62e 896 0 2026-03-02 13:18:25 +0000 UTC map[k8s-app:kube-dns 
pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.101-5317e0e64c coredns-674b8bbfcf-8bwzw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif6a78848fb1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" Namespace="kube-system" Pod="coredns-674b8bbfcf-8bwzw" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--8bwzw-" Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:08.541 [INFO][4309] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" Namespace="kube-system" Pod="coredns-674b8bbfcf-8bwzw" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--8bwzw-eth0" Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:08.689 [INFO][4414] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" HandleID="k8s-pod-network.4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" Workload="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--8bwzw-eth0" Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:08.721 [INFO][4414] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" HandleID="k8s-pod-network.4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" Workload="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--8bwzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000418010), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.101-5317e0e64c", "pod":"coredns-674b8bbfcf-8bwzw", "timestamp":"2026-03-02 13:19:08.689134797 +0000 UTC"}, Hostname:"ci-4081.3.101-5317e0e64c", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000f2dc0)} Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:08.721 [INFO][4414] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:09.008 [INFO][4414] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:09.008 [INFO][4414] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-5317e0e64c' Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:09.021 [INFO][4414] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:09.058 [INFO][4414] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:09.072 [INFO][4414] ipam/ipam.go 526: Trying affinity for 192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:09.076 [INFO][4414] ipam/ipam.go 160: Attempting to load block cidr=192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:09.081 [INFO][4414] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:09.081 [INFO][4414] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:09.084 [INFO][4414] ipam/ipam.go 1806: 
Creating new handle: k8s-pod-network.4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515 Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:09.105 [INFO][4414] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:09.118 [INFO][4414] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.54.132/26] block=192.168.54.128/26 handle="k8s-pod-network.4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:09.118 [INFO][4414] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.54.132/26] handle="k8s-pod-network.4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:09.118 [INFO][4414] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 13:19:09.202887 containerd[1823]: 2026-03-02 13:19:09.118 [INFO][4414] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.54.132/26] IPv6=[] ContainerID="4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" HandleID="k8s-pod-network.4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" Workload="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--8bwzw-eth0" Mar 2 13:19:09.203526 containerd[1823]: 2026-03-02 13:19:09.129 [INFO][4309] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" Namespace="kube-system" Pod="coredns-674b8bbfcf-8bwzw" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--8bwzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--8bwzw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8c3f5c3e-eee1-49dd-bb69-50cbab1da62e", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"", Pod:"coredns-674b8bbfcf-8bwzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6a78848fb1", 
MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:19:09.203526 containerd[1823]: 2026-03-02 13:19:09.129 [INFO][4309] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.132/32] ContainerID="4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" Namespace="kube-system" Pod="coredns-674b8bbfcf-8bwzw" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--8bwzw-eth0" Mar 2 13:19:09.203526 containerd[1823]: 2026-03-02 13:19:09.129 [INFO][4309] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6a78848fb1 ContainerID="4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" Namespace="kube-system" Pod="coredns-674b8bbfcf-8bwzw" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--8bwzw-eth0" Mar 2 13:19:09.203526 containerd[1823]: 2026-03-02 13:19:09.140 [INFO][4309] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" Namespace="kube-system" Pod="coredns-674b8bbfcf-8bwzw" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--8bwzw-eth0" Mar 2 13:19:09.203526 containerd[1823]: 2026-03-02 13:19:09.140 [INFO][4309] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" Namespace="kube-system" Pod="coredns-674b8bbfcf-8bwzw" 
WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--8bwzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--8bwzw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8c3f5c3e-eee1-49dd-bb69-50cbab1da62e", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515", Pod:"coredns-674b8bbfcf-8bwzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif6a78848fb1", MAC:"36:16:24:4e:2d:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:19:09.203526 containerd[1823]: 
2026-03-02 13:19:09.163 [INFO][4309] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515" Namespace="kube-system" Pod="coredns-674b8bbfcf-8bwzw" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-coredns--674b8bbfcf--8bwzw-eth0" Mar 2 13:19:09.218306 containerd[1823]: time="2026-03-02T13:19:09.204268516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:19:09.218306 containerd[1823]: time="2026-03-02T13:19:09.204378196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:19:09.218306 containerd[1823]: time="2026-03-02T13:19:09.204393316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:19:09.218306 containerd[1823]: time="2026-03-02T13:19:09.204511476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:19:09.263210 systemd-networkd[1401]: calie718c337b78: Link UP Mar 2 13:19:09.263463 systemd-networkd[1401]: calie718c337b78: Gained carrier Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:08.412 [ERROR][4349] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:08.471 [INFO][4349] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--5p6xj-eth0 calico-apiserver-5868557fd9- calico-system cfba4bfd-025c-49ed-8a95-a9de57a117e4 902 0 2026-03-02 13:18:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5868557fd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.101-5317e0e64c calico-apiserver-5868557fd9-5p6xj eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie718c337b78 [] [] }} ContainerID="7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" Namespace="calico-system" Pod="calico-apiserver-5868557fd9-5p6xj" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--5p6xj-" Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:08.471 [INFO][4349] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" Namespace="calico-system" Pod="calico-apiserver-5868557fd9-5p6xj" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--5p6xj-eth0" Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:08.714 [INFO][4399] ipam/ipam_plugin.go 235: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" HandleID="k8s-pod-network.7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--5p6xj-eth0" Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:08.730 [INFO][4399] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" HandleID="k8s-pod-network.7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--5p6xj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000468080), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-5317e0e64c", "pod":"calico-apiserver-5868557fd9-5p6xj", "timestamp":"2026-03-02 13:19:08.714679307 +0000 UTC"}, Hostname:"ci-4081.3.101-5317e0e64c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400026c6e0)} Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:08.731 [INFO][4399] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:09.119 [INFO][4399] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:09.120 [INFO][4399] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-5317e0e64c' Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:09.127 [INFO][4399] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:09.152 [INFO][4399] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:09.197 [INFO][4399] ipam/ipam.go 526: Trying affinity for 192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:09.208 [INFO][4399] ipam/ipam.go 160: Attempting to load block cidr=192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:09.213 [INFO][4399] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:09.213 [INFO][4399] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:09.215 [INFO][4399] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7 Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:09.223 [INFO][4399] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:09.239 [INFO][4399] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.54.133/26] block=192.168.54.128/26 handle="k8s-pod-network.7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:09.239 [INFO][4399] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.54.133/26] handle="k8s-pod-network.7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:09.239 [INFO][4399] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:19:09.290510 containerd[1823]: 2026-03-02 13:19:09.240 [INFO][4399] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.54.133/26] IPv6=[] ContainerID="7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" HandleID="k8s-pod-network.7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--5p6xj-eth0" Mar 2 13:19:09.291078 containerd[1823]: 2026-03-02 13:19:09.257 [INFO][4349] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" Namespace="calico-system" Pod="calico-apiserver-5868557fd9-5p6xj" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--5p6xj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--5p6xj-eth0", GenerateName:"calico-apiserver-5868557fd9-", Namespace:"calico-system", SelfLink:"", UID:"cfba4bfd-025c-49ed-8a95-a9de57a117e4", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5868557fd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"", Pod:"calico-apiserver-5868557fd9-5p6xj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie718c337b78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:19:09.291078 containerd[1823]: 2026-03-02 13:19:09.257 [INFO][4349] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.133/32] ContainerID="7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" Namespace="calico-system" Pod="calico-apiserver-5868557fd9-5p6xj" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--5p6xj-eth0" Mar 2 13:19:09.291078 containerd[1823]: 2026-03-02 13:19:09.257 [INFO][4349] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie718c337b78 ContainerID="7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" Namespace="calico-system" Pod="calico-apiserver-5868557fd9-5p6xj" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--5p6xj-eth0" Mar 2 13:19:09.291078 containerd[1823]: 2026-03-02 13:19:09.262 [INFO][4349] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" Namespace="calico-system" Pod="calico-apiserver-5868557fd9-5p6xj" 
WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--5p6xj-eth0" Mar 2 13:19:09.291078 containerd[1823]: 2026-03-02 13:19:09.263 [INFO][4349] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" Namespace="calico-system" Pod="calico-apiserver-5868557fd9-5p6xj" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--5p6xj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--5p6xj-eth0", GenerateName:"calico-apiserver-5868557fd9-", Namespace:"calico-system", SelfLink:"", UID:"cfba4bfd-025c-49ed-8a95-a9de57a117e4", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5868557fd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7", Pod:"calico-apiserver-5868557fd9-5p6xj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie718c337b78", MAC:"22:94:f5:a9:f8:60", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:19:09.291078 containerd[1823]: 2026-03-02 13:19:09.285 [INFO][4349] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7" Namespace="calico-system" Pod="calico-apiserver-5868557fd9-5p6xj" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--apiserver--5868557fd9--5p6xj-eth0" Mar 2 13:19:09.297326 containerd[1823]: 2026-03-02 13:19:08.708 [INFO][4377] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4077f534c1b676e6c63b864a9ad7b83f7250863aabd1f3eff8821b5a15db296e" Mar 2 13:19:09.297326 containerd[1823]: 2026-03-02 13:19:08.709 [INFO][4377] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4077f534c1b676e6c63b864a9ad7b83f7250863aabd1f3eff8821b5a15db296e" iface="eth0" netns="/var/run/netns/cni-826187ee-b5f5-069e-7935-d2502b933bce" Mar 2 13:19:09.297326 containerd[1823]: 2026-03-02 13:19:08.709 [INFO][4377] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4077f534c1b676e6c63b864a9ad7b83f7250863aabd1f3eff8821b5a15db296e" iface="eth0" netns="/var/run/netns/cni-826187ee-b5f5-069e-7935-d2502b933bce" Mar 2 13:19:09.297326 containerd[1823]: 2026-03-02 13:19:08.710 [INFO][4377] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4077f534c1b676e6c63b864a9ad7b83f7250863aabd1f3eff8821b5a15db296e" iface="eth0" netns="/var/run/netns/cni-826187ee-b5f5-069e-7935-d2502b933bce" Mar 2 13:19:09.297326 containerd[1823]: 2026-03-02 13:19:08.710 [INFO][4377] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4077f534c1b676e6c63b864a9ad7b83f7250863aabd1f3eff8821b5a15db296e" Mar 2 13:19:09.297326 containerd[1823]: 2026-03-02 13:19:08.710 [INFO][4377] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4077f534c1b676e6c63b864a9ad7b83f7250863aabd1f3eff8821b5a15db296e" Mar 2 13:19:09.297326 containerd[1823]: 2026-03-02 13:19:08.752 [INFO][4433] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4077f534c1b676e6c63b864a9ad7b83f7250863aabd1f3eff8821b5a15db296e" HandleID="k8s-pod-network.4077f534c1b676e6c63b864a9ad7b83f7250863aabd1f3eff8821b5a15db296e" Workload="ci--4081.3.101--5317e0e64c-k8s-whisker--8db54fd4c--5qjmq-eth0" Mar 2 13:19:09.297326 containerd[1823]: 2026-03-02 13:19:08.752 [INFO][4433] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:19:09.297326 containerd[1823]: 2026-03-02 13:19:09.246 [INFO][4433] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:19:09.297326 containerd[1823]: 2026-03-02 13:19:09.258 [WARNING][4433] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4077f534c1b676e6c63b864a9ad7b83f7250863aabd1f3eff8821b5a15db296e" HandleID="k8s-pod-network.4077f534c1b676e6c63b864a9ad7b83f7250863aabd1f3eff8821b5a15db296e" Workload="ci--4081.3.101--5317e0e64c-k8s-whisker--8db54fd4c--5qjmq-eth0" Mar 2 13:19:09.297326 containerd[1823]: 2026-03-02 13:19:09.258 [INFO][4433] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4077f534c1b676e6c63b864a9ad7b83f7250863aabd1f3eff8821b5a15db296e" HandleID="k8s-pod-network.4077f534c1b676e6c63b864a9ad7b83f7250863aabd1f3eff8821b5a15db296e" Workload="ci--4081.3.101--5317e0e64c-k8s-whisker--8db54fd4c--5qjmq-eth0" Mar 2 13:19:09.297326 containerd[1823]: 2026-03-02 13:19:09.264 [INFO][4433] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:19:09.297326 containerd[1823]: 2026-03-02 13:19:09.283 [INFO][4377] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4077f534c1b676e6c63b864a9ad7b83f7250863aabd1f3eff8821b5a15db296e" Mar 2 13:19:09.301573 containerd[1823]: time="2026-03-02T13:19:09.300464918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:19:09.301573 containerd[1823]: time="2026-03-02T13:19:09.300527558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:19:09.301573 containerd[1823]: time="2026-03-02T13:19:09.300544238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:19:09.301573 containerd[1823]: time="2026-03-02T13:19:09.300630598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:19:09.312491 containerd[1823]: time="2026-03-02T13:19:09.312434074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8db54fd4c-5qjmq,Uid:4184bf88-6be3-4dd6-80a8-377006297c7a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4077f534c1b676e6c63b864a9ad7b83f7250863aabd1f3eff8821b5a15db296e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:19:09.314327 kubelet[3331]: E0302 13:19:09.314210 3331 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4077f534c1b676e6c63b864a9ad7b83f7250863aabd1f3eff8821b5a15db296e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:19:09.314327 kubelet[3331]: E0302 13:19:09.314276 3331 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4077f534c1b676e6c63b864a9ad7b83f7250863aabd1f3eff8821b5a15db296e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8db54fd4c-5qjmq" Mar 2 13:19:09.344312 containerd[1823]: time="2026-03-02T13:19:09.344188541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wpl7z,Uid:50fcad6a-99ef-4efe-a1fd-9a8fdec4c894,Namespace:kube-system,Attempt:0,} returns sandbox id \"a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4\"" Mar 2 13:19:09.361409 containerd[1823]: time="2026-03-02T13:19:09.360058495Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:19:09.361409 containerd[1823]: time="2026-03-02T13:19:09.360977775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:19:09.361409 containerd[1823]: time="2026-03-02T13:19:09.361005855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:19:09.361409 containerd[1823]: time="2026-03-02T13:19:09.361171735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:19:09.364705 containerd[1823]: time="2026-03-02T13:19:09.364660493Z" level=info msg="CreateContainer within sandbox \"a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:19:09.382809 containerd[1823]: 2026-03-02 13:19:09.230 [INFO][4520] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Mar 2 13:19:09.382809 containerd[1823]: 2026-03-02 13:19:09.231 [INFO][4520] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" iface="eth0" netns="/var/run/netns/cni-a06e3d64-86d6-4d31-53df-c4aac11ef8d5" Mar 2 13:19:09.382809 containerd[1823]: 2026-03-02 13:19:09.231 [INFO][4520] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" iface="eth0" netns="/var/run/netns/cni-a06e3d64-86d6-4d31-53df-c4aac11ef8d5" Mar 2 13:19:09.382809 containerd[1823]: 2026-03-02 13:19:09.231 [INFO][4520] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" iface="eth0" netns="/var/run/netns/cni-a06e3d64-86d6-4d31-53df-c4aac11ef8d5" Mar 2 13:19:09.382809 containerd[1823]: 2026-03-02 13:19:09.231 [INFO][4520] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Mar 2 13:19:09.382809 containerd[1823]: 2026-03-02 13:19:09.231 [INFO][4520] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Mar 2 13:19:09.382809 containerd[1823]: 2026-03-02 13:19:09.334 [INFO][4625] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" HandleID="k8s-pod-network.40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" Mar 2 13:19:09.382809 containerd[1823]: 2026-03-02 13:19:09.339 [INFO][4625] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:19:09.382809 containerd[1823]: 2026-03-02 13:19:09.339 [INFO][4625] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:19:09.382809 containerd[1823]: 2026-03-02 13:19:09.369 [WARNING][4625] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" HandleID="k8s-pod-network.40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" Mar 2 13:19:09.382809 containerd[1823]: 2026-03-02 13:19:09.369 [INFO][4625] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" HandleID="k8s-pod-network.40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" Mar 2 13:19:09.382809 containerd[1823]: 2026-03-02 13:19:09.373 [INFO][4625] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:19:09.382809 containerd[1823]: 2026-03-02 13:19:09.379 [INFO][4520] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Mar 2 13:19:09.383470 containerd[1823]: time="2026-03-02T13:19:09.382941846Z" level=info msg="TearDown network for sandbox \"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\" successfully" Mar 2 13:19:09.383470 containerd[1823]: time="2026-03-02T13:19:09.382991966Z" level=info msg="StopPodSandbox for \"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\" returns successfully" Mar 2 13:19:09.384249 containerd[1823]: time="2026-03-02T13:19:09.383898166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79f5b9f4dd-rxkxx,Uid:9fa5ad6f-f895-405a-a8a9-e8a2755d3922,Namespace:calico-system,Attempt:1,}" Mar 2 13:19:09.391346 containerd[1823]: time="2026-03-02T13:19:09.391190563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8bwzw,Uid:8c3f5c3e-eee1-49dd-bb69-50cbab1da62e,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515\"" Mar 2 13:19:09.407401 containerd[1823]: time="2026-03-02T13:19:09.407149037Z" level=info msg="CreateContainer within sandbox \"4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:19:09.429282 containerd[1823]: time="2026-03-02T13:19:09.429105788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5868557fd9-5p6xj,Uid:cfba4bfd-025c-49ed-8a95-a9de57a117e4,Namespace:calico-system,Attempt:0,} returns sandbox id \"7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7\"" Mar 2 13:19:09.472337 containerd[1823]: time="2026-03-02T13:19:09.472293611Z" level=info msg="CreateContainer within sandbox \"a86ba26623613f6fdb104ba02374b3b24dbf951d7e04d583c2cbe06163bbdec4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"243f0ef562ded838c4001a1dac98b4e0c08ade7eaa73d7795f0a125b7cdacac9\"" Mar 2 13:19:09.474007 containerd[1823]: time="2026-03-02T13:19:09.473976491Z" level=info msg="StartContainer for \"243f0ef562ded838c4001a1dac98b4e0c08ade7eaa73d7795f0a125b7cdacac9\"" Mar 2 13:19:09.518279 containerd[1823]: time="2026-03-02T13:19:09.518239273Z" level=info msg="CreateContainer within sandbox \"4c037a69802c6b15e000aad9dab922ab5b33c60cde1e3869aeb608df22e7b515\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d7b634a589ef5179238d6ab0342befa75411393d03b7d4339752515cf2cf7fd6\"" Mar 2 13:19:09.520625 containerd[1823]: time="2026-03-02T13:19:09.520454312Z" level=info msg="StartContainer for \"d7b634a589ef5179238d6ab0342befa75411393d03b7d4339752515cf2cf7fd6\"" Mar 2 13:19:09.541353 systemd[1]: run-netns-cni\x2d826187ee\x2db5f5\x2d069e\x2d7935\x2dd2502b933bce.mount: Deactivated successfully. 
Mar 2 13:19:09.541497 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4077f534c1b676e6c63b864a9ad7b83f7250863aabd1f3eff8821b5a15db296e-shm.mount: Deactivated successfully. Mar 2 13:19:09.541581 systemd[1]: run-netns-cni\x2da06e3d64\x2d86d6\x2d4d31\x2d53df\x2dc4aac11ef8d5.mount: Deactivated successfully. Mar 2 13:19:09.571283 containerd[1823]: time="2026-03-02T13:19:09.571241853Z" level=info msg="StartContainer for \"243f0ef562ded838c4001a1dac98b4e0c08ade7eaa73d7795f0a125b7cdacac9\" returns successfully" Mar 2 13:19:09.652461 containerd[1823]: time="2026-03-02T13:19:09.652420701Z" level=info msg="StartContainer for \"d7b634a589ef5179238d6ab0342befa75411393d03b7d4339752515cf2cf7fd6\" returns successfully" Mar 2 13:19:09.707124 systemd-networkd[1401]: cali9c80c76804d: Link UP Mar 2 13:19:09.707750 systemd-networkd[1401]: cali9c80c76804d: Gained carrier Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.520 [ERROR][4747] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.569 [INFO][4747] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0 calico-kube-controllers-79f5b9f4dd- calico-system 9fa5ad6f-f895-405a-a8a9-e8a2755d3922 951 0 2026-03-02 13:18:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79f5b9f4dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.101-5317e0e64c calico-kube-controllers-79f5b9f4dd-rxkxx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9c80c76804d [] [] }} 
ContainerID="53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" Namespace="calico-system" Pod="calico-kube-controllers-79f5b9f4dd-rxkxx" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-" Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.572 [INFO][4747] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" Namespace="calico-system" Pod="calico-kube-controllers-79f5b9f4dd-rxkxx" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.640 [INFO][4798] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" HandleID="k8s-pod-network.53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.654 [INFO][4798] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" HandleID="k8s-pod-network.53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c7f10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-5317e0e64c", "pod":"calico-kube-controllers-79f5b9f4dd-rxkxx", "timestamp":"2026-03-02 13:19:09.640761825 +0000 UTC"}, Hostname:"ci-4081.3.101-5317e0e64c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000184580)} Mar 2 
13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.654 [INFO][4798] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.654 [INFO][4798] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.654 [INFO][4798] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-5317e0e64c' Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.657 [INFO][4798] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.665 [INFO][4798] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.670 [INFO][4798] ipam/ipam.go 526: Trying affinity for 192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.672 [INFO][4798] ipam/ipam.go 160: Attempting to load block cidr=192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.674 [INFO][4798] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.674 [INFO][4798] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.675 [INFO][4798] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.681 [INFO][4798] ipam/ipam.go 1272: Writing block in order to 
claim IPs block=192.168.54.128/26 handle="k8s-pod-network.53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.696 [INFO][4798] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.54.134/26] block=192.168.54.128/26 handle="k8s-pod-network.53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.696 [INFO][4798] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.54.134/26] handle="k8s-pod-network.53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.696 [INFO][4798] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:19:09.725911 containerd[1823]: 2026-03-02 13:19:09.696 [INFO][4798] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.54.134/26] IPv6=[] ContainerID="53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" HandleID="k8s-pod-network.53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" Mar 2 13:19:09.726834 containerd[1823]: 2026-03-02 13:19:09.700 [INFO][4747] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" Namespace="calico-system" Pod="calico-kube-controllers-79f5b9f4dd-rxkxx" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0", GenerateName:"calico-kube-controllers-79f5b9f4dd-", Namespace:"calico-system", SelfLink:"", 
UID:"9fa5ad6f-f895-405a-a8a9-e8a2755d3922", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79f5b9f4dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"", Pod:"calico-kube-controllers-79f5b9f4dd-rxkxx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9c80c76804d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:19:09.726834 containerd[1823]: 2026-03-02 13:19:09.700 [INFO][4747] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.134/32] ContainerID="53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" Namespace="calico-system" Pod="calico-kube-controllers-79f5b9f4dd-rxkxx" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" Mar 2 13:19:09.726834 containerd[1823]: 2026-03-02 13:19:09.700 [INFO][4747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c80c76804d ContainerID="53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" Namespace="calico-system" Pod="calico-kube-controllers-79f5b9f4dd-rxkxx" 
WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" Mar 2 13:19:09.726834 containerd[1823]: 2026-03-02 13:19:09.707 [INFO][4747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" Namespace="calico-system" Pod="calico-kube-controllers-79f5b9f4dd-rxkxx" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" Mar 2 13:19:09.726834 containerd[1823]: 2026-03-02 13:19:09.708 [INFO][4747] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" Namespace="calico-system" Pod="calico-kube-controllers-79f5b9f4dd-rxkxx" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0", GenerateName:"calico-kube-controllers-79f5b9f4dd-", Namespace:"calico-system", SelfLink:"", UID:"9fa5ad6f-f895-405a-a8a9-e8a2755d3922", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79f5b9f4dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", 
ContainerID:"53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed", Pod:"calico-kube-controllers-79f5b9f4dd-rxkxx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9c80c76804d", MAC:"f6:58:a5:6f:6d:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:19:09.726834 containerd[1823]: 2026-03-02 13:19:09.723 [INFO][4747] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed" Namespace="calico-system" Pod="calico-kube-controllers-79f5b9f4dd-rxkxx" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" Mar 2 13:19:09.758977 containerd[1823]: time="2026-03-02T13:19:09.758816579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:19:09.758977 containerd[1823]: time="2026-03-02T13:19:09.758883299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:19:09.758977 containerd[1823]: time="2026-03-02T13:19:09.758907259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:19:09.759410 containerd[1823]: time="2026-03-02T13:19:09.759371659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:19:09.848271 containerd[1823]: time="2026-03-02T13:19:09.847634985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79f5b9f4dd-rxkxx,Uid:9fa5ad6f-f895-405a-a8a9-e8a2755d3922,Namespace:calico-system,Attempt:1,} returns sandbox id \"53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed\"" Mar 2 13:19:10.045383 kubelet[3331]: I0302 13:19:10.042904 3331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wpl7z" podStartSLOduration=45.042884348 podStartE2EDuration="45.042884348s" podCreationTimestamp="2026-03-02 13:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:19:10.004172924 +0000 UTC m=+51.350462034" watchObservedRunningTime="2026-03-02 13:19:10.042884348 +0000 UTC m=+51.389173498" Mar 2 13:19:10.074012 systemd-networkd[1401]: cali418a81f6b2a: Gained IPv6LL Mar 2 13:19:10.133071 kubelet[3331]: I0302 13:19:10.132318 3331 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlw4h\" (UniqueName: \"kubernetes.io/projected/4184bf88-6be3-4dd6-80a8-377006297c7a-kube-api-access-hlw4h\") pod \"4184bf88-6be3-4dd6-80a8-377006297c7a\" (UID: \"4184bf88-6be3-4dd6-80a8-377006297c7a\") " Mar 2 13:19:10.133071 kubelet[3331]: I0302 13:19:10.132360 3331 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4184bf88-6be3-4dd6-80a8-377006297c7a-nginx-config\") pod \"4184bf88-6be3-4dd6-80a8-377006297c7a\" (UID: \"4184bf88-6be3-4dd6-80a8-377006297c7a\") " Mar 2 13:19:10.133071 kubelet[3331]: I0302 13:19:10.132378 3331 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/4184bf88-6be3-4dd6-80a8-377006297c7a-whisker-backend-key-pair\") pod \"4184bf88-6be3-4dd6-80a8-377006297c7a\" (UID: \"4184bf88-6be3-4dd6-80a8-377006297c7a\") " Mar 2 13:19:10.133071 kubelet[3331]: I0302 13:19:10.132422 3331 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4184bf88-6be3-4dd6-80a8-377006297c7a-whisker-ca-bundle\") pod \"4184bf88-6be3-4dd6-80a8-377006297c7a\" (UID: \"4184bf88-6be3-4dd6-80a8-377006297c7a\") " Mar 2 13:19:10.134251 kubelet[3331]: I0302 13:19:10.134206 3331 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4184bf88-6be3-4dd6-80a8-377006297c7a-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "4184bf88-6be3-4dd6-80a8-377006297c7a" (UID: "4184bf88-6be3-4dd6-80a8-377006297c7a"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:19:10.141801 kubelet[3331]: I0302 13:19:10.141752 3331 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4184bf88-6be3-4dd6-80a8-377006297c7a-kube-api-access-hlw4h" (OuterVolumeSpecName: "kube-api-access-hlw4h") pod "4184bf88-6be3-4dd6-80a8-377006297c7a" (UID: "4184bf88-6be3-4dd6-80a8-377006297c7a"). InnerVolumeSpecName "kube-api-access-hlw4h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:19:10.142450 kubelet[3331]: I0302 13:19:10.142414 3331 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4184bf88-6be3-4dd6-80a8-377006297c7a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4184bf88-6be3-4dd6-80a8-377006297c7a" (UID: "4184bf88-6be3-4dd6-80a8-377006297c7a"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:19:10.143350 kubelet[3331]: I0302 13:19:10.143317 3331 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4184bf88-6be3-4dd6-80a8-377006297c7a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4184bf88-6be3-4dd6-80a8-377006297c7a" (UID: "4184bf88-6be3-4dd6-80a8-377006297c7a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 2 13:19:10.233804 kubelet[3331]: I0302 13:19:10.233773 3331 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4184bf88-6be3-4dd6-80a8-377006297c7a-nginx-config\") on node \"ci-4081.3.101-5317e0e64c\" DevicePath \"\"" Mar 2 13:19:10.234912 kubelet[3331]: I0302 13:19:10.234270 3331 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4184bf88-6be3-4dd6-80a8-377006297c7a-whisker-backend-key-pair\") on node \"ci-4081.3.101-5317e0e64c\" DevicePath \"\"" Mar 2 13:19:10.234912 kubelet[3331]: I0302 13:19:10.234296 3331 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4184bf88-6be3-4dd6-80a8-377006297c7a-whisker-ca-bundle\") on node \"ci-4081.3.101-5317e0e64c\" DevicePath \"\"" Mar 2 13:19:10.234912 kubelet[3331]: I0302 13:19:10.234305 3331 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hlw4h\" (UniqueName: \"kubernetes.io/projected/4184bf88-6be3-4dd6-80a8-377006297c7a-kube-api-access-hlw4h\") on node \"ci-4081.3.101-5317e0e64c\" DevicePath \"\"" Mar 2 13:19:10.265370 systemd-networkd[1401]: calif52d6cf9e3f: Gained IPv6LL Mar 2 13:19:10.331252 kernel: calico-node[4896]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 2 13:19:10.521451 systemd-networkd[1401]: cali37c81025048: Gained IPv6LL Mar 2 13:19:10.537731 systemd[1]: 
var-lib-kubelet-pods-4184bf88\x2d6be3\x2d4dd6\x2d80a8\x2d377006297c7a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhlw4h.mount: Deactivated successfully. Mar 2 13:19:10.537889 systemd[1]: var-lib-kubelet-pods-4184bf88\x2d6be3\x2d4dd6\x2d80a8\x2d377006297c7a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 2 13:19:10.798365 systemd-networkd[1401]: vxlan.calico: Link UP Mar 2 13:19:10.798375 systemd-networkd[1401]: vxlan.calico: Gained carrier Mar 2 13:19:11.051465 kubelet[3331]: I0302 13:19:11.050991 3331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8bwzw" podStartSLOduration=46.050969192 podStartE2EDuration="46.050969192s" podCreationTimestamp="2026-03-02 13:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:19:10.071073617 +0000 UTC m=+51.417362727" watchObservedRunningTime="2026-03-02 13:19:11.050969192 +0000 UTC m=+52.397258302" Mar 2 13:19:11.097381 systemd-networkd[1401]: calif6a78848fb1: Gained IPv6LL Mar 2 13:19:11.161352 systemd-networkd[1401]: calie718c337b78: Gained IPv6LL Mar 2 13:19:11.248419 kubelet[3331]: I0302 13:19:11.248367 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5f7c5f53-b270-4e0a-92cd-9ceee50d975a-nginx-config\") pod \"whisker-6ff86997b5-xp5qw\" (UID: \"5f7c5f53-b270-4e0a-92cd-9ceee50d975a\") " pod="calico-system/whisker-6ff86997b5-xp5qw" Mar 2 13:19:11.248733 kubelet[3331]: I0302 13:19:11.248609 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5f7c5f53-b270-4e0a-92cd-9ceee50d975a-whisker-backend-key-pair\") pod \"whisker-6ff86997b5-xp5qw\" (UID: \"5f7c5f53-b270-4e0a-92cd-9ceee50d975a\") " 
pod="calico-system/whisker-6ff86997b5-xp5qw" Mar 2 13:19:11.248733 kubelet[3331]: I0302 13:19:11.248645 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f7c5f53-b270-4e0a-92cd-9ceee50d975a-whisker-ca-bundle\") pod \"whisker-6ff86997b5-xp5qw\" (UID: \"5f7c5f53-b270-4e0a-92cd-9ceee50d975a\") " pod="calico-system/whisker-6ff86997b5-xp5qw" Mar 2 13:19:11.248733 kubelet[3331]: I0302 13:19:11.248664 3331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fljz8\" (UniqueName: \"kubernetes.io/projected/5f7c5f53-b270-4e0a-92cd-9ceee50d975a-kube-api-access-fljz8\") pod \"whisker-6ff86997b5-xp5qw\" (UID: \"5f7c5f53-b270-4e0a-92cd-9ceee50d975a\") " pod="calico-system/whisker-6ff86997b5-xp5qw" Mar 2 13:19:11.439071 containerd[1823]: time="2026-03-02T13:19:11.438890859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ff86997b5-xp5qw,Uid:5f7c5f53-b270-4e0a-92cd-9ceee50d975a,Namespace:calico-system,Attempt:0,}" Mar 2 13:19:11.609339 systemd-networkd[1401]: cali9c80c76804d: Gained IPv6LL Mar 2 13:19:11.836718 systemd-networkd[1401]: cali7a57c6eddaf: Link UP Mar 2 13:19:11.837793 systemd-networkd[1401]: cali7a57c6eddaf: Gained carrier Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.736 [INFO][5120] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--5317e0e64c-k8s-whisker--6ff86997b5--xp5qw-eth0 whisker-6ff86997b5- calico-system 5f7c5f53-b270-4e0a-92cd-9ceee50d975a 998 0 2026-03-02 13:19:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6ff86997b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.101-5317e0e64c whisker-6ff86997b5-xp5qw eth0 whisker [] [] [kns.calico-system 
ksa.calico-system.whisker] cali7a57c6eddaf [] [] }} ContainerID="4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" Namespace="calico-system" Pod="whisker-6ff86997b5-xp5qw" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-whisker--6ff86997b5--xp5qw-" Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.737 [INFO][5120] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" Namespace="calico-system" Pod="whisker-6ff86997b5-xp5qw" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-whisker--6ff86997b5--xp5qw-eth0" Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.773 [INFO][5134] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" HandleID="k8s-pod-network.4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" Workload="ci--4081.3.101--5317e0e64c-k8s-whisker--6ff86997b5--xp5qw-eth0" Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.784 [INFO][5134] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" HandleID="k8s-pod-network.4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" Workload="ci--4081.3.101--5317e0e64c-k8s-whisker--6ff86997b5--xp5qw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273ac0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-5317e0e64c", "pod":"whisker-6ff86997b5-xp5qw", "timestamp":"2026-03-02 13:19:11.773734785 +0000 UTC"}, Hostname:"ci-4081.3.101-5317e0e64c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001ecf20)} Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.785 [INFO][5134] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.786 [INFO][5134] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.786 [INFO][5134] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-5317e0e64c' Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.793 [INFO][5134] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.798 [INFO][5134] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.803 [INFO][5134] ipam/ipam.go 526: Trying affinity for 192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.805 [INFO][5134] ipam/ipam.go 160: Attempting to load block cidr=192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.808 [INFO][5134] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.808 [INFO][5134] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.810 [INFO][5134] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.818 [INFO][5134] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.54.128/26 
handle="k8s-pod-network.4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.829 [INFO][5134] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.54.135/26] block=192.168.54.128/26 handle="k8s-pod-network.4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.829 [INFO][5134] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.54.135/26] handle="k8s-pod-network.4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.829 [INFO][5134] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:19:11.860134 containerd[1823]: 2026-03-02 13:19:11.829 [INFO][5134] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.54.135/26] IPv6=[] ContainerID="4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" HandleID="k8s-pod-network.4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" Workload="ci--4081.3.101--5317e0e64c-k8s-whisker--6ff86997b5--xp5qw-eth0" Mar 2 13:19:11.861294 containerd[1823]: 2026-03-02 13:19:11.832 [INFO][5120] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" Namespace="calico-system" Pod="whisker-6ff86997b5-xp5qw" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-whisker--6ff86997b5--xp5qw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-whisker--6ff86997b5--xp5qw-eth0", GenerateName:"whisker-6ff86997b5-", Namespace:"calico-system", SelfLink:"", UID:"5f7c5f53-b270-4e0a-92cd-9ceee50d975a", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 19, 11, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6ff86997b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"", Pod:"whisker-6ff86997b5-xp5qw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.54.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7a57c6eddaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:19:11.861294 containerd[1823]: 2026-03-02 13:19:11.832 [INFO][5120] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.135/32] ContainerID="4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" Namespace="calico-system" Pod="whisker-6ff86997b5-xp5qw" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-whisker--6ff86997b5--xp5qw-eth0" Mar 2 13:19:11.861294 containerd[1823]: 2026-03-02 13:19:11.832 [INFO][5120] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a57c6eddaf ContainerID="4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" Namespace="calico-system" Pod="whisker-6ff86997b5-xp5qw" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-whisker--6ff86997b5--xp5qw-eth0" Mar 2 13:19:11.861294 containerd[1823]: 2026-03-02 13:19:11.838 [INFO][5120] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" Namespace="calico-system" 
Pod="whisker-6ff86997b5-xp5qw" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-whisker--6ff86997b5--xp5qw-eth0" Mar 2 13:19:11.861294 containerd[1823]: 2026-03-02 13:19:11.838 [INFO][5120] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" Namespace="calico-system" Pod="whisker-6ff86997b5-xp5qw" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-whisker--6ff86997b5--xp5qw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-whisker--6ff86997b5--xp5qw-eth0", GenerateName:"whisker-6ff86997b5-", Namespace:"calico-system", SelfLink:"", UID:"5f7c5f53-b270-4e0a-92cd-9ceee50d975a", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 19, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6ff86997b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f", Pod:"whisker-6ff86997b5-xp5qw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.54.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7a57c6eddaf", MAC:"2e:72:3e:f2:fd:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 
13:19:11.861294 containerd[1823]: 2026-03-02 13:19:11.851 [INFO][5120] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f" Namespace="calico-system" Pod="whisker-6ff86997b5-xp5qw" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-whisker--6ff86997b5--xp5qw-eth0" Mar 2 13:19:11.892873 containerd[1823]: time="2026-03-02T13:19:11.892183424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:19:11.892873 containerd[1823]: time="2026-03-02T13:19:11.892280344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:19:11.892873 containerd[1823]: time="2026-03-02T13:19:11.892296624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:19:11.892873 containerd[1823]: time="2026-03-02T13:19:11.892783784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:19:11.993343 systemd-networkd[1401]: vxlan.calico: Gained IPv6LL Mar 2 13:19:12.020757 containerd[1823]: time="2026-03-02T13:19:12.020697100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ff86997b5-xp5qw,Uid:5f7c5f53-b270-4e0a-92cd-9ceee50d975a,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f\"" Mar 2 13:19:12.436172 containerd[1823]: time="2026-03-02T13:19:12.435388398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:12.439305 containerd[1823]: time="2026-03-02T13:19:12.439258036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=45512258" Mar 2 13:19:12.442804 containerd[1823]: time="2026-03-02T13:19:12.442752555Z" level=info msg="ImageCreate event name:\"sha256:6c1d6f109ccbdc040de9bade4e1d6f18ad2b7e93a2479f2ff827985a6b5c9653\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:12.449558 containerd[1823]: time="2026-03-02T13:19:12.449426913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:12.450497 containerd[1823]: time="2026-03-02T13:19:12.450349273Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:6c1d6f109ccbdc040de9bade4e1d6f18ad2b7e93a2479f2ff827985a6b5c9653\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"46909799\" in 3.473752388s" Mar 2 13:19:12.450497 containerd[1823]: time="2026-03-02T13:19:12.450406593Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:6c1d6f109ccbdc040de9bade4e1d6f18ad2b7e93a2479f2ff827985a6b5c9653\"" Mar 2 13:19:12.453306 containerd[1823]: time="2026-03-02T13:19:12.452533112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\"" Mar 2 13:19:12.459622 containerd[1823]: time="2026-03-02T13:19:12.459464270Z" level=info msg="CreateContainer within sandbox \"aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 13:19:12.496583 containerd[1823]: time="2026-03-02T13:19:12.496532817Z" level=info msg="CreateContainer within sandbox \"aeb8e913c7c4acf922319a8aa57c5f651bb4e8d6eeef57cc008daaac2abcf001\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ca497c07a76bb20c4a639f3636f7282e45d62de35061241eac5a06cc7125dcb5\"" Mar 2 13:19:12.497367 containerd[1823]: time="2026-03-02T13:19:12.497343337Z" level=info msg="StartContainer for \"ca497c07a76bb20c4a639f3636f7282e45d62de35061241eac5a06cc7125dcb5\"" Mar 2 13:19:12.571975 containerd[1823]: time="2026-03-02T13:19:12.571928831Z" level=info msg="StartContainer for \"ca497c07a76bb20c4a639f3636f7282e45d62de35061241eac5a06cc7125dcb5\" returns successfully" Mar 2 13:19:12.748305 kubelet[3331]: I0302 13:19:12.747100 3331 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4184bf88-6be3-4dd6-80a8-377006297c7a" path="/var/lib/kubelet/pods/4184bf88-6be3-4dd6-80a8-377006297c7a/volumes" Mar 2 13:19:13.059391 kubelet[3331]: I0302 13:19:13.057717 3331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5868557fd9-rd986" podStartSLOduration=24.581655198 podStartE2EDuration="28.057698905s" podCreationTimestamp="2026-03-02 13:18:45 +0000 UTC" firstStartedPulling="2026-03-02 13:19:08.975299965 +0000 UTC m=+50.321589075" lastFinishedPulling="2026-03-02 13:19:12.451343672 +0000 UTC m=+53.797632782" 
observedRunningTime="2026-03-02 13:19:13.056685185 +0000 UTC m=+54.402974335" watchObservedRunningTime="2026-03-02 13:19:13.057698905 +0000 UTC m=+54.403987975" Mar 2 13:19:13.785522 systemd-networkd[1401]: cali7a57c6eddaf: Gained IPv6LL Mar 2 13:19:14.062073 kubelet[3331]: I0302 13:19:14.059174 3331 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:19:15.140808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1699985720.mount: Deactivated successfully. Mar 2 13:19:16.134491 containerd[1823]: time="2026-03-02T13:19:16.134305410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:16.138414 containerd[1823]: time="2026-03-02T13:19:16.138323129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.3: active requests=0, bytes read=51600693" Mar 2 13:19:16.142112 containerd[1823]: time="2026-03-02T13:19:16.141759688Z" level=info msg="ImageCreate event name:\"sha256:d40b2a23702c4c62ef242fb10a0dae8b80d5b5a0fd36ecec29e43b227f22611d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:16.146786 containerd[1823]: time="2026-03-02T13:19:16.146742646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:16.147862 containerd[1823]: time="2026-03-02T13:19:16.147829886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" with image id \"sha256:d40b2a23702c4c62ef242fb10a0dae8b80d5b5a0fd36ecec29e43b227f22611d\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\", size \"51600539\" in 3.695261614s" Mar 2 13:19:16.147980 containerd[1823]: time="2026-03-02T13:19:16.147965446Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" returns image reference \"sha256:d40b2a23702c4c62ef242fb10a0dae8b80d5b5a0fd36ecec29e43b227f22611d\"" Mar 2 13:19:16.149269 containerd[1823]: time="2026-03-02T13:19:16.149175325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 13:19:16.165518 containerd[1823]: time="2026-03-02T13:19:16.165481960Z" level=info msg="CreateContainer within sandbox \"9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 2 13:19:16.207791 containerd[1823]: time="2026-03-02T13:19:16.207746505Z" level=info msg="CreateContainer within sandbox \"9cc6e7116c078b741831d58d190dfcf74e86c72049b5d171654d1cccd82b0a62\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"4cd8699e19f67b0ba136b2d192ed6a2e60ebafcb7fe4f8a7267930df0052002e\"" Mar 2 13:19:16.209790 containerd[1823]: time="2026-03-02T13:19:16.208588505Z" level=info msg="StartContainer for \"4cd8699e19f67b0ba136b2d192ed6a2e60ebafcb7fe4f8a7267930df0052002e\"" Mar 2 13:19:16.283576 containerd[1823]: time="2026-03-02T13:19:16.283531279Z" level=info msg="StartContainer for \"4cd8699e19f67b0ba136b2d192ed6a2e60ebafcb7fe4f8a7267930df0052002e\" returns successfully" Mar 2 13:19:16.477541 containerd[1823]: time="2026-03-02T13:19:16.477405973Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:16.484240 containerd[1823]: time="2026-03-02T13:19:16.482881571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=77" Mar 2 13:19:16.484944 containerd[1823]: time="2026-03-02T13:19:16.484896450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:6c1d6f109ccbdc040de9bade4e1d6f18ad2b7e93a2479f2ff827985a6b5c9653\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo 
digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"46909799\" in 335.684885ms" Mar 2 13:19:16.485025 containerd[1823]: time="2026-03-02T13:19:16.484946170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:6c1d6f109ccbdc040de9bade4e1d6f18ad2b7e93a2479f2ff827985a6b5c9653\"" Mar 2 13:19:16.487858 containerd[1823]: time="2026-03-02T13:19:16.487550569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\"" Mar 2 13:19:16.494314 containerd[1823]: time="2026-03-02T13:19:16.494106487Z" level=info msg="CreateContainer within sandbox \"7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 13:19:16.530874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1944150496.mount: Deactivated successfully. Mar 2 13:19:16.542793 containerd[1823]: time="2026-03-02T13:19:16.542515750Z" level=info msg="CreateContainer within sandbox \"7aee2d948546ff09d7e5db147deb96aadb94bf27db527f43f67b7813b57decc7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4ee16f967d9d03853d529e6c16771756c052eebea30e934a3f61ce4c679e51c4\"" Mar 2 13:19:16.544180 containerd[1823]: time="2026-03-02T13:19:16.543063510Z" level=info msg="StartContainer for \"4ee16f967d9d03853d529e6c16771756c052eebea30e934a3f61ce4c679e51c4\"" Mar 2 13:19:16.617993 containerd[1823]: time="2026-03-02T13:19:16.617917005Z" level=info msg="StartContainer for \"4ee16f967d9d03853d529e6c16771756c052eebea30e934a3f61ce4c679e51c4\" returns successfully" Mar 2 13:19:17.091250 kubelet[3331]: I0302 13:19:17.090057 3331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5868557fd9-5p6xj" podStartSLOduration=25.040571418 podStartE2EDuration="32.090036723s" podCreationTimestamp="2026-03-02 13:18:45 +0000 UTC" 
firstStartedPulling="2026-03-02 13:19:09.436312465 +0000 UTC m=+50.782601575" lastFinishedPulling="2026-03-02 13:19:16.48577777 +0000 UTC m=+57.832066880" observedRunningTime="2026-03-02 13:19:17.088702763 +0000 UTC m=+58.434991873" watchObservedRunningTime="2026-03-02 13:19:17.090036723 +0000 UTC m=+58.436325833" Mar 2 13:19:17.113724 kubelet[3331]: I0302 13:19:17.112178 3331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-9566f57b5-gmf2s" podStartSLOduration=24.184043739 podStartE2EDuration="31.112159715s" podCreationTimestamp="2026-03-02 13:18:46 +0000 UTC" firstStartedPulling="2026-03-02 13:19:09.220926949 +0000 UTC m=+50.567216059" lastFinishedPulling="2026-03-02 13:19:16.149042925 +0000 UTC m=+57.495332035" observedRunningTime="2026-03-02 13:19:17.111756835 +0000 UTC m=+58.458045945" watchObservedRunningTime="2026-03-02 13:19:17.112159715 +0000 UTC m=+58.458448905" Mar 2 13:19:18.068683 kubelet[3331]: I0302 13:19:18.068109 3331 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:19:18.783148 containerd[1823]: time="2026-03-02T13:19:18.783104138Z" level=info msg="StopPodSandbox for \"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\"" Mar 2 13:19:18.909083 containerd[1823]: 2026-03-02 13:19:18.844 [WARNING][5418] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0", GenerateName:"calico-kube-controllers-79f5b9f4dd-", Namespace:"calico-system", SelfLink:"", UID:"9fa5ad6f-f895-405a-a8a9-e8a2755d3922", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79f5b9f4dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed", Pod:"calico-kube-controllers-79f5b9f4dd-rxkxx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9c80c76804d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:19:18.909083 containerd[1823]: 2026-03-02 13:19:18.844 [INFO][5418] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Mar 2 13:19:18.909083 containerd[1823]: 2026-03-02 13:19:18.844 [INFO][5418] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" iface="eth0" netns="" Mar 2 13:19:18.909083 containerd[1823]: 2026-03-02 13:19:18.844 [INFO][5418] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Mar 2 13:19:18.909083 containerd[1823]: 2026-03-02 13:19:18.845 [INFO][5418] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Mar 2 13:19:18.909083 containerd[1823]: 2026-03-02 13:19:18.888 [INFO][5425] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" HandleID="k8s-pod-network.40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" Mar 2 13:19:18.909083 containerd[1823]: 2026-03-02 13:19:18.888 [INFO][5425] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:19:18.909083 containerd[1823]: 2026-03-02 13:19:18.888 [INFO][5425] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:19:18.909083 containerd[1823]: 2026-03-02 13:19:18.900 [WARNING][5425] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" HandleID="k8s-pod-network.40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" Mar 2 13:19:18.909083 containerd[1823]: 2026-03-02 13:19:18.900 [INFO][5425] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" HandleID="k8s-pod-network.40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" Mar 2 13:19:18.909083 containerd[1823]: 2026-03-02 13:19:18.901 [INFO][5425] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:19:18.909083 containerd[1823]: 2026-03-02 13:19:18.906 [INFO][5418] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Mar 2 13:19:18.909083 containerd[1823]: time="2026-03-02T13:19:18.908977444Z" level=info msg="TearDown network for sandbox \"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\" successfully" Mar 2 13:19:18.909083 containerd[1823]: time="2026-03-02T13:19:18.909034324Z" level=info msg="StopPodSandbox for \"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\" returns successfully" Mar 2 13:19:18.910649 containerd[1823]: time="2026-03-02T13:19:18.910441443Z" level=info msg="RemovePodSandbox for \"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\"" Mar 2 13:19:18.910649 containerd[1823]: time="2026-03-02T13:19:18.910479523Z" level=info msg="Forcibly stopping sandbox \"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\"" Mar 2 13:19:19.023734 containerd[1823]: 2026-03-02 13:19:18.962 [WARNING][5440] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0", GenerateName:"calico-kube-controllers-79f5b9f4dd-", Namespace:"calico-system", SelfLink:"", UID:"9fa5ad6f-f895-405a-a8a9-e8a2755d3922", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79f5b9f4dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed", Pod:"calico-kube-controllers-79f5b9f4dd-rxkxx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9c80c76804d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:19:19.023734 containerd[1823]: 2026-03-02 13:19:18.963 [INFO][5440] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Mar 2 13:19:19.023734 containerd[1823]: 2026-03-02 13:19:18.963 [INFO][5440] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" iface="eth0" netns="" Mar 2 13:19:19.023734 containerd[1823]: 2026-03-02 13:19:18.963 [INFO][5440] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Mar 2 13:19:19.023734 containerd[1823]: 2026-03-02 13:19:18.963 [INFO][5440] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Mar 2 13:19:19.023734 containerd[1823]: 2026-03-02 13:19:19.001 [INFO][5447] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" HandleID="k8s-pod-network.40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" Mar 2 13:19:19.023734 containerd[1823]: 2026-03-02 13:19:19.001 [INFO][5447] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:19:19.023734 containerd[1823]: 2026-03-02 13:19:19.001 [INFO][5447] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:19:19.023734 containerd[1823]: 2026-03-02 13:19:19.013 [WARNING][5447] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" HandleID="k8s-pod-network.40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" Mar 2 13:19:19.023734 containerd[1823]: 2026-03-02 13:19:19.013 [INFO][5447] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" HandleID="k8s-pod-network.40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Workload="ci--4081.3.101--5317e0e64c-k8s-calico--kube--controllers--79f5b9f4dd--rxkxx-eth0" Mar 2 13:19:19.023734 containerd[1823]: 2026-03-02 13:19:19.015 [INFO][5447] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:19:19.023734 containerd[1823]: 2026-03-02 13:19:19.019 [INFO][5440] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c" Mar 2 13:19:19.023734 containerd[1823]: time="2026-03-02T13:19:19.023714034Z" level=info msg="TearDown network for sandbox \"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\" successfully" Mar 2 13:19:19.032714 containerd[1823]: time="2026-03-02T13:19:19.032371630Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:19:19.032714 containerd[1823]: time="2026-03-02T13:19:19.032462630Z" level=info msg="RemovePodSandbox \"40ca91470db79a0de4393f98dbf11f7802baa9565fb079b65d49f7cd9a35843c\" returns successfully" Mar 2 13:19:19.298649 containerd[1823]: time="2026-03-02T13:19:19.298604556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:19.301656 containerd[1823]: time="2026-03-02T13:19:19.301614994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.3: active requests=0, bytes read=49157508" Mar 2 13:19:19.304823 containerd[1823]: time="2026-03-02T13:19:19.304565153Z" level=info msg="ImageCreate event name:\"sha256:f91182157dd9b43afadc3f9d6dbd919b0ec222fc40e9fa608989310b81c1f18c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:19.309993 containerd[1823]: time="2026-03-02T13:19:19.309935231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:19.310896 containerd[1823]: time="2026-03-02T13:19:19.310776350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" with image id \"sha256:f91182157dd9b43afadc3f9d6dbd919b0ec222fc40e9fa608989310b81c1f18c\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\", size \"50555001\" in 2.823154261s" Mar 2 13:19:19.310896 containerd[1823]: time="2026-03-02T13:19:19.310811830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" returns image reference \"sha256:f91182157dd9b43afadc3f9d6dbd919b0ec222fc40e9fa608989310b81c1f18c\"" Mar 2 13:19:19.312315 containerd[1823]: time="2026-03-02T13:19:19.312040830Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\"" Mar 2 13:19:19.328559 containerd[1823]: time="2026-03-02T13:19:19.328512343Z" level=info msg="CreateContainer within sandbox \"53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 2 13:19:19.362144 containerd[1823]: time="2026-03-02T13:19:19.362092568Z" level=info msg="CreateContainer within sandbox \"53b4a3df19e002b93a458b2a5aaa41ca8f04f48c14fa665199038564b978b4ed\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9c89d3d911480793d90183451344e11dceab263d3ecb3a419ca905537604a29f\"" Mar 2 13:19:19.363018 containerd[1823]: time="2026-03-02T13:19:19.362979928Z" level=info msg="StartContainer for \"9c89d3d911480793d90183451344e11dceab263d3ecb3a419ca905537604a29f\"" Mar 2 13:19:19.448767 containerd[1823]: time="2026-03-02T13:19:19.448639131Z" level=info msg="StartContainer for \"9c89d3d911480793d90183451344e11dceab263d3ecb3a419ca905537604a29f\" returns successfully" Mar 2 13:19:20.100382 kubelet[3331]: I0302 13:19:20.100284 3331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79f5b9f4dd-rxkxx" podStartSLOduration=23.639549723000002 podStartE2EDuration="33.10026217s" podCreationTimestamp="2026-03-02 13:18:47 +0000 UTC" firstStartedPulling="2026-03-02 13:19:09.851207783 +0000 UTC m=+51.197496893" lastFinishedPulling="2026-03-02 13:19:19.31192023 +0000 UTC m=+60.658209340" observedRunningTime="2026-03-02 13:19:20.098301971 +0000 UTC m=+61.444591081" watchObservedRunningTime="2026-03-02 13:19:20.10026217 +0000 UTC m=+61.446551280" Mar 2 13:19:20.758138 containerd[1823]: time="2026-03-02T13:19:20.757537966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:20.760283 containerd[1823]: 
time="2026-03-02T13:19:20.760242845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.3: active requests=0, bytes read=5881068" Mar 2 13:19:20.763412 containerd[1823]: time="2026-03-02T13:19:20.763381964Z" level=info msg="ImageCreate event name:\"sha256:860a7f2cdb9123795f95a07e0cc91bc6b511927d1a4d1d588c303c9c59e0fa59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:20.768656 containerd[1823]: time="2026-03-02T13:19:20.768357282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:20.769150 containerd[1823]: time="2026-03-02T13:19:20.769116641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.3\" with image id \"sha256:860a7f2cdb9123795f95a07e0cc91bc6b511927d1a4d1d588c303c9c59e0fa59\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\", size \"7278585\" in 1.457046291s" Mar 2 13:19:20.769209 containerd[1823]: time="2026-03-02T13:19:20.769151801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\" returns image reference \"sha256:860a7f2cdb9123795f95a07e0cc91bc6b511927d1a4d1d588c303c9c59e0fa59\"" Mar 2 13:19:20.777705 containerd[1823]: time="2026-03-02T13:19:20.777662398Z" level=info msg="CreateContainer within sandbox \"4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 2 13:19:20.819340 containerd[1823]: time="2026-03-02T13:19:20.818996140Z" level=info msg="CreateContainer within sandbox \"4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3b823f40bf82df36261012ea92bacdd17231701983119fa794e9a824d0e0fe20\"" Mar 2 13:19:20.821776 
containerd[1823]: time="2026-03-02T13:19:20.820595059Z" level=info msg="StartContainer for \"3b823f40bf82df36261012ea92bacdd17231701983119fa794e9a824d0e0fe20\"" Mar 2 13:19:20.888454 containerd[1823]: time="2026-03-02T13:19:20.888407750Z" level=info msg="StartContainer for \"3b823f40bf82df36261012ea92bacdd17231701983119fa794e9a824d0e0fe20\" returns successfully" Mar 2 13:19:20.892088 containerd[1823]: time="2026-03-02T13:19:20.891919428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\"" Mar 2 13:19:21.746562 containerd[1823]: time="2026-03-02T13:19:21.746241540Z" level=info msg="StopPodSandbox for \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\"" Mar 2 13:19:21.839988 containerd[1823]: 2026-03-02 13:19:21.798 [INFO][5581] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Mar 2 13:19:21.839988 containerd[1823]: 2026-03-02 13:19:21.799 [INFO][5581] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" iface="eth0" netns="/var/run/netns/cni-198effe6-4e56-2357-28e1-b4b158ac01fc" Mar 2 13:19:21.839988 containerd[1823]: 2026-03-02 13:19:21.799 [INFO][5581] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" iface="eth0" netns="/var/run/netns/cni-198effe6-4e56-2357-28e1-b4b158ac01fc" Mar 2 13:19:21.839988 containerd[1823]: 2026-03-02 13:19:21.803 [INFO][5581] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" iface="eth0" netns="/var/run/netns/cni-198effe6-4e56-2357-28e1-b4b158ac01fc" Mar 2 13:19:21.839988 containerd[1823]: 2026-03-02 13:19:21.803 [INFO][5581] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Mar 2 13:19:21.839988 containerd[1823]: 2026-03-02 13:19:21.803 [INFO][5581] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Mar 2 13:19:21.839988 containerd[1823]: 2026-03-02 13:19:21.825 [INFO][5589] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" HandleID="k8s-pod-network.ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Workload="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0" Mar 2 13:19:21.839988 containerd[1823]: 2026-03-02 13:19:21.825 [INFO][5589] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:19:21.839988 containerd[1823]: 2026-03-02 13:19:21.825 [INFO][5589] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:19:21.839988 containerd[1823]: 2026-03-02 13:19:21.834 [WARNING][5589] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" HandleID="k8s-pod-network.ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Workload="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0" Mar 2 13:19:21.839988 containerd[1823]: 2026-03-02 13:19:21.834 [INFO][5589] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" HandleID="k8s-pod-network.ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Workload="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0" Mar 2 13:19:21.839988 containerd[1823]: 2026-03-02 13:19:21.835 [INFO][5589] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:19:21.839988 containerd[1823]: 2026-03-02 13:19:21.838 [INFO][5581] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Mar 2 13:19:21.843696 containerd[1823]: time="2026-03-02T13:19:21.840747699Z" level=info msg="TearDown network for sandbox \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\" successfully" Mar 2 13:19:21.843696 containerd[1823]: time="2026-03-02T13:19:21.840899979Z" level=info msg="StopPodSandbox for \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\" returns successfully" Mar 2 13:19:21.843698 systemd[1]: run-netns-cni\x2d198effe6\x2d4e56\x2d2357\x2d28e1\x2db4b158ac01fc.mount: Deactivated successfully. 
Mar 2 13:19:21.844935 containerd[1823]: time="2026-03-02T13:19:21.844554018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j7z7n,Uid:d18edb38-1e4f-4532-a511-77bd2348b866,Namespace:calico-system,Attempt:1,}" Mar 2 13:19:22.007153 systemd-networkd[1401]: califddba0a3ecb: Link UP Mar 2 13:19:22.007550 systemd-networkd[1401]: califddba0a3ecb: Gained carrier Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.915 [INFO][5600] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0 csi-node-driver- calico-system d18edb38-1e4f-4532-a511-77bd2348b866 1059 0 2026-03-02 13:18:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7494d65b57 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.101-5317e0e64c csi-node-driver-j7z7n eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califddba0a3ecb [] [] }} ContainerID="375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" Namespace="calico-system" Pod="csi-node-driver-j7z7n" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-" Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.915 [INFO][5600] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" Namespace="calico-system" Pod="csi-node-driver-j7z7n" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0" Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.943 [INFO][5608] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" 
HandleID="k8s-pod-network.375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" Workload="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0" Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.952 [INFO][5608] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" HandleID="k8s-pod-network.375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" Workload="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002734c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-5317e0e64c", "pod":"csi-node-driver-j7z7n", "timestamp":"2026-03-02 13:19:21.943410895 +0000 UTC"}, Hostname:"ci-4081.3.101-5317e0e64c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000266c60)} Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.953 [INFO][5608] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.953 [INFO][5608] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.953 [INFO][5608] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-5317e0e64c' Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.956 [INFO][5608] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.961 [INFO][5608] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.965 [INFO][5608] ipam/ipam.go 526: Trying affinity for 192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.969 [INFO][5608] ipam/ipam.go 160: Attempting to load block cidr=192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.971 [INFO][5608] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.971 [INFO][5608] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.973 [INFO][5608] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.978 [INFO][5608] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.998 [INFO][5608] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.54.136/26] block=192.168.54.128/26 handle="k8s-pod-network.375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.999 [INFO][5608] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.54.136/26] handle="k8s-pod-network.375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" host="ci-4081.3.101-5317e0e64c" Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.999 [INFO][5608] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:19:22.027426 containerd[1823]: 2026-03-02 13:19:21.999 [INFO][5608] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.54.136/26] IPv6=[] ContainerID="375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" HandleID="k8s-pod-network.375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" Workload="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0" Mar 2 13:19:22.027965 containerd[1823]: 2026-03-02 13:19:22.003 [INFO][5600] cni-plugin/k8s.go 418: Populated endpoint ContainerID="375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" Namespace="calico-system" Pod="csi-node-driver-j7z7n" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d18edb38-1e4f-4532-a511-77bd2348b866", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7494d65b57", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"", Pod:"csi-node-driver-j7z7n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califddba0a3ecb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:19:22.027965 containerd[1823]: 2026-03-02 13:19:22.003 [INFO][5600] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.136/32] ContainerID="375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" Namespace="calico-system" Pod="csi-node-driver-j7z7n" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0" Mar 2 13:19:22.027965 containerd[1823]: 2026-03-02 13:19:22.003 [INFO][5600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califddba0a3ecb ContainerID="375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" Namespace="calico-system" Pod="csi-node-driver-j7z7n" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0" Mar 2 13:19:22.027965 containerd[1823]: 2026-03-02 13:19:22.006 [INFO][5600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" Namespace="calico-system" Pod="csi-node-driver-j7z7n" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0" Mar 2 13:19:22.027965 containerd[1823]: 2026-03-02 13:19:22.006 
[INFO][5600] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" Namespace="calico-system" Pod="csi-node-driver-j7z7n" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d18edb38-1e4f-4532-a511-77bd2348b866", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7494d65b57", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd", Pod:"csi-node-driver-j7z7n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califddba0a3ecb", MAC:"a2:c7:2d:c2:6d:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:19:22.027965 containerd[1823]: 2026-03-02 13:19:22.021 [INFO][5600] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd" Namespace="calico-system" Pod="csi-node-driver-j7z7n" WorkloadEndpoint="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0" Mar 2 13:19:22.065109 containerd[1823]: time="2026-03-02T13:19:22.064850763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:19:22.065109 containerd[1823]: time="2026-03-02T13:19:22.064906163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:19:22.065109 containerd[1823]: time="2026-03-02T13:19:22.064917283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:19:22.065109 containerd[1823]: time="2026-03-02T13:19:22.065004843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:19:22.105336 containerd[1823]: time="2026-03-02T13:19:22.105295425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j7z7n,Uid:d18edb38-1e4f-4532-a511-77bd2348b866,Namespace:calico-system,Attempt:1,} returns sandbox id \"375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd\"" Mar 2 13:19:22.862624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1395327886.mount: Deactivated successfully. 
Mar 2 13:19:23.769519 systemd-networkd[1401]: califddba0a3ecb: Gained IPv6LL Mar 2 13:19:23.921283 containerd[1823]: time="2026-03-02T13:19:23.921233842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:23.926290 containerd[1823]: time="2026-03-02T13:19:23.926234200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.3: active requests=0, bytes read=16420592" Mar 2 13:19:23.929888 containerd[1823]: time="2026-03-02T13:19:23.929852238Z" level=info msg="ImageCreate event name:\"sha256:d6c2d25ea514599ef2dbba86e46277491ee9c1e15519321c135bb514b2f46aeb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:23.936806 containerd[1823]: time="2026-03-02T13:19:23.936433195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:23.937337 containerd[1823]: time="2026-03-02T13:19:23.937292715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" with image id \"sha256:d6c2d25ea514599ef2dbba86e46277491ee9c1e15519321c135bb514b2f46aeb\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\", size \"16420422\" in 3.045113447s" Mar 2 13:19:23.937337 containerd[1823]: time="2026-03-02T13:19:23.937329675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" returns image reference \"sha256:d6c2d25ea514599ef2dbba86e46277491ee9c1e15519321c135bb514b2f46aeb\"" Mar 2 13:19:23.940433 containerd[1823]: time="2026-03-02T13:19:23.940027474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\"" Mar 2 13:19:23.959680 containerd[1823]: 
time="2026-03-02T13:19:23.959637585Z" level=info msg="CreateContainer within sandbox \"4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 2 13:19:24.014242 containerd[1823]: time="2026-03-02T13:19:24.014177962Z" level=info msg="CreateContainer within sandbox \"4c4372418f0acb11e990232b8690107df05fde72aee9befd873bccb1f952158f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f06ab876f65139d41ec529b2d8ac891d627e276c210e32a64db299651940dc14\"" Mar 2 13:19:24.015506 containerd[1823]: time="2026-03-02T13:19:24.015454601Z" level=info msg="StartContainer for \"f06ab876f65139d41ec529b2d8ac891d627e276c210e32a64db299651940dc14\"" Mar 2 13:19:24.094512 containerd[1823]: time="2026-03-02T13:19:24.093745048Z" level=info msg="StartContainer for \"f06ab876f65139d41ec529b2d8ac891d627e276c210e32a64db299651940dc14\" returns successfully" Mar 2 13:19:25.150787 kubelet[3331]: I0302 13:19:25.136549 3331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6ff86997b5-xp5qw" podStartSLOduration=2.219865263 podStartE2EDuration="14.136528318s" podCreationTimestamp="2026-03-02 13:19:11 +0000 UTC" firstStartedPulling="2026-03-02 13:19:12.023180059 +0000 UTC m=+53.369469169" lastFinishedPulling="2026-03-02 13:19:23.939843114 +0000 UTC m=+65.286132224" observedRunningTime="2026-03-02 13:19:25.13076388 +0000 UTC m=+66.477052990" watchObservedRunningTime="2026-03-02 13:19:25.136528318 +0000 UTC m=+66.482817468" Mar 2 13:19:25.243881 containerd[1823]: time="2026-03-02T13:19:25.243836432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:25.247366 containerd[1823]: time="2026-03-02T13:19:25.247327230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.3: active requests=0, bytes read=8255947" Mar 2 13:19:25.250680 
containerd[1823]: time="2026-03-02T13:19:25.250288309Z" level=info msg="ImageCreate event name:\"sha256:a7b37b6d011a8219915c610022e2c5ef47396285db6e7e10d7694ff3dea87dc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:25.256548 containerd[1823]: time="2026-03-02T13:19:25.256302866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:25.257238 containerd[1823]: time="2026-03-02T13:19:25.257049986Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.3\" with image id \"sha256:a7b37b6d011a8219915c610022e2c5ef47396285db6e7e10d7694ff3dea87dc5\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\", size \"9653472\" in 1.316681472s" Mar 2 13:19:25.257238 containerd[1823]: time="2026-03-02T13:19:25.257082946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\" returns image reference \"sha256:a7b37b6d011a8219915c610022e2c5ef47396285db6e7e10d7694ff3dea87dc5\"" Mar 2 13:19:25.266050 containerd[1823]: time="2026-03-02T13:19:25.265853982Z" level=info msg="CreateContainer within sandbox \"375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 2 13:19:25.309655 containerd[1823]: time="2026-03-02T13:19:25.309610803Z" level=info msg="CreateContainer within sandbox \"375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"62b9f1b21ee741a78efd52dbc723943d0f4b5ee1c004c89288b1fac2eda8e5e5\"" Mar 2 13:19:25.312037 containerd[1823]: time="2026-03-02T13:19:25.311998682Z" level=info msg="StartContainer for \"62b9f1b21ee741a78efd52dbc723943d0f4b5ee1c004c89288b1fac2eda8e5e5\"" Mar 2 13:19:25.380901 
containerd[1823]: time="2026-03-02T13:19:25.380854852Z" level=info msg="StartContainer for \"62b9f1b21ee741a78efd52dbc723943d0f4b5ee1c004c89288b1fac2eda8e5e5\" returns successfully" Mar 2 13:19:25.383935 containerd[1823]: time="2026-03-02T13:19:25.383896891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\"" Mar 2 13:19:26.788941 containerd[1823]: time="2026-03-02T13:19:26.788889645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:26.792090 containerd[1823]: time="2026-03-02T13:19:26.791900164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3: active requests=0, bytes read=13755078" Mar 2 13:19:26.795503 containerd[1823]: time="2026-03-02T13:19:26.794843203Z" level=info msg="ImageCreate event name:\"sha256:c55251c1db32bbbf386d6ef9309a13d39443eef28f12c0883c2fd06bc5561b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:26.800562 containerd[1823]: time="2026-03-02T13:19:26.800523720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:19:26.801381 containerd[1823]: time="2026-03-02T13:19:26.801268680Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" with image id \"sha256:c55251c1db32bbbf386d6ef9309a13d39443eef28f12c0883c2fd06bc5561b09\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\", size \"15152555\" in 1.417331509s" Mar 2 13:19:26.801381 containerd[1823]: time="2026-03-02T13:19:26.801307760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" returns 
image reference \"sha256:c55251c1db32bbbf386d6ef9309a13d39443eef28f12c0883c2fd06bc5561b09\"" Mar 2 13:19:26.808865 containerd[1823]: time="2026-03-02T13:19:26.808720517Z" level=info msg="CreateContainer within sandbox \"375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 2 13:19:26.837602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1267911364.mount: Deactivated successfully. Mar 2 13:19:26.847682 containerd[1823]: time="2026-03-02T13:19:26.847636740Z" level=info msg="CreateContainer within sandbox \"375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"42ecf180540c1145e7313c177bb4e342aeecbb7b1d1343f1912fa00113902819\"" Mar 2 13:19:26.849558 containerd[1823]: time="2026-03-02T13:19:26.849368099Z" level=info msg="StartContainer for \"42ecf180540c1145e7313c177bb4e342aeecbb7b1d1343f1912fa00113902819\"" Mar 2 13:19:26.917807 containerd[1823]: time="2026-03-02T13:19:26.917670590Z" level=info msg="StartContainer for \"42ecf180540c1145e7313c177bb4e342aeecbb7b1d1343f1912fa00113902819\" returns successfully" Mar 2 13:19:27.462519 kubelet[3331]: I0302 13:19:27.462426 3331 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:19:27.486420 kubelet[3331]: I0302 13:19:27.486328 3331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-j7z7n" podStartSLOduration=35.793698287 podStartE2EDuration="40.486311024s" podCreationTimestamp="2026-03-02 13:18:47 +0000 UTC" firstStartedPulling="2026-03-02 13:19:22.109387503 +0000 UTC m=+63.455676613" lastFinishedPulling="2026-03-02 13:19:26.80200028 +0000 UTC m=+68.148289350" observedRunningTime="2026-03-02 13:19:27.126667619 +0000 UTC m=+68.472956729" watchObservedRunningTime="2026-03-02 13:19:27.486311024 +0000 UTC m=+68.832600094" Mar 2 13:19:27.894170 
kubelet[3331]: I0302 13:19:27.894131 3331 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 2 13:19:27.894170 kubelet[3331]: I0302 13:19:27.894170 3331 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 2 13:20:07.609501 systemd[1]: Started sshd@7-10.200.20.18:22-10.200.16.10:55728.service - OpenSSH per-connection server daemon (10.200.16.10:55728). Mar 2 13:20:07.768181 kubelet[3331]: I0302 13:20:07.767390 3331 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:20:08.099234 sshd[5962]: Accepted publickey for core from 10.200.16.10 port 55728 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:20:08.101553 sshd[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:20:08.106142 systemd-logind[1790]: New session 10 of user core. Mar 2 13:20:08.112741 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 2 13:20:08.522437 sshd[5962]: pam_unix(sshd:session): session closed for user core Mar 2 13:20:08.526950 systemd[1]: sshd@7-10.200.20.18:22-10.200.16.10:55728.service: Deactivated successfully. Mar 2 13:20:08.530123 systemd[1]: session-10.scope: Deactivated successfully. Mar 2 13:20:08.531311 systemd-logind[1790]: Session 10 logged out. Waiting for processes to exit. Mar 2 13:20:08.532364 systemd-logind[1790]: Removed session 10. Mar 2 13:20:13.617565 systemd[1]: Started sshd@8-10.200.20.18:22-10.200.16.10:60134.service - OpenSSH per-connection server daemon (10.200.16.10:60134). 
Mar 2 13:20:14.122949 sshd[5998]: Accepted publickey for core from 10.200.16.10 port 60134 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:20:14.123583 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:20:14.128909 systemd-logind[1790]: New session 11 of user core. Mar 2 13:20:14.133499 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 2 13:20:14.502723 systemd[1]: run-containerd-runc-k8s.io-4cd8699e19f67b0ba136b2d192ed6a2e60ebafcb7fe4f8a7267930df0052002e-runc.pZMMFt.mount: Deactivated successfully. Mar 2 13:20:14.651551 sshd[5998]: pam_unix(sshd:session): session closed for user core Mar 2 13:20:14.656938 systemd[1]: sshd@8-10.200.20.18:22-10.200.16.10:60134.service: Deactivated successfully. Mar 2 13:20:14.665989 systemd[1]: session-11.scope: Deactivated successfully. Mar 2 13:20:14.671430 systemd-logind[1790]: Session 11 logged out. Waiting for processes to exit. Mar 2 13:20:14.675493 systemd-logind[1790]: Removed session 11. Mar 2 13:20:18.087232 systemd[1]: run-containerd-runc-k8s.io-4cd8699e19f67b0ba136b2d192ed6a2e60ebafcb7fe4f8a7267930df0052002e-runc.e29gjr.mount: Deactivated successfully. Mar 2 13:20:19.037123 containerd[1823]: time="2026-03-02T13:20:19.036805096Z" level=info msg="StopPodSandbox for \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\"" Mar 2 13:20:19.105027 containerd[1823]: 2026-03-02 13:20:19.070 [WARNING][6066] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d18edb38-1e4f-4532-a511-77bd2348b866", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7494d65b57", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd", Pod:"csi-node-driver-j7z7n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califddba0a3ecb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:20:19.105027 containerd[1823]: 2026-03-02 13:20:19.070 [INFO][6066] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Mar 2 13:20:19.105027 containerd[1823]: 2026-03-02 13:20:19.070 [INFO][6066] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" iface="eth0" netns="" Mar 2 13:20:19.105027 containerd[1823]: 2026-03-02 13:20:19.070 [INFO][6066] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Mar 2 13:20:19.105027 containerd[1823]: 2026-03-02 13:20:19.070 [INFO][6066] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Mar 2 13:20:19.105027 containerd[1823]: 2026-03-02 13:20:19.090 [INFO][6074] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" HandleID="k8s-pod-network.ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Workload="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0" Mar 2 13:20:19.105027 containerd[1823]: 2026-03-02 13:20:19.090 [INFO][6074] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:20:19.105027 containerd[1823]: 2026-03-02 13:20:19.090 [INFO][6074] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:20:19.105027 containerd[1823]: 2026-03-02 13:20:19.099 [WARNING][6074] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" HandleID="k8s-pod-network.ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Workload="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0" Mar 2 13:20:19.105027 containerd[1823]: 2026-03-02 13:20:19.099 [INFO][6074] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" HandleID="k8s-pod-network.ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Workload="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0" Mar 2 13:20:19.105027 containerd[1823]: 2026-03-02 13:20:19.100 [INFO][6074] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:20:19.105027 containerd[1823]: 2026-03-02 13:20:19.103 [INFO][6066] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Mar 2 13:20:19.105478 containerd[1823]: time="2026-03-02T13:20:19.105093584Z" level=info msg="TearDown network for sandbox \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\" successfully" Mar 2 13:20:19.105478 containerd[1823]: time="2026-03-02T13:20:19.105127824Z" level=info msg="StopPodSandbox for \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\" returns successfully" Mar 2 13:20:19.105715 containerd[1823]: time="2026-03-02T13:20:19.105688824Z" level=info msg="RemovePodSandbox for \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\"" Mar 2 13:20:19.105761 containerd[1823]: time="2026-03-02T13:20:19.105742504Z" level=info msg="Forcibly stopping sandbox \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\"" Mar 2 13:20:19.178566 containerd[1823]: 2026-03-02 13:20:19.143 [WARNING][6088] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d18edb38-1e4f-4532-a511-77bd2348b866", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 18, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7494d65b57", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-5317e0e64c", ContainerID:"375fce2a0d223b48f91405d0fdd95d7a18fadd5c0a5eea5562b3dd373e705bcd", Pod:"csi-node-driver-j7z7n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califddba0a3ecb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 2 13:20:19.178566 containerd[1823]: 2026-03-02 13:20:19.143 [INFO][6088] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43"
Mar 2 13:20:19.178566 containerd[1823]: 2026-03-02 13:20:19.143 [INFO][6088] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" iface="eth0" netns=""
Mar 2 13:20:19.178566 containerd[1823]: 2026-03-02 13:20:19.143 [INFO][6088] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43"
Mar 2 13:20:19.178566 containerd[1823]: 2026-03-02 13:20:19.143 [INFO][6088] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43"
Mar 2 13:20:19.178566 containerd[1823]: 2026-03-02 13:20:19.164 [INFO][6095] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" HandleID="k8s-pod-network.ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Workload="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0"
Mar 2 13:20:19.178566 containerd[1823]: 2026-03-02 13:20:19.164 [INFO][6095] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 2 13:20:19.178566 containerd[1823]: 2026-03-02 13:20:19.164 [INFO][6095] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 2 13:20:19.178566 containerd[1823]: 2026-03-02 13:20:19.173 [WARNING][6095] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" HandleID="k8s-pod-network.ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Workload="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0"
Mar 2 13:20:19.178566 containerd[1823]: 2026-03-02 13:20:19.173 [INFO][6095] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" HandleID="k8s-pod-network.ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43" Workload="ci--4081.3.101--5317e0e64c-k8s-csi--node--driver--j7z7n-eth0"
Mar 2 13:20:19.178566 containerd[1823]: 2026-03-02 13:20:19.174 [INFO][6095] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 2 13:20:19.178566 containerd[1823]: 2026-03-02 13:20:19.176 [INFO][6088] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43"
Mar 2 13:20:19.179737 containerd[1823]: time="2026-03-02T13:20:19.178640030Z" level=info msg="TearDown network for sandbox \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\" successfully"
Mar 2 13:20:19.200392 containerd[1823]: time="2026-03-02T13:20:19.200339540Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 2 13:20:19.200562 containerd[1823]: time="2026-03-02T13:20:19.200429580Z" level=info msg="RemovePodSandbox \"ebff7c46a1143709b80ab167f4afb3ddaa7187ea7d55d63e7d5162e437378d43\" returns successfully"
Mar 2 13:20:19.736486 systemd[1]: Started sshd@9-10.200.20.18:22-10.200.16.10:60142.service - OpenSSH per-connection server daemon (10.200.16.10:60142).
Mar 2 13:20:20.097773 systemd[1]: run-containerd-runc-k8s.io-9c89d3d911480793d90183451344e11dceab263d3ecb3a419ca905537604a29f-runc.plynA5.mount: Deactivated successfully.
Mar 2 13:20:20.218840 sshd[6101]: Accepted publickey for core from 10.200.16.10 port 60142 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:20:20.220184 sshd[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:20.224294 systemd-logind[1790]: New session 12 of user core.
Mar 2 13:20:20.228562 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 2 13:20:20.637074 sshd[6101]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:20.640278 systemd[1]: sshd@9-10.200.20.18:22-10.200.16.10:60142.service: Deactivated successfully.
Mar 2 13:20:20.645384 systemd[1]: session-12.scope: Deactivated successfully.
Mar 2 13:20:20.646141 systemd-logind[1790]: Session 12 logged out. Waiting for processes to exit.
Mar 2 13:20:20.648522 systemd-logind[1790]: Removed session 12.
Mar 2 13:20:25.722550 systemd[1]: Started sshd@10-10.200.20.18:22-10.200.16.10:55516.service - OpenSSH per-connection server daemon (10.200.16.10:55516).
Mar 2 13:20:26.212677 sshd[6133]: Accepted publickey for core from 10.200.16.10 port 55516 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:20:26.213646 sshd[6133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:26.218441 systemd-logind[1790]: New session 13 of user core.
Mar 2 13:20:26.223530 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 2 13:20:26.634067 sshd[6133]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:26.637017 systemd-logind[1790]: Session 13 logged out. Waiting for processes to exit.
Mar 2 13:20:26.637273 systemd[1]: sshd@10-10.200.20.18:22-10.200.16.10:55516.service: Deactivated successfully.
Mar 2 13:20:26.643050 systemd[1]: session-13.scope: Deactivated successfully.
Mar 2 13:20:26.644259 systemd-logind[1790]: Removed session 13.
Mar 2 13:20:31.718502 systemd[1]: Started sshd@11-10.200.20.18:22-10.200.16.10:51706.service - OpenSSH per-connection server daemon (10.200.16.10:51706).
Mar 2 13:20:32.205591 sshd[6165]: Accepted publickey for core from 10.200.16.10 port 51706 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:20:32.206947 sshd[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:32.211153 systemd-logind[1790]: New session 14 of user core.
Mar 2 13:20:32.217533 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 2 13:20:32.637663 sshd[6165]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:32.643620 systemd[1]: sshd@11-10.200.20.18:22-10.200.16.10:51706.service: Deactivated successfully.
Mar 2 13:20:32.647834 systemd[1]: session-14.scope: Deactivated successfully.
Mar 2 13:20:32.648918 systemd-logind[1790]: Session 14 logged out. Waiting for processes to exit.
Mar 2 13:20:32.649918 systemd-logind[1790]: Removed session 14.
Mar 2 13:20:32.723243 systemd[1]: Started sshd@12-10.200.20.18:22-10.200.16.10:51708.service - OpenSSH per-connection server daemon (10.200.16.10:51708).
Mar 2 13:20:33.207377 sshd[6180]: Accepted publickey for core from 10.200.16.10 port 51708 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:20:33.208851 sshd[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:33.213472 systemd-logind[1790]: New session 15 of user core.
Mar 2 13:20:33.218493 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 2 13:20:33.712770 sshd[6180]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:33.717280 systemd[1]: sshd@12-10.200.20.18:22-10.200.16.10:51708.service: Deactivated successfully.
Mar 2 13:20:33.719175 systemd-logind[1790]: Session 15 logged out. Waiting for processes to exit.
Mar 2 13:20:33.721001 systemd[1]: session-15.scope: Deactivated successfully.
Mar 2 13:20:33.723441 systemd-logind[1790]: Removed session 15.
Mar 2 13:20:33.797510 systemd[1]: Started sshd@13-10.200.20.18:22-10.200.16.10:51720.service - OpenSSH per-connection server daemon (10.200.16.10:51720).
Mar 2 13:20:34.286569 sshd[6207]: Accepted publickey for core from 10.200.16.10 port 51720 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:20:34.288655 sshd[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:34.293420 systemd-logind[1790]: New session 16 of user core.
Mar 2 13:20:34.298489 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 2 13:20:34.722104 sshd[6207]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:34.727487 systemd[1]: sshd@13-10.200.20.18:22-10.200.16.10:51720.service: Deactivated successfully.
Mar 2 13:20:34.731120 systemd[1]: session-16.scope: Deactivated successfully.
Mar 2 13:20:34.732333 systemd-logind[1790]: Session 16 logged out. Waiting for processes to exit.
Mar 2 13:20:34.733495 systemd-logind[1790]: Removed session 16.
Mar 2 13:20:39.810529 systemd[1]: Started sshd@14-10.200.20.18:22-10.200.16.10:51730.service - OpenSSH per-connection server daemon (10.200.16.10:51730).
Mar 2 13:20:40.296550 sshd[6242]: Accepted publickey for core from 10.200.16.10 port 51730 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:20:40.294914 sshd[6242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:40.303264 systemd-logind[1790]: New session 17 of user core.
Mar 2 13:20:40.312543 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 2 13:20:40.715543 sshd[6242]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:40.719892 systemd[1]: sshd@14-10.200.20.18:22-10.200.16.10:51730.service: Deactivated successfully.
Mar 2 13:20:40.723606 systemd[1]: session-17.scope: Deactivated successfully.
Mar 2 13:20:40.725795 systemd-logind[1790]: Session 17 logged out. Waiting for processes to exit.
Mar 2 13:20:40.727608 systemd-logind[1790]: Removed session 17.
Mar 2 13:20:40.801252 systemd[1]: Started sshd@15-10.200.20.18:22-10.200.16.10:58970.service - OpenSSH per-connection server daemon (10.200.16.10:58970).
Mar 2 13:20:41.288599 sshd[6277]: Accepted publickey for core from 10.200.16.10 port 58970 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:20:41.289980 sshd[6277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:41.295419 systemd-logind[1790]: New session 18 of user core.
Mar 2 13:20:41.302624 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 2 13:20:41.840348 sshd[6277]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:41.845133 systemd[1]: sshd@15-10.200.20.18:22-10.200.16.10:58970.service: Deactivated successfully.
Mar 2 13:20:41.851449 systemd[1]: session-18.scope: Deactivated successfully.
Mar 2 13:20:41.854913 systemd-logind[1790]: Session 18 logged out. Waiting for processes to exit.
Mar 2 13:20:41.856207 systemd-logind[1790]: Removed session 18.
Mar 2 13:20:41.924527 systemd[1]: Started sshd@16-10.200.20.18:22-10.200.16.10:58982.service - OpenSSH per-connection server daemon (10.200.16.10:58982).
Mar 2 13:20:42.408439 sshd[6307]: Accepted publickey for core from 10.200.16.10 port 58982 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:20:42.410581 sshd[6307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:42.415403 systemd-logind[1790]: New session 19 of user core.
Mar 2 13:20:42.417632 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 2 13:20:43.647464 sshd[6307]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:43.655548 systemd[1]: sshd@16-10.200.20.18:22-10.200.16.10:58982.service: Deactivated successfully.
Mar 2 13:20:43.667940 systemd[1]: session-19.scope: Deactivated successfully.
Mar 2 13:20:43.671270 systemd-logind[1790]: Session 19 logged out. Waiting for processes to exit.
Mar 2 13:20:43.672699 systemd-logind[1790]: Removed session 19.
Mar 2 13:20:43.733007 systemd[1]: Started sshd@17-10.200.20.18:22-10.200.16.10:58998.service - OpenSSH per-connection server daemon (10.200.16.10:58998).
Mar 2 13:20:44.225636 sshd[6357]: Accepted publickey for core from 10.200.16.10 port 58998 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:20:44.227479 sshd[6357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:44.233899 systemd-logind[1790]: New session 20 of user core.
Mar 2 13:20:44.239589 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 2 13:20:44.824255 sshd[6357]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:44.832607 systemd[1]: sshd@17-10.200.20.18:22-10.200.16.10:58998.service: Deactivated successfully.
Mar 2 13:20:44.838536 systemd-logind[1790]: Session 20 logged out. Waiting for processes to exit.
Mar 2 13:20:44.839277 systemd[1]: session-20.scope: Deactivated successfully.
Mar 2 13:20:44.843868 systemd-logind[1790]: Removed session 20.
Mar 2 13:20:44.925668 systemd[1]: Started sshd@18-10.200.20.18:22-10.200.16.10:59008.service - OpenSSH per-connection server daemon (10.200.16.10:59008).
Mar 2 13:20:45.427728 sshd[6373]: Accepted publickey for core from 10.200.16.10 port 59008 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:20:45.429322 sshd[6373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:45.437067 systemd-logind[1790]: New session 21 of user core.
Mar 2 13:20:45.443641 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 2 13:20:45.848864 sshd[6373]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:45.851854 systemd-logind[1790]: Session 21 logged out. Waiting for processes to exit.
Mar 2 13:20:45.852410 systemd[1]: sshd@18-10.200.20.18:22-10.200.16.10:59008.service: Deactivated successfully.
Mar 2 13:20:45.857052 systemd[1]: session-21.scope: Deactivated successfully.
Mar 2 13:20:45.859778 systemd-logind[1790]: Removed session 21.
Mar 2 13:20:50.095005 systemd[1]: run-containerd-runc-k8s.io-9c89d3d911480793d90183451344e11dceab263d3ecb3a419ca905537604a29f-runc.Cn9Pgu.mount: Deactivated successfully.
Mar 2 13:20:50.934522 systemd[1]: Started sshd@19-10.200.20.18:22-10.200.16.10:46734.service - OpenSSH per-connection server daemon (10.200.16.10:46734).
Mar 2 13:20:51.420248 sshd[6427]: Accepted publickey for core from 10.200.16.10 port 46734 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:20:51.421911 sshd[6427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:51.425764 systemd-logind[1790]: New session 22 of user core.
Mar 2 13:20:51.435527 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 2 13:20:51.837483 sshd[6427]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:51.841212 systemd[1]: sshd@19-10.200.20.18:22-10.200.16.10:46734.service: Deactivated successfully.
Mar 2 13:20:51.844724 systemd[1]: session-22.scope: Deactivated successfully.
Mar 2 13:20:51.846679 systemd-logind[1790]: Session 22 logged out. Waiting for processes to exit.
Mar 2 13:20:51.848876 systemd-logind[1790]: Removed session 22.
Mar 2 13:20:56.921752 systemd[1]: Started sshd@20-10.200.20.18:22-10.200.16.10:46744.service - OpenSSH per-connection server daemon (10.200.16.10:46744).
Mar 2 13:20:57.408017 sshd[6443]: Accepted publickey for core from 10.200.16.10 port 46744 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:20:57.424332 sshd[6443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:57.430284 systemd-logind[1790]: New session 23 of user core.
Mar 2 13:20:57.438521 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 2 13:20:57.822764 sshd[6443]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:57.826565 systemd[1]: sshd@20-10.200.20.18:22-10.200.16.10:46744.service: Deactivated successfully.
Mar 2 13:20:57.831259 systemd[1]: session-23.scope: Deactivated successfully.
Mar 2 13:20:57.833016 systemd-logind[1790]: Session 23 logged out. Waiting for processes to exit.
Mar 2 13:20:57.834111 systemd-logind[1790]: Removed session 23.
Mar 2 13:21:02.909473 systemd[1]: Started sshd@21-10.200.20.18:22-10.200.16.10:44144.service - OpenSSH per-connection server daemon (10.200.16.10:44144).
Mar 2 13:21:03.391259 sshd[6457]: Accepted publickey for core from 10.200.16.10 port 44144 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:21:03.392710 sshd[6457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:21:03.396985 systemd-logind[1790]: New session 24 of user core.
Mar 2 13:21:03.404063 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 2 13:21:03.805707 sshd[6457]: pam_unix(sshd:session): session closed for user core
Mar 2 13:21:03.809773 systemd[1]: sshd@21-10.200.20.18:22-10.200.16.10:44144.service: Deactivated successfully.
Mar 2 13:21:03.813591 systemd[1]: session-24.scope: Deactivated successfully.
Mar 2 13:21:03.814679 systemd-logind[1790]: Session 24 logged out. Waiting for processes to exit.
Mar 2 13:21:03.815764 systemd-logind[1790]: Removed session 24.
Mar 2 13:21:08.899935 systemd[1]: Started sshd@22-10.200.20.18:22-10.200.16.10:44148.service - OpenSSH per-connection server daemon (10.200.16.10:44148).
Mar 2 13:21:09.386710 sshd[6471]: Accepted publickey for core from 10.200.16.10 port 44148 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:21:09.388199 sshd[6471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:21:09.392553 systemd-logind[1790]: New session 25 of user core.
Mar 2 13:21:09.398474 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 2 13:21:09.801472 sshd[6471]: pam_unix(sshd:session): session closed for user core
Mar 2 13:21:09.806718 systemd[1]: sshd@22-10.200.20.18:22-10.200.16.10:44148.service: Deactivated successfully.
Mar 2 13:21:09.810573 systemd[1]: session-25.scope: Deactivated successfully.
Mar 2 13:21:09.811499 systemd-logind[1790]: Session 25 logged out. Waiting for processes to exit.
Mar 2 13:21:09.812613 systemd-logind[1790]: Removed session 25.
Mar 2 13:21:14.890608 systemd[1]: Started sshd@23-10.200.20.18:22-10.200.16.10:37782.service - OpenSSH per-connection server daemon (10.200.16.10:37782).
Mar 2 13:21:15.413352 sshd[6526]: Accepted publickey for core from 10.200.16.10 port 37782 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:21:15.439617 sshd[6526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:21:15.448845 systemd-logind[1790]: New session 26 of user core.
Mar 2 13:21:15.456573 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 2 13:21:15.876511 sshd[6526]: pam_unix(sshd:session): session closed for user core
Mar 2 13:21:15.881660 systemd-logind[1790]: Session 26 logged out. Waiting for processes to exit.
Mar 2 13:21:15.882576 systemd[1]: sshd@23-10.200.20.18:22-10.200.16.10:37782.service: Deactivated successfully.
Mar 2 13:21:15.885877 systemd[1]: session-26.scope: Deactivated successfully.
Mar 2 13:21:15.888023 systemd-logind[1790]: Removed session 26.