Apr 24 23:54:53.111661 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 24 22:11:38 -00 2026 Apr 24 23:54:53.111690 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 24 23:54:53.111705 kernel: BIOS-provided physical RAM map: Apr 24 23:54:53.111711 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 24 23:54:53.111717 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Apr 24 23:54:53.111727 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable Apr 24 23:54:53.111735 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved Apr 24 23:54:53.111742 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable Apr 24 23:54:53.111754 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20 Apr 24 23:54:53.111760 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved Apr 24 23:54:53.111766 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Apr 24 23:54:53.111776 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Apr 24 23:54:53.111783 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Apr 24 23:54:53.111789 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Apr 24 23:54:53.111803 kernel: printk: bootconsole [earlyser0] enabled Apr 24 23:54:53.111810 kernel: NX (Execute Disable) protection: active Apr 24 23:54:53.111817 kernel: APIC: Static calls initialized Apr 
24 23:54:53.111829 kernel: efi: EFI v2.7 by Microsoft Apr 24 23:54:53.111836 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ee7f698 Apr 24 23:54:53.111843 kernel: SMBIOS 3.1.0 present. Apr 24 23:54:53.111854 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/08/2026 Apr 24 23:54:53.111861 kernel: Hypervisor detected: Microsoft Hyper-V Apr 24 23:54:53.111870 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Apr 24 23:54:53.111880 kernel: Hyper-V: Host Build 10.0.26102.1277-1-0 Apr 24 23:54:53.111887 kernel: Hyper-V: Nested features: 0x1e0101 Apr 24 23:54:53.111900 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Apr 24 23:54:53.111907 kernel: Hyper-V: Using hypercall for remote TLB flush Apr 24 23:54:53.111915 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Apr 24 23:54:53.111926 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Apr 24 23:54:53.111934 kernel: tsc: Marking TSC unstable due to running on Hyper-V Apr 24 23:54:53.111941 kernel: tsc: Detected 2593.905 MHz processor Apr 24 23:54:53.111953 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 24 23:54:53.111960 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 24 23:54:53.111967 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Apr 24 23:54:53.111979 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 24 23:54:53.111987 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 24 23:54:53.111994 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Apr 24 23:54:53.112005 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Apr 24 23:54:53.112013 kernel: Using GB pages for direct mapping Apr 24 23:54:53.112030 kernel: 
Secure boot disabled Apr 24 23:54:53.112043 kernel: ACPI: Early table checksum verification disabled Apr 24 23:54:53.112052 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Apr 24 23:54:53.112064 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:53.112071 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:53.112082 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Apr 24 23:54:53.112091 kernel: ACPI: FACS 0x000000003FFFE000 000040 Apr 24 23:54:53.112098 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:53.112110 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:53.112120 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:53.112128 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:53.112140 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:53.112147 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:53.112159 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Apr 24 23:54:53.112167 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Apr 24 23:54:53.112174 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Apr 24 23:54:53.112186 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Apr 24 23:54:53.112193 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Apr 24 23:54:53.112206 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Apr 24 23:54:53.112215 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Apr 24 23:54:53.112222 kernel: ACPI: Reserving SRAT table 
memory at [mem 0x3ffd4000-0x3ffd41df] Apr 24 23:54:53.112234 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Apr 24 23:54:53.112242 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 24 23:54:53.112250 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 24 23:54:53.112261 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Apr 24 23:54:53.112269 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Apr 24 23:54:53.112279 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Apr 24 23:54:53.112290 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Apr 24 23:54:53.112297 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Apr 24 23:54:53.112309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Apr 24 23:54:53.112317 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Apr 24 23:54:53.112324 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Apr 24 23:54:53.112336 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Apr 24 23:54:53.112343 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Apr 24 23:54:53.112354 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Apr 24 23:54:53.112365 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Apr 24 23:54:53.112384 kernel: Zone ranges: Apr 24 23:54:53.112392 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 24 23:54:53.112399 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 24 23:54:53.112411 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Apr 24 23:54:53.112421 kernel: Movable zone start for each node Apr 24 23:54:53.112430 kernel: Early memory node ranges Apr 24 23:54:53.112441 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 24 23:54:53.112450 kernel: node 0: [mem 
0x0000000000100000-0x000000000437dfff] Apr 24 23:54:53.112466 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff] Apr 24 23:54:53.112478 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Apr 24 23:54:53.112486 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Apr 24 23:54:53.112493 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Apr 24 23:54:53.112500 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 24 23:54:53.112508 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 24 23:54:53.112515 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Apr 24 23:54:53.112531 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Apr 24 23:54:53.112544 kernel: ACPI: PM-Timer IO Port: 0x408 Apr 24 23:54:53.112554 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Apr 24 23:54:53.112561 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Apr 24 23:54:53.112583 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 24 23:54:53.112597 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 24 23:54:53.112604 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Apr 24 23:54:53.112612 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 24 23:54:53.112631 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Apr 24 23:54:53.112646 kernel: Booting paravirtualized kernel on Hyper-V Apr 24 23:54:53.112656 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 24 23:54:53.112666 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 24 23:54:53.112684 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 24 23:54:53.112701 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 24 23:54:53.112712 kernel: pcpu-alloc: [0] 0 1 Apr 24 23:54:53.112719 kernel: Hyper-V: PV spinlocks enabled Apr 24 23:54:53.112728 
kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 24 23:54:53.112748 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 24 23:54:53.112763 kernel: random: crng init done Apr 24 23:54:53.112780 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 24 23:54:53.112789 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 24 23:54:53.112796 kernel: Fallback order for Node 0: 0 Apr 24 23:54:53.112808 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321 Apr 24 23:54:53.112822 kernel: Policy zone: Normal Apr 24 23:54:53.112838 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 24 23:54:53.112849 kernel: software IO TLB: area num 2. Apr 24 23:54:53.112857 kernel: Memory: 8056444K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 326524K reserved, 0K cma-reserved) Apr 24 23:54:53.112867 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 24 23:54:53.112895 kernel: ftrace: allocating 37996 entries in 149 pages Apr 24 23:54:53.112903 kernel: ftrace: allocated 149 pages with 4 groups Apr 24 23:54:53.112916 kernel: Dynamic Preempt: voluntary Apr 24 23:54:53.112939 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 24 23:54:53.112952 kernel: rcu: RCU event tracing is enabled. Apr 24 23:54:53.112960 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 24 23:54:53.112979 kernel: Trampoline variant of Tasks RCU enabled. 
Apr 24 23:54:53.112997 kernel: Rude variant of Tasks RCU enabled. Apr 24 23:54:53.113011 kernel: Tracing variant of Tasks RCU enabled. Apr 24 23:54:53.113022 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 24 23:54:53.113033 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 24 23:54:53.113057 kernel: Using NULL legacy PIC Apr 24 23:54:53.113072 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Apr 24 23:54:53.113083 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 24 23:54:53.113091 kernel: Console: colour dummy device 80x25 Apr 24 23:54:53.113101 kernel: printk: console [tty1] enabled Apr 24 23:54:53.113116 kernel: printk: console [ttyS0] enabled Apr 24 23:54:53.113135 kernel: printk: bootconsole [earlyser0] disabled Apr 24 23:54:53.113144 kernel: ACPI: Core revision 20230628 Apr 24 23:54:53.113152 kernel: Failed to register legacy timer interrupt Apr 24 23:54:53.113168 kernel: APIC: Switch to symmetric I/O mode setup Apr 24 23:54:53.113184 kernel: Hyper-V: enabling crash_kexec_post_notifiers Apr 24 23:54:53.113193 kernel: Hyper-V: Using IPI hypercalls Apr 24 23:54:53.113201 kernel: APIC: send_IPI() replaced with hv_send_ipi() Apr 24 23:54:53.113216 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Apr 24 23:54:53.113232 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Apr 24 23:54:53.113250 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Apr 24 23:54:53.113258 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Apr 24 23:54:53.113266 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Apr 24 23:54:53.113285 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
5187.81 BogoMIPS (lpj=2593905) Apr 24 23:54:53.113301 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 24 23:54:53.113310 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 24 23:54:53.113318 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 24 23:54:53.113332 kernel: Spectre V2 : Mitigation: Retpolines Apr 24 23:54:53.113347 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 24 23:54:53.113356 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 24 23:54:53.113367 kernel: RETBleed: Vulnerable Apr 24 23:54:53.115416 kernel: Speculative Store Bypass: Vulnerable Apr 24 23:54:53.115435 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Apr 24 23:54:53.115451 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 24 23:54:53.115465 kernel: active return thunk: its_return_thunk Apr 24 23:54:53.115480 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 24 23:54:53.115494 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 24 23:54:53.115509 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 24 23:54:53.115524 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 24 23:54:53.115539 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 24 23:54:53.115558 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 24 23:54:53.115573 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 24 23:54:53.115588 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 24 23:54:53.115603 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 24 23:54:53.115618 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 24 23:54:53.115633 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 24 23:54:53.115648 kernel: 
x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 24 23:54:53.115663 kernel: Freeing SMP alternatives memory: 32K Apr 24 23:54:53.115678 kernel: pid_max: default: 32768 minimum: 301 Apr 24 23:54:53.115692 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 24 23:54:53.115707 kernel: landlock: Up and running. Apr 24 23:54:53.115719 kernel: SELinux: Initializing. Apr 24 23:54:53.115737 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 24 23:54:53.115753 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 24 23:54:53.115769 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Apr 24 23:54:53.115786 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:54:53.115802 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:54:53.115819 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:54:53.115836 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Apr 24 23:54:53.115852 kernel: signal: max sigframe size: 3632 Apr 24 23:54:53.115868 kernel: rcu: Hierarchical SRCU implementation. Apr 24 23:54:53.115887 kernel: rcu: Max phase no-delay instances is 400. Apr 24 23:54:53.115904 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 24 23:54:53.115919 kernel: smp: Bringing up secondary CPUs ... Apr 24 23:54:53.115934 kernel: smpboot: x86: Booting SMP configuration: Apr 24 23:54:53.115949 kernel: .... node #0, CPUs: #1 Apr 24 23:54:53.115964 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. 
Apr 24 23:54:53.115981 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 24 23:54:53.115995 kernel: smp: Brought up 1 node, 2 CPUs Apr 24 23:54:53.116010 kernel: smpboot: Max logical packages: 1 Apr 24 23:54:53.116027 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Apr 24 23:54:53.116041 kernel: devtmpfs: initialized Apr 24 23:54:53.116056 kernel: x86/mm: Memory block size: 128MB Apr 24 23:54:53.116070 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Apr 24 23:54:53.116085 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 24 23:54:53.116100 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 24 23:54:53.116114 kernel: pinctrl core: initialized pinctrl subsystem Apr 24 23:54:53.116129 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 24 23:54:53.116144 kernel: audit: initializing netlink subsys (disabled) Apr 24 23:54:53.116161 kernel: audit: type=2000 audit(1777074891.029:1): state=initialized audit_enabled=0 res=1 Apr 24 23:54:53.116175 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 24 23:54:53.116190 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 24 23:54:53.116205 kernel: cpuidle: using governor menu Apr 24 23:54:53.116220 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 24 23:54:53.116235 kernel: dca service started, version 1.12.1 Apr 24 23:54:53.116250 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff] Apr 24 23:54:53.116265 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Apr 24 23:54:53.116279 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 24 23:54:53.116297 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 24 23:54:53.116312 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 24 23:54:53.116327 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 24 23:54:53.116342 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 24 23:54:53.116357 kernel: ACPI: Added _OSI(Module Device) Apr 24 23:54:53.118386 kernel: ACPI: Added _OSI(Processor Device) Apr 24 23:54:53.118406 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 24 23:54:53.118416 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 24 23:54:53.118432 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 24 23:54:53.118441 kernel: ACPI: Interpreter enabled Apr 24 23:54:53.118452 kernel: ACPI: PM: (supports S0 S5) Apr 24 23:54:53.118461 kernel: ACPI: Using IOAPIC for interrupt routing Apr 24 23:54:53.118470 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 24 23:54:53.118482 kernel: PCI: Ignoring E820 reservations for host bridge windows Apr 24 23:54:53.118491 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Apr 24 23:54:53.118509 kernel: iommu: Default domain type: Translated Apr 24 23:54:53.118519 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 24 23:54:53.118527 kernel: efivars: Registered efivars operations Apr 24 23:54:53.118542 kernel: PCI: Using ACPI for IRQ routing Apr 24 23:54:53.118550 kernel: PCI: System does not support PCI Apr 24 23:54:53.118562 kernel: vgaarb: loaded Apr 24 23:54:53.118571 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Apr 24 23:54:53.118579 kernel: VFS: Disk quotas dquot_6.6.0 Apr 24 23:54:53.118592 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 24 23:54:53.118600 kernel: pnp: PnP ACPI init Apr 24 23:54:53.118612 kernel: pnp: PnP ACPI: found 3 devices Apr 24 23:54:53.118621 kernel: 
clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 24 23:54:53.118635 kernel: NET: Registered PF_INET protocol family Apr 24 23:54:53.118644 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 24 23:54:53.118657 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Apr 24 23:54:53.118665 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 24 23:54:53.118673 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 24 23:54:53.118681 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 24 23:54:53.118694 kernel: TCP: Hash tables configured (established 65536 bind 65536) Apr 24 23:54:53.118702 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 24 23:54:53.118715 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 24 23:54:53.118725 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 24 23:54:53.118733 kernel: NET: Registered PF_XDP protocol family Apr 24 23:54:53.118741 kernel: PCI: CLS 0 bytes, default 64 Apr 24 23:54:53.118754 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 24 23:54:53.118762 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB) Apr 24 23:54:53.118771 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 24 23:54:53.118783 kernel: Initialise system trusted keyrings Apr 24 23:54:53.118791 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Apr 24 23:54:53.118814 kernel: Key type asymmetric registered Apr 24 23:54:53.118824 kernel: Asymmetric key parser 'x509' registered Apr 24 23:54:53.118832 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 24 23:54:53.118842 kernel: io scheduler mq-deadline registered Apr 24 23:54:53.118853 kernel: io scheduler kyber registered Apr 
24 23:54:53.118861 kernel: io scheduler bfq registered Apr 24 23:54:53.118873 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 24 23:54:53.118882 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 24 23:54:53.118890 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 24 23:54:53.118902 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 24 23:54:53.118913 kernel: i8042: PNP: No PS/2 controller found. Apr 24 23:54:53.119064 kernel: rtc_cmos 00:02: registered as rtc0 Apr 24 23:54:53.119158 kernel: rtc_cmos 00:02: setting system clock to 2026-04-24T23:54:52 UTC (1777074892) Apr 24 23:54:53.119250 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Apr 24 23:54:53.119265 kernel: intel_pstate: CPU model not supported Apr 24 23:54:53.119274 kernel: efifb: probing for efifb Apr 24 23:54:53.119286 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Apr 24 23:54:53.119298 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Apr 24 23:54:53.119306 kernel: efifb: scrolling: redraw Apr 24 23:54:53.119314 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 24 23:54:53.119327 kernel: Console: switching to colour frame buffer device 128x48 Apr 24 23:54:53.119335 kernel: fb0: EFI VGA frame buffer device Apr 24 23:54:53.119348 kernel: pstore: Using crash dump compression: deflate Apr 24 23:54:53.119356 kernel: pstore: Registered efi_pstore as persistent store backend Apr 24 23:54:53.119364 kernel: NET: Registered PF_INET6 protocol family Apr 24 23:54:53.119384 kernel: Segment Routing with IPv6 Apr 24 23:54:53.119398 kernel: In-situ OAM (IOAM) with IPv6 Apr 24 23:54:53.119407 kernel: NET: Registered PF_PACKET protocol family Apr 24 23:54:53.119415 kernel: Key type dns_resolver registered Apr 24 23:54:53.119428 kernel: IPI shorthand broadcast: enabled Apr 24 23:54:53.119436 kernel: sched_clock: Marking stable (861136700, 47681000)->(1131944700, -223127000) 
Apr 24 23:54:53.119447 kernel: registered taskstats version 1 Apr 24 23:54:53.119457 kernel: Loading compiled-in X.509 certificates Apr 24 23:54:53.119465 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124' Apr 24 23:54:53.119477 kernel: Key type .fscrypt registered Apr 24 23:54:53.119487 kernel: Key type fscrypt-provisioning registered Apr 24 23:54:53.119500 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 24 23:54:53.119508 kernel: ima: Allocated hash algorithm: sha1 Apr 24 23:54:53.119519 kernel: ima: No architecture policies found Apr 24 23:54:53.119529 kernel: clk: Disabling unused clocks Apr 24 23:54:53.119537 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 24 23:54:53.119549 kernel: Write protecting the kernel read-only data: 36864k Apr 24 23:54:53.119558 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 24 23:54:53.119568 kernel: Run /init as init process Apr 24 23:54:53.119580 kernel: with arguments: Apr 24 23:54:53.119588 kernel: /init Apr 24 23:54:53.119601 kernel: with environment: Apr 24 23:54:53.119608 kernel: HOME=/ Apr 24 23:54:53.119621 kernel: TERM=linux Apr 24 23:54:53.119631 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 24 23:54:53.119646 systemd[1]: Detected virtualization microsoft. Apr 24 23:54:53.119655 systemd[1]: Detected architecture x86-64. Apr 24 23:54:53.119671 systemd[1]: Running in initrd. Apr 24 23:54:53.119679 systemd[1]: No hostname configured, using default hostname. Apr 24 23:54:53.119688 systemd[1]: Hostname set to . Apr 24 23:54:53.119701 systemd[1]: Initializing machine ID from random generator. 
Apr 24 23:54:53.119709 systemd[1]: Queued start job for default target initrd.target. Apr 24 23:54:53.119722 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 24 23:54:53.119731 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 24 23:54:53.119744 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 24 23:54:53.119756 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 24 23:54:53.119769 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 24 23:54:53.119781 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 24 23:54:53.119792 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 24 23:54:53.119805 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 24 23:54:53.119814 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 23:54:53.119822 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:54:53.119833 systemd[1]: Reached target paths.target - Path Units. Apr 24 23:54:53.119847 systemd[1]: Reached target slices.target - Slice Units. Apr 24 23:54:53.119855 systemd[1]: Reached target swap.target - Swaps. Apr 24 23:54:53.119868 systemd[1]: Reached target timers.target - Timer Units. Apr 24 23:54:53.119877 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 24 23:54:53.119887 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 24 23:54:53.119898 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 24 23:54:53.119908 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 24 23:54:53.119920 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 24 23:54:53.119932 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 24 23:54:53.119944 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 24 23:54:53.119953 systemd[1]: Reached target sockets.target - Socket Units. Apr 24 23:54:53.119966 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 24 23:54:53.119974 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 24 23:54:53.119986 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 24 23:54:53.119996 systemd[1]: Starting systemd-fsck-usr.service... Apr 24 23:54:53.120004 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 24 23:54:53.120020 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 24 23:54:53.120048 systemd-journald[177]: Collecting audit messages is disabled. Apr 24 23:54:53.120077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:54:53.120089 systemd-journald[177]: Journal started Apr 24 23:54:53.120121 systemd-journald[177]: Runtime Journal (/run/log/journal/68a028dfafe74380a2757dbc0cd6d351) is 8.0M, max 158.7M, 150.7M free. Apr 24 23:54:53.128458 systemd[1]: Started systemd-journald.service - Journal Service. Apr 24 23:54:53.130624 systemd-modules-load[178]: Inserted module 'overlay' Apr 24 23:54:53.137867 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 24 23:54:53.144356 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 23:54:53.150445 systemd[1]: Finished systemd-fsck-usr.service. Apr 24 23:54:53.180312 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Apr 24 23:54:53.180361 kernel: Bridge firewalling registered Apr 24 23:54:53.176578 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 24 23:54:53.182587 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 24 23:54:53.192811 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:54:53.200454 systemd-modules-load[178]: Inserted module 'br_netfilter' Apr 24 23:54:53.205581 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 24 23:54:53.213761 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 24 23:54:53.216931 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 23:54:53.238527 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 23:54:53.248513 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 24 23:54:53.255429 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 23:54:53.262168 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 23:54:53.278683 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:54:53.285545 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 24 23:54:53.292557 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 24 23:54:53.300831 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 24 23:54:53.316914 dracut-cmdline[209]: dracut-dracut-053
Apr 24 23:54:53.319938 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:54:53.358857 systemd-resolved[211]: Positive Trust Anchors:
Apr 24 23:54:53.358873 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:54:53.358927 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:54:53.387544 systemd-resolved[211]: Defaulting to hostname 'linux'.
Apr 24 23:54:53.388830 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:54:53.392087 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:54:53.406386 kernel: SCSI subsystem initialized
Apr 24 23:54:53.416389 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:54:53.428400 kernel: iscsi: registered transport (tcp)
Apr 24 23:54:53.449522 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:54:53.449598 kernel: QLogic iSCSI HBA Driver
Apr 24 23:54:53.486186 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:54:53.498542 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:54:53.530941 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:54:53.531005 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:54:53.534344 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:54:53.574394 kernel: raid6: avx512x4 gen() 18618 MB/s
Apr 24 23:54:53.594389 kernel: raid6: avx512x2 gen() 18859 MB/s
Apr 24 23:54:53.613383 kernel: raid6: avx512x1 gen() 18719 MB/s
Apr 24 23:54:53.632382 kernel: raid6: avx2x4 gen() 18625 MB/s
Apr 24 23:54:53.651388 kernel: raid6: avx2x2 gen() 18763 MB/s
Apr 24 23:54:53.672433 kernel: raid6: avx2x1 gen() 14071 MB/s
Apr 24 23:54:53.672461 kernel: raid6: using algorithm avx512x2 gen() 18859 MB/s
Apr 24 23:54:53.693725 kernel: raid6: .... xor() 30469 MB/s, rmw enabled
Apr 24 23:54:53.693746 kernel: raid6: using avx512x2 recovery algorithm
Apr 24 23:54:53.716398 kernel: xor: automatically using best checksumming function avx
Apr 24 23:54:53.865397 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:54:53.875392 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:54:53.890598 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:54:53.909222 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Apr 24 23:54:53.913838 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:54:53.935549 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:54:53.953079 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Apr 24 23:54:53.983174 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:54:53.993618 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:54:54.035067 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:54:54.050659 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 23:54:54.075714 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:54:54.085049 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:54:54.088942 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:54:54.092914 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:54:54.112583 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:54:54.138197 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:54:54.143469 kernel: cryptd: max_cpu_qlen set to 1000
Apr 24 23:54:54.148888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:54:54.152098 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:54:54.166274 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:54:54.174897 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:54:54.175068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:54:54.178724 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:54:54.198313 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 24 23:54:54.198361 kernel: AES CTR mode by8 optimization enabled
Apr 24 23:54:54.202396 kernel: hv_vmbus: Vmbus version:5.2
Apr 24 23:54:54.202962 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:54:54.218137 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 24 23:54:54.218186 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 24 23:54:54.230637 kernel: PTP clock support registered
Apr 24 23:54:54.231832 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:54:54.234414 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:54:54.246941 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 24 23:54:54.246975 kernel: hv_utils: Registering HyperV Utility Driver
Apr 24 23:54:54.252395 kernel: hv_vmbus: registering driver hv_utils
Apr 24 23:54:54.257256 kernel: hv_utils: Shutdown IC version 3.2
Apr 24 23:54:54.257302 kernel: hv_utils: Heartbeat IC version 3.0
Apr 24 23:54:54.260182 kernel: hv_utils: TimeSync IC version 4.0
Apr 24 23:54:54.164340 systemd-resolved[211]: Clock change detected. Flushing caches.
Apr 24 23:54:54.176602 systemd-journald[177]: Time jumped backwards, rotating.
Apr 24 23:54:54.165159 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:54:54.199933 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 24 23:54:54.212134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:54:54.219112 kernel: hv_vmbus: registering driver hv_netvsc
Apr 24 23:54:54.219139 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 24 23:54:54.228951 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:54:54.250282 kernel: hv_vmbus: registering driver hv_storvsc
Apr 24 23:54:54.259091 kernel: scsi host0: storvsc_host_t
Apr 24 23:54:54.259307 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 24 23:54:54.259343 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Apr 24 23:54:54.259375 kernel: scsi host1: storvsc_host_t
Apr 24 23:54:54.259547 kernel: hv_vmbus: registering driver hid_hyperv
Apr 24 23:54:54.268635 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 24 23:54:54.268683 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 24 23:54:54.285787 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:54:54.302774 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 24 23:54:54.303118 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 24 23:54:54.304820 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 24 23:54:54.318008 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 24 23:54:54.322529 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 24 23:54:54.322698 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#258 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 24 23:54:54.322969 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 24 23:54:54.328216 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 24 23:54:54.328510 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 24 23:54:54.335236 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:54:54.341969 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 24 23:54:54.352758 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 24 23:54:54.418928 kernel: hv_netvsc 7ced8d4a-7996-7ced-8d4a-79967ced8d4a eth0: VF slot 1 added
Apr 24 23:54:54.430577 kernel: hv_vmbus: registering driver hv_pci
Apr 24 23:54:54.430639 kernel: hv_pci 0cd3d1a7-0ea4-44f2-b7ce-32d8c2884180: PCI VMBus probing: Using version 0x10004
Apr 24 23:54:54.438470 kernel: hv_pci 0cd3d1a7-0ea4-44f2-b7ce-32d8c2884180: PCI host bridge to bus 0ea4:00
Apr 24 23:54:54.438735 kernel: pci_bus 0ea4:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Apr 24 23:54:54.441912 kernel: pci_bus 0ea4:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 24 23:54:54.446780 kernel: pci 0ea4:00:02.0: [15b3:1016] type 00 class 0x020000
Apr 24 23:54:54.452767 kernel: pci 0ea4:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 24 23:54:54.456774 kernel: pci 0ea4:00:02.0: enabling Extended Tags
Apr 24 23:54:54.469884 kernel: pci 0ea4:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 0ea4:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Apr 24 23:54:54.476782 kernel: pci_bus 0ea4:00: busn_res: [bus 00-ff] end is updated to 00
Apr 24 23:54:54.477037 kernel: pci 0ea4:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 24 23:54:54.654112 kernel: mlx5_core 0ea4:00:02.0: enabling device (0000 -> 0002)
Apr 24 23:54:54.658760 kernel: mlx5_core 0ea4:00:02.0: firmware version: 14.30.5026
Apr 24 23:54:54.792481 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 24 23:54:54.815814 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (458)
Apr 24 23:54:54.827600 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (450)
Apr 24 23:54:54.840151 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 24 23:54:54.851709 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 24 23:54:54.860461 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 24 23:54:54.860592 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 24 23:54:54.871942 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:54:54.895760 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:54:54.903762 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:54:54.928120 kernel: hv_netvsc 7ced8d4a-7996-7ced-8d4a-79967ced8d4a eth0: VF registering: eth1
Apr 24 23:54:54.929097 kernel: mlx5_core 0ea4:00:02.0 eth1: joined to eth0
Apr 24 23:54:54.935833 kernel: mlx5_core 0ea4:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Apr 24 23:54:54.948764 kernel: mlx5_core 0ea4:00:02.0 enP3748s1: renamed from eth1
Apr 24 23:54:55.914808 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:54:55.916784 disk-uuid[613]: The operation has completed successfully.
Apr 24 23:54:56.007368 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 23:54:56.007498 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 23:54:56.028868 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 23:54:56.036547 sh[727]: Success
Apr 24 23:54:56.063823 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 24 23:54:56.307400 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 23:54:56.319871 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 23:54:56.325817 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 23:54:56.346756 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681
Apr 24 23:54:56.346795 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:54:56.352670 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 24 23:54:56.356927 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 24 23:54:56.359795 kernel: BTRFS info (device dm-0): using free space tree
Apr 24 23:54:56.584284 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 23:54:56.590059 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 23:54:56.601960 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 23:54:56.610880 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 23:54:56.641763 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:54:56.641810 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:54:56.641830 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:54:56.674762 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:54:56.685026 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 24 23:54:56.692366 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:54:56.696523 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 23:54:56.705922 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 23:54:56.713677 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:54:56.737910 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:54:56.762959 systemd-networkd[911]: lo: Link UP
Apr 24 23:54:56.762970 systemd-networkd[911]: lo: Gained carrier
Apr 24 23:54:56.765202 systemd-networkd[911]: Enumeration completed
Apr 24 23:54:56.765284 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:54:56.766173 systemd-networkd[911]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:54:56.766176 systemd-networkd[911]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:54:56.768608 systemd[1]: Reached target network.target - Network.
Apr 24 23:54:56.839764 kernel: mlx5_core 0ea4:00:02.0 enP3748s1: Link up
Apr 24 23:54:56.867769 kernel: hv_netvsc 7ced8d4a-7996-7ced-8d4a-79967ced8d4a eth0: Data path switched to VF: enP3748s1
Apr 24 23:54:56.868123 systemd-networkd[911]: enP3748s1: Link UP
Apr 24 23:54:56.868245 systemd-networkd[911]: eth0: Link UP
Apr 24 23:54:56.868556 systemd-networkd[911]: eth0: Gained carrier
Apr 24 23:54:56.868569 systemd-networkd[911]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:54:56.881371 systemd-networkd[911]: enP3748s1: Gained carrier
Apr 24 23:54:56.912794 systemd-networkd[911]: eth0: DHCPv4 address 10.0.0.31/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 24 23:54:57.518845 ignition[896]: Ignition 2.19.0
Apr 24 23:54:57.518859 ignition[896]: Stage: fetch-offline
Apr 24 23:54:57.518911 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:54:57.518923 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:54:57.519051 ignition[896]: parsed url from cmdline: ""
Apr 24 23:54:57.519056 ignition[896]: no config URL provided
Apr 24 23:54:57.519064 ignition[896]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:54:57.519075 ignition[896]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:54:57.519082 ignition[896]: failed to fetch config: resource requires networking
Apr 24 23:54:57.521139 ignition[896]: Ignition finished successfully
Apr 24 23:54:57.541710 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:54:57.552952 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 24 23:54:57.569531 ignition[919]: Ignition 2.19.0
Apr 24 23:54:57.569544 ignition[919]: Stage: fetch
Apr 24 23:54:57.569773 ignition[919]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:54:57.569788 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:54:57.569896 ignition[919]: parsed url from cmdline: ""
Apr 24 23:54:57.569899 ignition[919]: no config URL provided
Apr 24 23:54:57.569904 ignition[919]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:54:57.569915 ignition[919]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:54:57.569937 ignition[919]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 24 23:54:57.670320 ignition[919]: GET result: OK
Apr 24 23:54:57.670415 ignition[919]: config has been read from IMDS userdata
Apr 24 23:54:57.670448 ignition[919]: parsing config with SHA512: 04c441818855d3f6bebfe1503e74decbc3c19fb2ba3a40d829cd955f5147115c96da34c566e38242c1e59f37da3bc0483c533361620c2ab77295b52ba930b83c
Apr 24 23:54:57.678120 unknown[919]: fetched base config from "system"
Apr 24 23:54:57.678499 unknown[919]: fetched base config from "system"
Apr 24 23:54:57.679064 ignition[919]: fetch: fetch complete
Apr 24 23:54:57.678505 unknown[919]: fetched user config from "azure"
Apr 24 23:54:57.679074 ignition[919]: fetch: fetch passed
Apr 24 23:54:57.679130 ignition[919]: Ignition finished successfully
Apr 24 23:54:57.692180 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 24 23:54:57.702871 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:54:57.724285 ignition[925]: Ignition 2.19.0
Apr 24 23:54:57.724298 ignition[925]: Stage: kargs
Apr 24 23:54:57.727001 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:54:57.724516 ignition[925]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:54:57.724530 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:54:57.725863 ignition[925]: kargs: kargs passed
Apr 24 23:54:57.725913 ignition[925]: Ignition finished successfully
Apr 24 23:54:57.747963 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 23:54:57.764357 ignition[931]: Ignition 2.19.0
Apr 24 23:54:57.764369 ignition[931]: Stage: disks
Apr 24 23:54:57.766834 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:54:57.764592 ignition[931]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:54:57.770280 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:54:57.764605 ignition[931]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:54:57.774863 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:54:57.765474 ignition[931]: disks: disks passed
Apr 24 23:54:57.778486 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:54:57.765517 ignition[931]: Ignition finished successfully
Apr 24 23:54:57.784263 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:54:57.784400 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:54:57.800935 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:54:57.870096 systemd-fsck[939]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Apr 24 23:54:57.875668 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 23:54:57.892900 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 23:54:57.939816 systemd-networkd[911]: eth0: Gained IPv6LL
Apr 24 23:54:57.987759 kernel: EXT4-fs (sda9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none.
Apr 24 23:54:57.988671 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 23:54:57.991872 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:54:58.026881 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:54:58.041760 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (950)
Apr 24 23:54:58.046758 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:54:58.058476 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:54:58.058525 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:54:58.058856 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 23:54:58.066211 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 24 23:54:58.072293 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:54:58.071825 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 23:54:58.071857 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:54:58.089051 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:54:58.091912 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 23:54:58.105892 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 23:54:58.591246 coreos-metadata[965]: Apr 24 23:54:58.591 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 24 23:54:58.595832 coreos-metadata[965]: Apr 24 23:54:58.595 INFO Fetch successful
Apr 24 23:54:58.598758 coreos-metadata[965]: Apr 24 23:54:58.595 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 24 23:54:58.606486 coreos-metadata[965]: Apr 24 23:54:58.606 INFO Fetch successful
Apr 24 23:54:58.632814 coreos-metadata[965]: Apr 24 23:54:58.632 INFO wrote hostname ci-4081.3.6-n-bfbb2fd0ff to /sysroot/etc/hostname
Apr 24 23:54:58.638226 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 24 23:54:58.693688 initrd-setup-root[980]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 23:54:58.724500 initrd-setup-root[987]: cut: /sysroot/etc/group: No such file or directory
Apr 24 23:54:58.744617 initrd-setup-root[994]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 23:54:58.750651 initrd-setup-root[1001]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 23:54:59.501763 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 23:54:59.512897 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 23:54:59.518643 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 23:54:59.531299 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 23:54:59.538218 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:54:59.558948 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 23:54:59.572168 ignition[1074]: INFO : Ignition 2.19.0
Apr 24 23:54:59.572168 ignition[1074]: INFO : Stage: mount
Apr 24 23:54:59.581661 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:54:59.581661 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:54:59.581661 ignition[1074]: INFO : mount: mount passed
Apr 24 23:54:59.581661 ignition[1074]: INFO : Ignition finished successfully
Apr 24 23:54:59.576589 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 23:54:59.596921 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 23:54:59.605844 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:54:59.628756 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1084)
Apr 24 23:54:59.636028 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:54:59.636074 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:54:59.639237 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:54:59.646764 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:54:59.648110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:54:59.678728 ignition[1101]: INFO : Ignition 2.19.0
Apr 24 23:54:59.678728 ignition[1101]: INFO : Stage: files
Apr 24 23:54:59.685148 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:54:59.685148 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:54:59.693860 ignition[1101]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 23:54:59.701035 ignition[1101]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 23:54:59.701035 ignition[1101]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 23:54:59.788458 ignition[1101]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 23:54:59.793302 ignition[1101]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 23:54:59.793302 ignition[1101]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 23:54:59.789487 unknown[1101]: wrote ssh authorized keys file for user: core
Apr 24 23:54:59.846913 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 24 23:54:59.852268 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 24 23:54:59.852268 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:54:59.852268 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 24 23:54:59.912850 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 24 23:54:59.987890 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 24 23:55:00.285235 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 24 23:55:00.661753 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:55:00.661753 ignition[1101]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 24 23:55:00.690176 ignition[1101]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 24 23:55:00.697219 ignition[1101]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 24 23:55:00.697219 ignition[1101]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 24 23:55:00.697219 ignition[1101]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 24 23:55:00.713619 ignition[1101]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:55:00.713619 ignition[1101]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:55:00.713619 ignition[1101]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 24 23:55:00.713619 ignition[1101]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 23:55:00.733362 ignition[1101]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 23:55:00.737382 ignition[1101]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:55:00.737382 ignition[1101]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:55:00.737382 ignition[1101]: INFO : files: files passed
Apr 24 23:55:00.737382 ignition[1101]: INFO : Ignition finished successfully
Apr 24 23:55:00.749418 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 23:55:00.764933 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 23:55:00.772464 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 23:55:00.780721 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 23:55:00.780886 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 23:55:00.799357 initrd-setup-root-after-ignition[1130]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:55:00.799357 initrd-setup-root-after-ignition[1130]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:55:00.809933 initrd-setup-root-after-ignition[1134]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:55:00.815977 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:55:00.821769 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 23:55:00.837956 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 23:55:00.868153 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 23:55:00.868282 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 23:55:00.875002 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 24 23:55:00.885181 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 24 23:55:00.888132 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 24 23:55:00.902879 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 24 23:55:00.916083 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 24 23:55:00.929882 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 24 23:55:00.943275 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:55:00.943554 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 24 23:55:00.944081 systemd[1]: Stopped target timers.target - Timer Units. Apr 24 23:55:00.944672 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 24 23:55:00.944788 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 24 23:55:00.945646 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 24 23:55:00.946661 systemd[1]: Stopped target basic.target - Basic System. Apr 24 23:55:00.947130 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 24 23:55:00.947753 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 24 23:55:00.948223 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 24 23:55:00.948707 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 24 23:55:00.949192 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 24 23:55:00.950157 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 24 23:55:00.950638 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Apr 24 23:55:00.951117 systemd[1]: Stopped target swap.target - Swaps. Apr 24 23:55:00.951542 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 24 23:55:00.951679 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 24 23:55:00.952548 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:55:00.953044 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 23:55:00.953463 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 24 23:55:00.997752 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 24 23:55:01.001561 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 24 23:55:01.001729 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 24 23:55:01.066332 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 24 23:55:01.069785 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 24 23:55:01.077348 systemd[1]: ignition-files.service: Deactivated successfully. Apr 24 23:55:01.077492 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 24 23:55:01.085619 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 24 23:55:01.088841 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 24 23:55:01.100953 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 24 23:55:01.105029 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 24 23:55:01.111894 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 24 23:55:01.114460 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 23:55:01.118717 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 24 23:55:01.118893 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 24 23:55:01.136725 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 24 23:55:01.139862 ignition[1154]: INFO : Ignition 2.19.0 Apr 24 23:55:01.139862 ignition[1154]: INFO : Stage: umount Apr 24 23:55:01.156426 ignition[1154]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:55:01.156426 ignition[1154]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:55:01.156426 ignition[1154]: INFO : umount: umount passed Apr 24 23:55:01.156426 ignition[1154]: INFO : Ignition finished successfully Apr 24 23:55:01.139869 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 24 23:55:01.146364 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 24 23:55:01.146447 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 24 23:55:01.150126 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 24 23:55:01.150235 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 24 23:55:01.156458 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 24 23:55:01.159423 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 24 23:55:01.186666 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 24 23:55:01.186762 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 24 23:55:01.190001 systemd[1]: Stopped target network.target - Network. Apr 24 23:55:01.190090 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 24 23:55:01.190147 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 24 23:55:01.190595 systemd[1]: Stopped target paths.target - Path Units. Apr 24 23:55:01.191056 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 24 23:55:01.200145 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 24 23:55:01.206319 systemd[1]: Stopped target slices.target - Slice Units. 
Apr 24 23:55:01.210878 systemd[1]: Stopped target sockets.target - Socket Units. Apr 24 23:55:01.213417 systemd[1]: iscsid.socket: Deactivated successfully. Apr 24 23:55:01.213461 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 24 23:55:01.213563 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 24 23:55:01.213608 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 24 23:55:01.214167 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 24 23:55:01.214212 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 24 23:55:01.214613 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 24 23:55:01.214648 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 24 23:55:01.217450 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 24 23:55:01.217872 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 24 23:55:01.219950 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 24 23:55:01.220516 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 24 23:55:01.220620 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 24 23:55:01.248169 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 24 23:55:01.248282 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 24 23:55:01.250852 systemd-networkd[911]: eth0: DHCPv6 lease lost Apr 24 23:55:01.256145 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 24 23:55:01.256316 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 24 23:55:01.261674 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 24 23:55:01.265767 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 24 23:55:01.282933 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 24 23:55:01.291213 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Apr 24 23:55:01.291313 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 24 23:55:01.297607 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:55:01.300929 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 24 23:55:01.301030 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 24 23:55:01.319087 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 24 23:55:01.319165 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:55:01.324201 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 24 23:55:01.324258 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 24 23:55:01.364237 kernel: hv_netvsc 7ced8d4a-7996-7ced-8d4a-79967ced8d4a eth0: Data path switched from VF: enP3748s1 Apr 24 23:55:01.364271 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 24 23:55:01.364346 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 23:55:01.376357 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 24 23:55:01.378914 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:55:01.386673 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 24 23:55:01.386768 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 24 23:55:01.393684 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 24 23:55:01.393736 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 24 23:55:01.399826 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 24 23:55:01.399893 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 24 23:55:01.413904 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Apr 24 23:55:01.413974 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 24 23:55:01.422121 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 24 23:55:01.422189 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 23:55:01.437874 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 24 23:55:01.441812 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 24 23:55:01.441869 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:55:01.450264 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 24 23:55:01.450306 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 23:55:01.453820 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 24 23:55:01.453870 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 23:55:01.460233 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:55:01.460301 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:55:01.463592 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 24 23:55:01.463699 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 24 23:55:01.469093 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 24 23:55:01.469182 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 24 23:55:01.476561 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 24 23:55:01.494511 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 24 23:55:01.616815 systemd[1]: Switching root. 
Apr 24 23:55:01.650348 systemd-journald[177]: Journal stopped Apr 24 23:54:53.112030 kernel: Secure boot disabled Apr 24 23:54:53.112043 kernel: ACPI: Early table checksum verification disabled Apr 24 23:54:53.112052 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Apr 24 23:54:53.112064 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:53.112071 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:53.112082 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Apr 24 23:54:53.112091 kernel: ACPI: FACS 0x000000003FFFE000 000040 Apr 24 23:54:53.112098 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:53.112110 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:53.112120 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:53.112128 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:53.112140 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:53.112147 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:54:53.112159 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Apr 24 23:54:53.112167 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Apr 24 23:54:53.112174 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Apr 24 23:54:53.112186 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Apr 24 23:54:53.112193 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Apr 24 23:54:53.112206 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Apr 24 23:54:53.112215 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5057] Apr 24 23:54:53.112222 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df] Apr 24 23:54:53.112234 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Apr 24 23:54:53.112242 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 24 23:54:53.112250 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 24 23:54:53.112261 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Apr 24 23:54:53.112269 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Apr 24 23:54:53.112279 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Apr 24 23:54:53.112290 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Apr 24 23:54:53.112297 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Apr 24 23:54:53.112309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Apr 24 23:54:53.112317 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Apr 24 23:54:53.112324 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Apr 24 23:54:53.112336 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Apr 24 23:54:53.112343 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Apr 24 23:54:53.112354 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Apr 24 23:54:53.112365 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Apr 24 23:54:53.112384 kernel: Zone ranges: Apr 24 23:54:53.112392 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 24 23:54:53.112399 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 24 23:54:53.112411 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Apr 24 23:54:53.112421 kernel: Movable zone start for each node Apr 24 23:54:53.112430 kernel: Early memory node ranges Apr 24 23:54:53.112441 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Apr 24 23:54:53.112450 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff] Apr 24 23:54:53.112466 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff] Apr 24 23:54:53.112478 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Apr 24 23:54:53.112486 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Apr 24 23:54:53.112493 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Apr 24 23:54:53.112500 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 24 23:54:53.112508 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 24 23:54:53.112515 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Apr 24 23:54:53.112531 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Apr 24 23:54:53.112544 kernel: ACPI: PM-Timer IO Port: 0x408 Apr 24 23:54:53.112554 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Apr 24 23:54:53.112561 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Apr 24 23:54:53.112583 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 24 23:54:53.112597 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 24 23:54:53.112604 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Apr 24 23:54:53.112612 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 24 23:54:53.112631 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Apr 24 23:54:53.112646 kernel: Booting paravirtualized kernel on Hyper-V Apr 24 23:54:53.112656 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 24 23:54:53.112666 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 24 23:54:53.112684 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 24 23:54:53.112701 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 24 23:54:53.112712 kernel: pcpu-alloc: [0] 0 1 Apr 24 
23:54:53.112719 kernel: Hyper-V: PV spinlocks enabled Apr 24 23:54:53.112728 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 24 23:54:53.112748 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 24 23:54:53.112763 kernel: random: crng init done Apr 24 23:54:53.112780 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 24 23:54:53.112789 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 24 23:54:53.112796 kernel: Fallback order for Node 0: 0 Apr 24 23:54:53.112808 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321 Apr 24 23:54:53.112822 kernel: Policy zone: Normal Apr 24 23:54:53.112838 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 24 23:54:53.112849 kernel: software IO TLB: area num 2. Apr 24 23:54:53.112857 kernel: Memory: 8056444K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 326524K reserved, 0K cma-reserved) Apr 24 23:54:53.112867 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 24 23:54:53.112895 kernel: ftrace: allocating 37996 entries in 149 pages Apr 24 23:54:53.112903 kernel: ftrace: allocated 149 pages with 4 groups Apr 24 23:54:53.112916 kernel: Dynamic Preempt: voluntary Apr 24 23:54:53.112939 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 24 23:54:53.112952 kernel: rcu: RCU event tracing is enabled. Apr 24 23:54:53.112960 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. 
Apr 24 23:54:53.112979 kernel: Trampoline variant of Tasks RCU enabled. Apr 24 23:54:53.112997 kernel: Rude variant of Tasks RCU enabled. Apr 24 23:54:53.113011 kernel: Tracing variant of Tasks RCU enabled. Apr 24 23:54:53.113022 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 24 23:54:53.113033 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 24 23:54:53.113057 kernel: Using NULL legacy PIC Apr 24 23:54:53.113072 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Apr 24 23:54:53.113083 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 24 23:54:53.113091 kernel: Console: colour dummy device 80x25 Apr 24 23:54:53.113101 kernel: printk: console [tty1] enabled Apr 24 23:54:53.113116 kernel: printk: console [ttyS0] enabled Apr 24 23:54:53.113135 kernel: printk: bootconsole [earlyser0] disabled Apr 24 23:54:53.113144 kernel: ACPI: Core revision 20230628 Apr 24 23:54:53.113152 kernel: Failed to register legacy timer interrupt Apr 24 23:54:53.113168 kernel: APIC: Switch to symmetric I/O mode setup Apr 24 23:54:53.113184 kernel: Hyper-V: enabling crash_kexec_post_notifiers Apr 24 23:54:53.113193 kernel: Hyper-V: Using IPI hypercalls Apr 24 23:54:53.113201 kernel: APIC: send_IPI() replaced with hv_send_ipi() Apr 24 23:54:53.113216 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Apr 24 23:54:53.113232 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Apr 24 23:54:53.113250 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Apr 24 23:54:53.113258 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Apr 24 23:54:53.113266 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Apr 24 23:54:53.113285 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
5187.81 BogoMIPS (lpj=2593905) Apr 24 23:54:53.113301 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 24 23:54:53.113310 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 24 23:54:53.113318 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 24 23:54:53.113332 kernel: Spectre V2 : Mitigation: Retpolines Apr 24 23:54:53.113347 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 24 23:54:53.113356 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 24 23:54:53.113367 kernel: RETBleed: Vulnerable Apr 24 23:54:53.115416 kernel: Speculative Store Bypass: Vulnerable Apr 24 23:54:53.115435 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Apr 24 23:54:53.115451 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 24 23:54:53.115465 kernel: active return thunk: its_return_thunk Apr 24 23:54:53.115480 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 24 23:54:53.115494 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 24 23:54:53.115509 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 24 23:54:53.115524 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 24 23:54:53.115539 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 24 23:54:53.115558 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 24 23:54:53.115573 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 24 23:54:53.115588 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 24 23:54:53.115603 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 24 23:54:53.115618 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 24 23:54:53.115633 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 24 23:54:53.115648 kernel: 
x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 24 23:54:53.115663 kernel: Freeing SMP alternatives memory: 32K Apr 24 23:54:53.115678 kernel: pid_max: default: 32768 minimum: 301 Apr 24 23:54:53.115692 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 24 23:54:53.115707 kernel: landlock: Up and running. Apr 24 23:54:53.115719 kernel: SELinux: Initializing. Apr 24 23:54:53.115737 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 24 23:54:53.115753 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 24 23:54:53.115769 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Apr 24 23:54:53.115786 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:54:53.115802 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:54:53.115819 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:54:53.115836 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Apr 24 23:54:53.115852 kernel: signal: max sigframe size: 3632 Apr 24 23:54:53.115868 kernel: rcu: Hierarchical SRCU implementation. Apr 24 23:54:53.115887 kernel: rcu: Max phase no-delay instances is 400. Apr 24 23:54:53.115904 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 24 23:54:53.115919 kernel: smp: Bringing up secondary CPUs ... Apr 24 23:54:53.115934 kernel: smpboot: x86: Booting SMP configuration: Apr 24 23:54:53.115949 kernel: .... node #0, CPUs: #1 Apr 24 23:54:53.115964 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. 
Apr 24 23:54:53.115981 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 24 23:54:53.115995 kernel: smp: Brought up 1 node, 2 CPUs Apr 24 23:54:53.116010 kernel: smpboot: Max logical packages: 1 Apr 24 23:54:53.116027 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Apr 24 23:54:53.116041 kernel: devtmpfs: initialized Apr 24 23:54:53.116056 kernel: x86/mm: Memory block size: 128MB Apr 24 23:54:53.116070 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Apr 24 23:54:53.116085 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 24 23:54:53.116100 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 24 23:54:53.116114 kernel: pinctrl core: initialized pinctrl subsystem Apr 24 23:54:53.116129 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 24 23:54:53.116144 kernel: audit: initializing netlink subsys (disabled) Apr 24 23:54:53.116161 kernel: audit: type=2000 audit(1777074891.029:1): state=initialized audit_enabled=0 res=1 Apr 24 23:54:53.116175 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 24 23:54:53.116190 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 24 23:54:53.116205 kernel: cpuidle: using governor menu Apr 24 23:54:53.116220 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 24 23:54:53.116235 kernel: dca service started, version 1.12.1 Apr 24 23:54:53.116250 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff] Apr 24 23:54:53.116265 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Apr 24 23:54:53.116279 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 24 23:54:53.116297 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:54:53.116312 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:54:53.116327 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:54:53.116342 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:54:53.116357 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:54:53.118386 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:54:53.118406 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:54:53.118416 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 23:54:53.118432 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 24 23:54:53.118441 kernel: ACPI: Interpreter enabled
Apr 24 23:54:53.118452 kernel: ACPI: PM: (supports S0 S5)
Apr 24 23:54:53.118461 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 23:54:53.118470 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 23:54:53.118482 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 24 23:54:53.118491 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Apr 24 23:54:53.118509 kernel: iommu: Default domain type: Translated
Apr 24 23:54:53.118519 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 23:54:53.118527 kernel: efivars: Registered efivars operations
Apr 24 23:54:53.118542 kernel: PCI: Using ACPI for IRQ routing
Apr 24 23:54:53.118550 kernel: PCI: System does not support PCI
Apr 24 23:54:53.118562 kernel: vgaarb: loaded
Apr 24 23:54:53.118571 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Apr 24 23:54:53.118579 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:54:53.118592 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:54:53.118600 kernel: pnp: PnP ACPI init
Apr 24 23:54:53.118612 kernel: pnp: PnP ACPI: found 3 devices
Apr 24 23:54:53.118621 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 23:54:53.118635 kernel: NET: Registered PF_INET protocol family
Apr 24 23:54:53.118644 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 24 23:54:53.118657 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 24 23:54:53.118665 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:54:53.118673 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 23:54:53.118681 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 24 23:54:53.118694 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 24 23:54:53.118702 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 24 23:54:53.118715 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 24 23:54:53.118725 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 23:54:53.118733 kernel: NET: Registered PF_XDP protocol family
Apr 24 23:54:53.118741 kernel: PCI: CLS 0 bytes, default 64
Apr 24 23:54:53.118754 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 24 23:54:53.118762 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB)
Apr 24 23:54:53.118771 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 24 23:54:53.118783 kernel: Initialise system trusted keyrings
Apr 24 23:54:53.118791 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 24 23:54:53.118814 kernel: Key type asymmetric registered
Apr 24 23:54:53.118824 kernel: Asymmetric key parser 'x509' registered
Apr 24 23:54:53.118832 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 24 23:54:53.118842 kernel: io scheduler mq-deadline registered
Apr 24 23:54:53.118853 kernel: io scheduler kyber registered
Apr 24 23:54:53.118861 kernel: io scheduler bfq registered
Apr 24 23:54:53.118873 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 23:54:53.118882 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 23:54:53.118890 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 23:54:53.118902 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 24 23:54:53.118913 kernel: i8042: PNP: No PS/2 controller found.
Apr 24 23:54:53.119064 kernel: rtc_cmos 00:02: registered as rtc0
Apr 24 23:54:53.119158 kernel: rtc_cmos 00:02: setting system clock to 2026-04-24T23:54:52 UTC (1777074892)
Apr 24 23:54:53.119250 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Apr 24 23:54:53.119265 kernel: intel_pstate: CPU model not supported
Apr 24 23:54:53.119274 kernel: efifb: probing for efifb
Apr 24 23:54:53.119286 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 24 23:54:53.119298 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 24 23:54:53.119306 kernel: efifb: scrolling: redraw
Apr 24 23:54:53.119314 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 24 23:54:53.119327 kernel: Console: switching to colour frame buffer device 128x48
Apr 24 23:54:53.119335 kernel: fb0: EFI VGA frame buffer device
Apr 24 23:54:53.119348 kernel: pstore: Using crash dump compression: deflate
Apr 24 23:54:53.119356 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 24 23:54:53.119364 kernel: NET: Registered PF_INET6 protocol family
Apr 24 23:54:53.119384 kernel: Segment Routing with IPv6
Apr 24 23:54:53.119398 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 23:54:53.119407 kernel: NET: Registered PF_PACKET protocol family
Apr 24 23:54:53.119415 kernel: Key type dns_resolver registered
Apr 24 23:54:53.119428 kernel: IPI shorthand broadcast: enabled
Apr 24 23:54:53.119436 kernel: sched_clock: Marking stable (861136700, 47681000)->(1131944700, -223127000)
Apr 24 23:54:53.119447 kernel: registered taskstats version 1
Apr 24 23:54:53.119457 kernel: Loading compiled-in X.509 certificates
Apr 24 23:54:53.119465 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124'
Apr 24 23:54:53.119477 kernel: Key type .fscrypt registered
Apr 24 23:54:53.119487 kernel: Key type fscrypt-provisioning registered
Apr 24 23:54:53.119500 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 23:54:53.119508 kernel: ima: Allocated hash algorithm: sha1
Apr 24 23:54:53.119519 kernel: ima: No architecture policies found
Apr 24 23:54:53.119529 kernel: clk: Disabling unused clocks
Apr 24 23:54:53.119537 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 24 23:54:53.119549 kernel: Write protecting the kernel read-only data: 36864k
Apr 24 23:54:53.119558 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 24 23:54:53.119568 kernel: Run /init as init process
Apr 24 23:54:53.119580 kernel: with arguments:
Apr 24 23:54:53.119588 kernel: /init
Apr 24 23:54:53.119601 kernel: with environment:
Apr 24 23:54:53.119608 kernel: HOME=/
Apr 24 23:54:53.119621 kernel: TERM=linux
Apr 24 23:54:53.119631 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:54:53.119646 systemd[1]: Detected virtualization microsoft.
Apr 24 23:54:53.119655 systemd[1]: Detected architecture x86-64.
Apr 24 23:54:53.119671 systemd[1]: Running in initrd.
Apr 24 23:54:53.119679 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:54:53.119688 systemd[1]: Hostname set to .
Apr 24 23:54:53.119701 systemd[1]: Initializing machine ID from random generator.
Apr 24 23:54:53.119709 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:54:53.119722 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:54:53.119731 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:54:53.119744 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:54:53.119756 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:54:53.119769 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:54:53.119781 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:54:53.119792 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:54:53.119805 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:54:53.119814 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:54:53.119822 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:54:53.119833 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:54:53.119847 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:54:53.119855 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:54:53.119868 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:54:53.119877 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:54:53.119887 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:54:53.119898 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:54:53.119908 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:54:53.119920 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:54:53.119932 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:54:53.119944 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:54:53.119953 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:54:53.119966 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:54:53.119974 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:54:53.119986 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:54:53.119996 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:54:53.120004 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:54:53.120020 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:54:53.120048 systemd-journald[177]: Collecting audit messages is disabled.
Apr 24 23:54:53.120077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:54:53.120089 systemd-journald[177]: Journal started
Apr 24 23:54:53.120121 systemd-journald[177]: Runtime Journal (/run/log/journal/68a028dfafe74380a2757dbc0cd6d351) is 8.0M, max 158.7M, 150.7M free.
Apr 24 23:54:53.128458 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:54:53.130624 systemd-modules-load[178]: Inserted module 'overlay'
Apr 24 23:54:53.137867 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:54:53.144356 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:54:53.150445 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:54:53.180312 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:54:53.180361 kernel: Bridge firewalling registered
Apr 24 23:54:53.176578 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:54:53.182587 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:54:53.192811 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:54:53.200454 systemd-modules-load[178]: Inserted module 'br_netfilter'
Apr 24 23:54:53.205581 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:54:53.213761 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:54:53.216931 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:54:53.238527 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:54:53.248513 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:54:53.255429 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:54:53.262168 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:54:53.278683 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:54:53.285545 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:54:53.292557 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:54:53.300831 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:54:53.316914 dracut-cmdline[209]: dracut-dracut-053
Apr 24 23:54:53.319938 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:54:53.358857 systemd-resolved[211]: Positive Trust Anchors:
Apr 24 23:54:53.358873 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:54:53.358927 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:54:53.387544 systemd-resolved[211]: Defaulting to hostname 'linux'.
Apr 24 23:54:53.388830 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:54:53.392087 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:54:53.406386 kernel: SCSI subsystem initialized
Apr 24 23:54:53.416389 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:54:53.428400 kernel: iscsi: registered transport (tcp)
Apr 24 23:54:53.449522 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:54:53.449598 kernel: QLogic iSCSI HBA Driver
Apr 24 23:54:53.486186 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:54:53.498542 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:54:53.530941 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:54:53.531005 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:54:53.534344 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:54:53.574394 kernel: raid6: avx512x4 gen() 18618 MB/s
Apr 24 23:54:53.594389 kernel: raid6: avx512x2 gen() 18859 MB/s
Apr 24 23:54:53.613383 kernel: raid6: avx512x1 gen() 18719 MB/s
Apr 24 23:54:53.632382 kernel: raid6: avx2x4 gen() 18625 MB/s
Apr 24 23:54:53.651388 kernel: raid6: avx2x2 gen() 18763 MB/s
Apr 24 23:54:53.672433 kernel: raid6: avx2x1 gen() 14071 MB/s
Apr 24 23:54:53.672461 kernel: raid6: using algorithm avx512x2 gen() 18859 MB/s
Apr 24 23:54:53.693725 kernel: raid6: .... xor() 30469 MB/s, rmw enabled
Apr 24 23:54:53.693746 kernel: raid6: using avx512x2 recovery algorithm
Apr 24 23:54:53.716398 kernel: xor: automatically using best checksumming function avx
Apr 24 23:54:53.865397 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:54:53.875392 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:54:53.890598 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:54:53.909222 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Apr 24 23:54:53.913838 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:54:53.935549 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:54:53.953079 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Apr 24 23:54:53.983174 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:54:53.993618 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:54:54.035067 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:54:54.050659 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 23:54:54.075714 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:54:54.085049 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:54:54.088942 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:54:54.092914 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:54:54.112583 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:54:54.138197 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:54:54.143469 kernel: cryptd: max_cpu_qlen set to 1000
Apr 24 23:54:54.148888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:54:54.152098 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:54:54.166274 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:54:54.174897 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:54:54.175068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:54:54.178724 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:54:54.198313 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 24 23:54:54.198361 kernel: AES CTR mode by8 optimization enabled
Apr 24 23:54:54.202396 kernel: hv_vmbus: Vmbus version:5.2
Apr 24 23:54:54.202962 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:54:54.218137 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 24 23:54:54.218186 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 24 23:54:54.230637 kernel: PTP clock support registered
Apr 24 23:54:54.231832 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:54:54.234414 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:54:54.246941 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 24 23:54:54.246975 kernel: hv_utils: Registering HyperV Utility Driver
Apr 24 23:54:54.252395 kernel: hv_vmbus: registering driver hv_utils
Apr 24 23:54:54.257256 kernel: hv_utils: Shutdown IC version 3.2
Apr 24 23:54:54.257302 kernel: hv_utils: Heartbeat IC version 3.0
Apr 24 23:54:54.260182 kernel: hv_utils: TimeSync IC version 4.0
Apr 24 23:54:54.164340 systemd-resolved[211]: Clock change detected. Flushing caches.
Apr 24 23:54:54.176602 systemd-journald[177]: Time jumped backwards, rotating.
Apr 24 23:54:54.165159 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:54:54.199933 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 24 23:54:54.212134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:54:54.219112 kernel: hv_vmbus: registering driver hv_netvsc
Apr 24 23:54:54.219139 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 24 23:54:54.228951 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:54:54.250282 kernel: hv_vmbus: registering driver hv_storvsc
Apr 24 23:54:54.259091 kernel: scsi host0: storvsc_host_t
Apr 24 23:54:54.259307 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 24 23:54:54.259343 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Apr 24 23:54:54.259375 kernel: scsi host1: storvsc_host_t
Apr 24 23:54:54.259547 kernel: hv_vmbus: registering driver hid_hyperv
Apr 24 23:54:54.268635 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 24 23:54:54.268683 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 24 23:54:54.285787 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:54:54.302774 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 24 23:54:54.303118 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 24 23:54:54.304820 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 24 23:54:54.318008 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 24 23:54:54.322529 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 24 23:54:54.322698 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#258 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 24 23:54:54.322969 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 24 23:54:54.328216 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 24 23:54:54.328510 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 24 23:54:54.335236 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:54:54.341969 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 24 23:54:54.352758 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 24 23:54:54.418928 kernel: hv_netvsc 7ced8d4a-7996-7ced-8d4a-79967ced8d4a eth0: VF slot 1 added
Apr 24 23:54:54.430577 kernel: hv_vmbus: registering driver hv_pci
Apr 24 23:54:54.430639 kernel: hv_pci 0cd3d1a7-0ea4-44f2-b7ce-32d8c2884180: PCI VMBus probing: Using version 0x10004
Apr 24 23:54:54.438470 kernel: hv_pci 0cd3d1a7-0ea4-44f2-b7ce-32d8c2884180: PCI host bridge to bus 0ea4:00
Apr 24 23:54:54.438735 kernel: pci_bus 0ea4:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Apr 24 23:54:54.441912 kernel: pci_bus 0ea4:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 24 23:54:54.446780 kernel: pci 0ea4:00:02.0: [15b3:1016] type 00 class 0x020000
Apr 24 23:54:54.452767 kernel: pci 0ea4:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 24 23:54:54.456774 kernel: pci 0ea4:00:02.0: enabling Extended Tags
Apr 24 23:54:54.469884 kernel: pci 0ea4:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 0ea4:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Apr 24 23:54:54.476782 kernel: pci_bus 0ea4:00: busn_res: [bus 00-ff] end is updated to 00
Apr 24 23:54:54.477037 kernel: pci 0ea4:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 24 23:54:54.654112 kernel: mlx5_core 0ea4:00:02.0: enabling device (0000 -> 0002)
Apr 24 23:54:54.658760 kernel: mlx5_core 0ea4:00:02.0: firmware version: 14.30.5026
Apr 24 23:54:54.792481 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 24 23:54:54.815814 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (458)
Apr 24 23:54:54.827600 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (450)
Apr 24 23:54:54.840151 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 24 23:54:54.851709 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 24 23:54:54.860461 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 24 23:54:54.860592 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 24 23:54:54.871942 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:54:54.895760 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:54:54.903762 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:54:54.928120 kernel: hv_netvsc 7ced8d4a-7996-7ced-8d4a-79967ced8d4a eth0: VF registering: eth1
Apr 24 23:54:54.929097 kernel: mlx5_core 0ea4:00:02.0 eth1: joined to eth0
Apr 24 23:54:54.935833 kernel: mlx5_core 0ea4:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Apr 24 23:54:54.948764 kernel: mlx5_core 0ea4:00:02.0 enP3748s1: renamed from eth1
Apr 24 23:54:55.914808 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:54:55.916784 disk-uuid[613]: The operation has completed successfully.
Apr 24 23:54:56.007368 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 23:54:56.007498 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 23:54:56.028868 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 23:54:56.036547 sh[727]: Success
Apr 24 23:54:56.063823 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 24 23:54:56.307400 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 23:54:56.319871 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 23:54:56.325817 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 23:54:56.346756 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681
Apr 24 23:54:56.346795 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:54:56.352670 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 24 23:54:56.356927 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 24 23:54:56.359795 kernel: BTRFS info (device dm-0): using free space tree
Apr 24 23:54:56.584284 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 23:54:56.590059 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 23:54:56.601960 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 23:54:56.610880 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 23:54:56.641763 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:54:56.641810 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:54:56.641830 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:54:56.674762 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:54:56.685026 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 24 23:54:56.692366 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:54:56.696523 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 23:54:56.705922 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 23:54:56.713677 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:54:56.737910 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:54:56.762959 systemd-networkd[911]: lo: Link UP
Apr 24 23:54:56.762970 systemd-networkd[911]: lo: Gained carrier
Apr 24 23:54:56.765202 systemd-networkd[911]: Enumeration completed
Apr 24 23:54:56.765284 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:54:56.766173 systemd-networkd[911]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:54:56.766176 systemd-networkd[911]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:54:56.768608 systemd[1]: Reached target network.target - Network.
Apr 24 23:54:56.839764 kernel: mlx5_core 0ea4:00:02.0 enP3748s1: Link up
Apr 24 23:54:56.867769 kernel: hv_netvsc 7ced8d4a-7996-7ced-8d4a-79967ced8d4a eth0: Data path switched to VF: enP3748s1
Apr 24 23:54:56.868123 systemd-networkd[911]: enP3748s1: Link UP
Apr 24 23:54:56.868245 systemd-networkd[911]: eth0: Link UP
Apr 24 23:54:56.868556 systemd-networkd[911]: eth0: Gained carrier
Apr 24 23:54:56.868569 systemd-networkd[911]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:54:56.881371 systemd-networkd[911]: enP3748s1: Gained carrier
Apr 24 23:54:56.912794 systemd-networkd[911]: eth0: DHCPv4 address 10.0.0.31/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 24 23:54:57.518845 ignition[896]: Ignition 2.19.0
Apr 24 23:54:57.518859 ignition[896]: Stage: fetch-offline
Apr 24 23:54:57.518911 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:54:57.518923 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:54:57.519051 ignition[896]: parsed url from cmdline: ""
Apr 24 23:54:57.519056 ignition[896]: no config URL provided
Apr 24 23:54:57.519064 ignition[896]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:54:57.519075 ignition[896]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:54:57.519082 ignition[896]: failed to fetch config: resource requires networking
Apr 24 23:54:57.521139 ignition[896]: Ignition finished successfully
Apr 24 23:54:57.541710 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:54:57.552952 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 24 23:54:57.569531 ignition[919]: Ignition 2.19.0
Apr 24 23:54:57.569544 ignition[919]: Stage: fetch
Apr 24 23:54:57.569773 ignition[919]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:54:57.569788 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:54:57.569896 ignition[919]: parsed url from cmdline: ""
Apr 24 23:54:57.569899 ignition[919]: no config URL provided
Apr 24 23:54:57.569904 ignition[919]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:54:57.569915 ignition[919]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:54:57.569937 ignition[919]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 24 23:54:57.670320 ignition[919]: GET result: OK
Apr 24 23:54:57.670415 ignition[919]: config has been read from IMDS userdata
Apr 24 23:54:57.670448 ignition[919]: parsing config with SHA512: 04c441818855d3f6bebfe1503e74decbc3c19fb2ba3a40d829cd955f5147115c96da34c566e38242c1e59f37da3bc0483c533361620c2ab77295b52ba930b83c
Apr 24 23:54:57.678120 unknown[919]: fetched base config from "system"
Apr 24 23:54:57.678499 unknown[919]: fetched base config from "system"
Apr 24 23:54:57.679064 ignition[919]: fetch: fetch complete
Apr 24 23:54:57.678505 unknown[919]: fetched user config from "azure"
Apr 24 23:54:57.679074 ignition[919]: fetch: fetch passed
Apr 24 23:54:57.679130 ignition[919]: Ignition finished successfully
Apr 24 23:54:57.692180 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 24 23:54:57.702871 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:54:57.724285 ignition[925]: Ignition 2.19.0
Apr 24 23:54:57.724298 ignition[925]: Stage: kargs
Apr 24 23:54:57.727001 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:54:57.724516 ignition[925]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:54:57.724530 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:54:57.725863 ignition[925]: kargs: kargs passed
Apr 24 23:54:57.725913 ignition[925]: Ignition finished successfully
Apr 24 23:54:57.747963 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 23:54:57.764357 ignition[931]: Ignition 2.19.0
Apr 24 23:54:57.764369 ignition[931]: Stage: disks
Apr 24 23:54:57.766834 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:54:57.764592 ignition[931]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:54:57.770280 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:54:57.764605 ignition[931]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:54:57.774863 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:54:57.765474 ignition[931]: disks: disks passed
Apr 24 23:54:57.778486 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:54:57.765517 ignition[931]: Ignition finished successfully
Apr 24 23:54:57.784263 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:54:57.784400 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:54:57.800935 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:54:57.870096 systemd-fsck[939]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Apr 24 23:54:57.875668 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 23:54:57.892900 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 23:54:57.939816 systemd-networkd[911]: eth0: Gained IPv6LL
Apr 24 23:54:57.987759 kernel: EXT4-fs (sda9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none.
Apr 24 23:54:57.988671 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 24 23:54:57.991872 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 24 23:54:58.026881 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 24 23:54:58.041760 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (950) Apr 24 23:54:58.046758 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:54:58.058476 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:54:58.058525 kernel: BTRFS info (device sda6): using free space tree Apr 24 23:54:58.058856 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 24 23:54:58.066211 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 24 23:54:58.072293 kernel: BTRFS info (device sda6): auto enabling async discard Apr 24 23:54:58.071825 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 24 23:54:58.071857 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 24 23:54:58.089051 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 24 23:54:58.091912 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 24 23:54:58.105892 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 24 23:54:58.591246 coreos-metadata[965]: Apr 24 23:54:58.591 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 24 23:54:58.595832 coreos-metadata[965]: Apr 24 23:54:58.595 INFO Fetch successful Apr 24 23:54:58.598758 coreos-metadata[965]: Apr 24 23:54:58.595 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 24 23:54:58.606486 coreos-metadata[965]: Apr 24 23:54:58.606 INFO Fetch successful Apr 24 23:54:58.632814 coreos-metadata[965]: Apr 24 23:54:58.632 INFO wrote hostname ci-4081.3.6-n-bfbb2fd0ff to /sysroot/etc/hostname Apr 24 23:54:58.638226 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 24 23:54:58.693688 initrd-setup-root[980]: cut: /sysroot/etc/passwd: No such file or directory Apr 24 23:54:58.724500 initrd-setup-root[987]: cut: /sysroot/etc/group: No such file or directory Apr 24 23:54:58.744617 initrd-setup-root[994]: cut: /sysroot/etc/shadow: No such file or directory Apr 24 23:54:58.750651 initrd-setup-root[1001]: cut: /sysroot/etc/gshadow: No such file or directory Apr 24 23:54:59.501763 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 24 23:54:59.512897 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 24 23:54:59.518643 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 24 23:54:59.531299 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 24 23:54:59.538218 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:54:59.558948 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 24 23:54:59.572168 ignition[1074]: INFO : Ignition 2.19.0 Apr 24 23:54:59.572168 ignition[1074]: INFO : Stage: mount Apr 24 23:54:59.581661 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:54:59.581661 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:54:59.581661 ignition[1074]: INFO : mount: mount passed Apr 24 23:54:59.581661 ignition[1074]: INFO : Ignition finished successfully Apr 24 23:54:59.576589 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 24 23:54:59.596921 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 24 23:54:59.605844 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 24 23:54:59.628756 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1084) Apr 24 23:54:59.636028 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:54:59.636074 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:54:59.639237 kernel: BTRFS info (device sda6): using free space tree Apr 24 23:54:59.646764 kernel: BTRFS info (device sda6): auto enabling async discard Apr 24 23:54:59.648110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 24 23:54:59.678728 ignition[1101]: INFO : Ignition 2.19.0 Apr 24 23:54:59.678728 ignition[1101]: INFO : Stage: files Apr 24 23:54:59.685148 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:54:59.685148 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:54:59.693860 ignition[1101]: DEBUG : files: compiled without relabeling support, skipping Apr 24 23:54:59.701035 ignition[1101]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 24 23:54:59.701035 ignition[1101]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 24 23:54:59.788458 ignition[1101]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 24 23:54:59.793302 ignition[1101]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 24 23:54:59.793302 ignition[1101]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 24 23:54:59.789487 unknown[1101]: wrote ssh authorized keys file for user: core Apr 24 23:54:59.846913 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 24 23:54:59.852268 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 24 23:54:59.852268 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 24 23:54:59.852268 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 24 23:54:59.912850 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 24 23:54:59.987890 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" 
Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 24 
23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 24 23:54:59.993958 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 24 23:55:00.285235 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 24 23:55:00.661753 ignition[1101]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 24 23:55:00.661753 ignition[1101]: INFO : files: op(c): [started] processing unit "containerd.service" Apr 24 23:55:00.690176 ignition[1101]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 24 23:55:00.697219 ignition[1101]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 24 23:55:00.697219 ignition[1101]: INFO : files: op(c): [finished] processing unit "containerd.service" Apr 24 23:55:00.697219 ignition[1101]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Apr 24 23:55:00.713619 ignition[1101]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 24 23:55:00.713619 ignition[1101]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 24 23:55:00.713619 ignition[1101]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Apr 24 23:55:00.713619 ignition[1101]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Apr 24 23:55:00.733362 ignition[1101]: 
INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Apr 24 23:55:00.737382 ignition[1101]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 24 23:55:00.737382 ignition[1101]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 24 23:55:00.737382 ignition[1101]: INFO : files: files passed Apr 24 23:55:00.737382 ignition[1101]: INFO : Ignition finished successfully Apr 24 23:55:00.749418 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 24 23:55:00.764933 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 24 23:55:00.772464 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 24 23:55:00.780721 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 24 23:55:00.780886 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 24 23:55:00.799357 initrd-setup-root-after-ignition[1130]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 24 23:55:00.799357 initrd-setup-root-after-ignition[1130]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 24 23:55:00.809933 initrd-setup-root-after-ignition[1134]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 24 23:55:00.815977 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 24 23:55:00.821769 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 24 23:55:00.837956 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 24 23:55:00.868153 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 24 23:55:00.868282 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Apr 24 23:55:00.875002 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 24 23:55:00.885181 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 24 23:55:00.888132 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 24 23:55:00.902879 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 24 23:55:00.916083 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 24 23:55:00.929882 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 24 23:55:00.943275 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:55:00.943554 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 24 23:55:00.944081 systemd[1]: Stopped target timers.target - Timer Units. Apr 24 23:55:00.944672 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 24 23:55:00.944788 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 24 23:55:00.945646 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 24 23:55:00.946661 systemd[1]: Stopped target basic.target - Basic System. Apr 24 23:55:00.947130 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 24 23:55:00.947753 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 24 23:55:00.948223 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 24 23:55:00.948707 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 24 23:55:00.949192 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 24 23:55:00.950157 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 24 23:55:00.950638 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Apr 24 23:55:00.951117 systemd[1]: Stopped target swap.target - Swaps. Apr 24 23:55:00.951542 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 24 23:55:00.951679 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 24 23:55:00.952548 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:55:00.953044 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 23:55:00.953463 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 24 23:55:00.997752 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 24 23:55:01.001561 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 24 23:55:01.001729 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 24 23:55:01.066332 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 24 23:55:01.069785 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 24 23:55:01.077348 systemd[1]: ignition-files.service: Deactivated successfully. Apr 24 23:55:01.077492 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 24 23:55:01.085619 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 24 23:55:01.088841 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 24 23:55:01.100953 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 24 23:55:01.105029 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 24 23:55:01.111894 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 24 23:55:01.114460 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 23:55:01.118717 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 24 23:55:01.118893 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 24 23:55:01.136725 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 24 23:55:01.139862 ignition[1154]: INFO : Ignition 2.19.0 Apr 24 23:55:01.139862 ignition[1154]: INFO : Stage: umount Apr 24 23:55:01.156426 ignition[1154]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:55:01.156426 ignition[1154]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:55:01.156426 ignition[1154]: INFO : umount: umount passed Apr 24 23:55:01.156426 ignition[1154]: INFO : Ignition finished successfully Apr 24 23:55:01.139869 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 24 23:55:01.146364 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 24 23:55:01.146447 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 24 23:55:01.150126 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 24 23:55:01.150235 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 24 23:55:01.156458 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 24 23:55:01.159423 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 24 23:55:01.186666 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 24 23:55:01.186762 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 24 23:55:01.190001 systemd[1]: Stopped target network.target - Network. Apr 24 23:55:01.190090 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 24 23:55:01.190147 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 24 23:55:01.190595 systemd[1]: Stopped target paths.target - Path Units. Apr 24 23:55:01.191056 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 24 23:55:01.200145 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 24 23:55:01.206319 systemd[1]: Stopped target slices.target - Slice Units. 
Apr 24 23:55:01.210878 systemd[1]: Stopped target sockets.target - Socket Units. Apr 24 23:55:01.213417 systemd[1]: iscsid.socket: Deactivated successfully. Apr 24 23:55:01.213461 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 24 23:55:01.213563 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 24 23:55:01.213608 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 24 23:55:01.214167 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 24 23:55:01.214212 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 24 23:55:01.214613 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 24 23:55:01.214648 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 24 23:55:01.217450 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 24 23:55:01.217872 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 24 23:55:01.219950 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 24 23:55:01.220516 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 24 23:55:01.220620 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 24 23:55:01.248169 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 24 23:55:01.248282 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 24 23:55:01.250852 systemd-networkd[911]: eth0: DHCPv6 lease lost Apr 24 23:55:01.256145 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 24 23:55:01.256316 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 24 23:55:01.261674 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 24 23:55:01.265767 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 24 23:55:01.282933 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 24 23:55:01.291213 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Apr 24 23:55:01.291313 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 24 23:55:01.297607 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:55:01.300929 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 24 23:55:01.301030 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 24 23:55:01.319087 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 24 23:55:01.319165 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:55:01.324201 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 24 23:55:01.324258 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 24 23:55:01.364237 kernel: hv_netvsc 7ced8d4a-7996-7ced-8d4a-79967ced8d4a eth0: Data path switched from VF: enP3748s1 Apr 24 23:55:01.364271 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 24 23:55:01.364346 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 23:55:01.376357 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 24 23:55:01.378914 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:55:01.386673 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 24 23:55:01.386768 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 24 23:55:01.393684 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 24 23:55:01.393736 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 24 23:55:01.399826 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 24 23:55:01.399893 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 24 23:55:01.413904 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Apr 24 23:55:01.413974 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 24 23:55:01.422121 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 24 23:55:01.422189 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 23:55:01.437874 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 24 23:55:01.441812 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 24 23:55:01.441869 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:55:01.450264 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 24 23:55:01.450306 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 23:55:01.453820 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 24 23:55:01.453870 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 23:55:01.460233 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:55:01.460301 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:55:01.463592 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 24 23:55:01.463699 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 24 23:55:01.469093 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 24 23:55:01.469182 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 24 23:55:01.476561 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 24 23:55:01.494511 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 24 23:55:01.616815 systemd[1]: Switching root. Apr 24 23:55:01.650348 systemd-journald[177]: Journal stopped Apr 24 23:55:07.726502 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). 
Apr 24 23:55:07.726537 kernel: SELinux: policy capability network_peer_controls=1 Apr 24 23:55:07.726558 kernel: SELinux: policy capability open_perms=1 Apr 24 23:55:07.726572 kernel: SELinux: policy capability extended_socket_class=1 Apr 24 23:55:07.726581 kernel: SELinux: policy capability always_check_network=0 Apr 24 23:55:07.726589 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 24 23:55:07.729520 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 24 23:55:07.729534 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 24 23:55:07.729548 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 24 23:55:07.729556 kernel: audit: type=1403 audit(1777074904.181:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 24 23:55:07.729567 systemd[1]: Successfully loaded SELinux policy in 129.446ms. Apr 24 23:55:07.729577 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.897ms. Apr 24 23:55:07.729588 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 24 23:55:07.729597 systemd[1]: Detected virtualization microsoft. Apr 24 23:55:07.729610 systemd[1]: Detected architecture x86-64. Apr 24 23:55:07.729619 systemd[1]: Detected first boot. Apr 24 23:55:07.729629 systemd[1]: Hostname set to . Apr 24 23:55:07.729639 systemd[1]: Initializing machine ID from random generator. Apr 24 23:55:07.729648 zram_generator::config[1213]: No configuration found. Apr 24 23:55:07.729660 systemd[1]: Populated /etc with preset unit settings. Apr 24 23:55:07.729670 systemd[1]: Queued start job for default target multi-user.target. Apr 24 23:55:07.729680 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
Apr 24 23:55:07.729690 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 24 23:55:07.729700 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 24 23:55:07.729709 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 24 23:55:07.729719 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 24 23:55:07.729731 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 24 23:55:07.729831 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 24 23:55:07.729851 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 24 23:55:07.729869 systemd[1]: Created slice user.slice - User and Session Slice. Apr 24 23:55:07.729889 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 24 23:55:07.729903 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 24 23:55:07.729919 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 24 23:55:07.729935 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 24 23:55:07.729957 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 24 23:55:07.729976 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 24 23:55:07.729992 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 24 23:55:07.730008 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 23:55:07.730025 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 24 23:55:07.730042 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 24 23:55:07.730064 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 24 23:55:07.730082 systemd[1]: Reached target slices.target - Slice Units. Apr 24 23:55:07.730099 systemd[1]: Reached target swap.target - Swaps. Apr 24 23:55:07.730120 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 24 23:55:07.730137 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 24 23:55:07.730155 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 24 23:55:07.730173 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 24 23:55:07.730189 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 24 23:55:07.730202 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 24 23:55:07.730218 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 24 23:55:07.730235 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 24 23:55:07.730251 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 24 23:55:07.730261 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 24 23:55:07.730275 systemd[1]: Mounting media.mount - External Media Directory... Apr 24 23:55:07.730287 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:55:07.730304 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 24 23:55:07.730315 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 24 23:55:07.730329 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 24 23:55:07.730340 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 24 23:55:07.730355 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Apr 24 23:55:07.730365 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 24 23:55:07.730378 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 24 23:55:07.730390 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:55:07.730407 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 23:55:07.730418 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:55:07.730429 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 24 23:55:07.730441 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:55:07.730454 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 24 23:55:07.730470 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 24 23:55:07.730482 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 24 23:55:07.730495 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 24 23:55:07.730509 kernel: fuse: init (API version 7.39) Apr 24 23:55:07.730522 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 24 23:55:07.730533 kernel: loop: module loaded Apr 24 23:55:07.730545 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 24 23:55:07.730557 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 24 23:55:07.730569 kernel: ACPI: bus type drm_connector registered Apr 24 23:55:07.730581 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Apr 24 23:55:07.730592 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:55:07.730609 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 24 23:55:07.730622 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 24 23:55:07.730634 systemd[1]: Mounted media.mount - External Media Directory. Apr 24 23:55:07.730644 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 24 23:55:07.730658 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 24 23:55:07.730693 systemd-journald[1335]: Collecting audit messages is disabled. Apr 24 23:55:07.730723 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 24 23:55:07.730734 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 24 23:55:07.730905 systemd-journald[1335]: Journal started Apr 24 23:55:07.730935 systemd-journald[1335]: Runtime Journal (/run/log/journal/3e500b2cc705497b9c14bf18fa09c98b) is 8.0M, max 158.7M, 150.7M free. Apr 24 23:55:07.744139 systemd[1]: Started systemd-journald.service - Journal Service. Apr 24 23:55:07.745308 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 23:55:07.749186 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 24 23:55:07.749325 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 24 23:55:07.753072 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:55:07.753273 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:55:07.756823 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 23:55:07.757050 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 23:55:07.762200 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Apr 24 23:55:07.762431 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:55:07.766594 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 24 23:55:07.766830 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 24 23:55:07.773927 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:55:07.774261 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:55:07.779634 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 24 23:55:07.785935 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 24 23:55:07.790837 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 24 23:55:07.813187 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 24 23:55:07.822846 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 24 23:55:07.832818 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 24 23:55:07.836566 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 24 23:55:07.854951 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 24 23:55:07.859492 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 24 23:55:07.863337 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 23:55:07.865893 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 24 23:55:07.869316 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Apr 24 23:55:07.871136 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 23:55:07.876893 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 24 23:55:07.895604 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 23:55:07.899322 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 24 23:55:07.902979 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 24 23:55:07.907558 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 24 23:55:07.916647 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 24 23:55:07.923330 systemd-journald[1335]: Time spent on flushing to /var/log/journal/3e500b2cc705497b9c14bf18fa09c98b is 72.026ms for 948 entries. Apr 24 23:55:07.923330 systemd-journald[1335]: System Journal (/var/log/journal/3e500b2cc705497b9c14bf18fa09c98b) is 11.8M, max 2.6G, 2.6G free. Apr 24 23:55:08.044403 systemd-journald[1335]: Received client request to flush runtime journal. Apr 24 23:55:08.044472 systemd-journald[1335]: /var/log/journal/3e500b2cc705497b9c14bf18fa09c98b/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Apr 24 23:55:08.044515 systemd-journald[1335]: Rotating system journal. Apr 24 23:55:07.930968 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 24 23:55:07.953566 udevadm[1381]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 24 23:55:08.016335 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:55:08.039201 systemd-tmpfiles[1373]: ACLs are not supported, ignoring. Apr 24 23:55:08.039223 systemd-tmpfiles[1373]: ACLs are not supported, ignoring. 
Apr 24 23:55:08.048607 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 24 23:55:08.057604 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 23:55:08.075912 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 24 23:55:08.174789 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 24 23:55:08.184127 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 24 23:55:08.199362 systemd-tmpfiles[1396]: ACLs are not supported, ignoring. Apr 24 23:55:08.199388 systemd-tmpfiles[1396]: ACLs are not supported, ignoring. Apr 24 23:55:08.205126 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:55:08.733554 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 24 23:55:08.741969 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:55:08.766416 systemd-udevd[1402]: Using default interface naming scheme 'v255'. Apr 24 23:55:09.015050 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:55:09.042164 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 23:55:09.099371 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 24 23:55:09.142936 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 24 23:55:09.186840 kernel: mousedev: PS/2 mouse device common for all mice Apr 24 23:55:09.217964 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#22 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 24 23:55:09.231477 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Apr 24 23:55:09.256868 kernel: hv_vmbus: registering driver hv_balloon Apr 24 23:55:09.271919 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Apr 24 23:55:09.296601 kernel: hv_vmbus: registering driver hyperv_fb Apr 24 23:55:09.305535 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Apr 24 23:55:09.305608 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Apr 24 23:55:09.309184 kernel: Console: switching to colour dummy device 80x25 Apr 24 23:55:09.316192 kernel: Console: switching to colour frame buffer device 128x48 Apr 24 23:55:09.348116 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:55:09.372792 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1423) Apr 24 23:55:09.404124 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:55:09.408201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:55:09.446996 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:55:09.487006 systemd-networkd[1416]: lo: Link UP Apr 24 23:55:09.487014 systemd-networkd[1416]: lo: Gained carrier Apr 24 23:55:09.495905 systemd-networkd[1416]: Enumeration completed Apr 24 23:55:09.496734 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:55:09.496892 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 24 23:55:09.503775 systemd-networkd[1416]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 23:55:09.519302 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Apr 24 23:55:09.569809 kernel: mlx5_core 0ea4:00:02.0 enP3748s1: Link up Apr 24 23:55:09.599078 kernel: hv_netvsc 7ced8d4a-7996-7ced-8d4a-79967ced8d4a eth0: Data path switched to VF: enP3748s1 Apr 24 23:55:09.593588 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 24 23:55:09.606214 systemd-networkd[1416]: enP3748s1: Link UP Apr 24 23:55:09.610955 systemd-networkd[1416]: eth0: Link UP Apr 24 23:55:09.610965 systemd-networkd[1416]: eth0: Gained carrier Apr 24 23:55:09.610989 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:55:09.637135 systemd-networkd[1416]: enP3748s1: Gained carrier Apr 24 23:55:09.668176 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:55:09.668475 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:55:09.672934 systemd-networkd[1416]: eth0: DHCPv4 address 10.0.0.31/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 24 23:55:09.678890 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:55:09.726814 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Apr 24 23:55:09.879118 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:55:09.970467 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 24 23:55:09.981002 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 24 23:55:10.028835 lvm[1501]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:55:10.061927 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 24 23:55:10.066244 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:55:10.075936 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Apr 24 23:55:10.080917 lvm[1504]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:55:10.106091 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 24 23:55:10.110087 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 24 23:55:10.114319 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 24 23:55:10.114355 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 24 23:55:10.117537 systemd[1]: Reached target machines.target - Containers. Apr 24 23:55:10.121287 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 24 23:55:10.140134 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 24 23:55:10.144865 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 24 23:55:10.147975 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:55:10.149902 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 24 23:55:10.157939 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 24 23:55:10.166357 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 24 23:55:10.170931 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 24 23:55:10.201495 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 24 23:55:10.251767 kernel: loop0: detected capacity change from 0 to 140768 Apr 24 23:55:10.252085 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Apr 24 23:55:10.254930 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 24 23:55:10.652760 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 24 23:55:10.673838 systemd-networkd[1416]: eth0: Gained IPv6LL Apr 24 23:55:10.680013 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 24 23:55:10.695760 kernel: loop1: detected capacity change from 0 to 31056 Apr 24 23:55:11.046766 kernel: loop2: detected capacity change from 0 to 142488 Apr 24 23:55:11.426767 kernel: loop3: detected capacity change from 0 to 228704 Apr 24 23:55:11.462766 kernel: loop4: detected capacity change from 0 to 140768 Apr 24 23:55:11.480808 kernel: loop5: detected capacity change from 0 to 31056 Apr 24 23:55:11.491780 kernel: loop6: detected capacity change from 0 to 142488 Apr 24 23:55:11.508761 kernel: loop7: detected capacity change from 0 to 228704 Apr 24 23:55:11.522142 (sd-merge)[1527]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Apr 24 23:55:11.522726 (sd-merge)[1527]: Merged extensions into '/usr'. Apr 24 23:55:11.526421 systemd[1]: Reloading requested from client PID 1512 ('systemd-sysext') (unit systemd-sysext.service)... Apr 24 23:55:11.526439 systemd[1]: Reloading... Apr 24 23:55:11.582774 zram_generator::config[1553]: No configuration found. Apr 24 23:55:11.742627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:55:11.830588 systemd[1]: Reloading finished in 303 ms. Apr 24 23:55:11.844557 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 24 23:55:11.856911 systemd[1]: Starting ensure-sysext.service... Apr 24 23:55:11.863932 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Apr 24 23:55:11.872033 systemd[1]: Reloading requested from client PID 1618 ('systemctl') (unit ensure-sysext.service)... Apr 24 23:55:11.872184 systemd[1]: Reloading... Apr 24 23:55:11.888269 systemd-tmpfiles[1619]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 24 23:55:11.888797 systemd-tmpfiles[1619]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 24 23:55:11.890104 systemd-tmpfiles[1619]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 24 23:55:11.890551 systemd-tmpfiles[1619]: ACLs are not supported, ignoring. Apr 24 23:55:11.890642 systemd-tmpfiles[1619]: ACLs are not supported, ignoring. Apr 24 23:55:11.909797 systemd-tmpfiles[1619]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 23:55:11.909809 systemd-tmpfiles[1619]: Skipping /boot Apr 24 23:55:11.929687 systemd-tmpfiles[1619]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 23:55:11.929700 systemd-tmpfiles[1619]: Skipping /boot Apr 24 23:55:11.993768 zram_generator::config[1653]: No configuration found. Apr 24 23:55:12.147582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:55:12.228913 systemd[1]: Reloading finished in 356 ms. Apr 24 23:55:12.245239 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 23:55:12.269991 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:55:12.291003 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 24 23:55:12.305014 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Apr 24 23:55:12.310674 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 24 23:55:12.316639 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 24 23:55:12.328316 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:55:12.328633 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:55:12.332058 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:55:12.339022 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:55:12.366041 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:55:12.374756 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:55:12.374945 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:55:12.376753 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:55:12.377841 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:55:12.383034 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:55:12.383242 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:55:12.392189 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:55:12.395946 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:55:12.420802 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:55:12.421217 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Apr 24 23:55:12.427998 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:55:12.434682 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 23:55:12.447998 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:55:12.465114 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:55:12.473768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:55:12.474042 systemd[1]: Reached target time-set.target - System Time Set. Apr 24 23:55:12.477818 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:55:12.484499 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 24 23:55:12.489549 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 24 23:55:12.494534 systemd-resolved[1729]: Positive Trust Anchors: Apr 24 23:55:12.494548 systemd-resolved[1729]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 23:55:12.494600 systemd-resolved[1729]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 23:55:12.496094 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Apr 24 23:55:12.496277 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:55:12.498974 augenrules[1759]: No rules Apr 24 23:55:12.500430 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:55:12.504539 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 23:55:12.505053 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 23:55:12.509055 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:55:12.509258 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:55:12.514123 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:55:12.514302 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:55:12.521254 systemd[1]: Finished ensure-sysext.service. Apr 24 23:55:12.530555 systemd-resolved[1729]: Using system hostname 'ci-4081.3.6-n-bfbb2fd0ff'. Apr 24 23:55:12.537358 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 23:55:12.543223 systemd[1]: Reached target network.target - Network. Apr 24 23:55:12.545825 systemd[1]: Reached target network-online.target - Network is Online. Apr 24 23:55:12.549219 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:55:12.552725 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 23:55:12.552808 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 23:55:12.902775 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Apr 24 23:55:12.907002 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 24 23:55:15.166818 ldconfig[1508]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 24 23:55:15.179389 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 24 23:55:15.186145 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 24 23:55:15.211961 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 24 23:55:15.215574 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 23:55:15.218943 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 24 23:55:15.224662 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 24 23:55:15.228799 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 24 23:55:15.231971 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 24 23:55:15.235630 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 24 23:55:15.239471 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 24 23:55:15.239529 systemd[1]: Reached target paths.target - Path Units. Apr 24 23:55:15.242259 systemd[1]: Reached target timers.target - Timer Units. Apr 24 23:55:15.246212 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 24 23:55:15.251034 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 24 23:55:15.255361 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Apr 24 23:55:15.261473 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 24 23:55:15.265191 systemd[1]: Reached target sockets.target - Socket Units. Apr 24 23:55:15.268247 systemd[1]: Reached target basic.target - Basic System. Apr 24 23:55:15.271377 systemd[1]: System is tainted: cgroupsv1 Apr 24 23:55:15.271435 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:55:15.271469 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:55:15.274509 systemd[1]: Starting chronyd.service - NTP client/server... Apr 24 23:55:15.280849 systemd[1]: Starting containerd.service - containerd container runtime... Apr 24 23:55:15.288904 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 24 23:55:15.296361 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 24 23:55:15.301082 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 24 23:55:15.314895 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 24 23:55:15.318394 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 24 23:55:15.318568 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Apr 24 23:55:15.323896 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Apr 24 23:55:15.327919 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Apr 24 23:55:15.340762 jq[1790]: false Apr 24 23:55:15.338883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 24 23:55:15.342993 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 24 23:55:15.359903 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 24 23:55:15.373717 KVP[1794]: KVP starting; pid is:1794 Apr 24 23:55:15.373873 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 24 23:55:15.391273 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 24 23:55:15.399927 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 24 23:55:15.400213 (chronyd)[1785]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Apr 24 23:55:15.407213 extend-filesystems[1793]: Found loop4 Apr 24 23:55:15.414779 extend-filesystems[1793]: Found loop5 Apr 24 23:55:15.414779 extend-filesystems[1793]: Found loop6 Apr 24 23:55:15.414779 extend-filesystems[1793]: Found loop7 Apr 24 23:55:15.414779 extend-filesystems[1793]: Found sda Apr 24 23:55:15.414779 extend-filesystems[1793]: Found sda1 Apr 24 23:55:15.414779 extend-filesystems[1793]: Found sda2 Apr 24 23:55:15.414779 extend-filesystems[1793]: Found sda3 Apr 24 23:55:15.414779 extend-filesystems[1793]: Found usr Apr 24 23:55:15.414779 extend-filesystems[1793]: Found sda4 Apr 24 23:55:15.414779 extend-filesystems[1793]: Found sda6 Apr 24 23:55:15.414779 extend-filesystems[1793]: Found sda7 Apr 24 23:55:15.414779 extend-filesystems[1793]: Found sda9 Apr 24 23:55:15.414779 extend-filesystems[1793]: Checking size of /dev/sda9 Apr 24 23:55:15.497155 kernel: hv_utils: KVP IC version 4.0 Apr 24 23:55:15.420998 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 24 23:55:15.459467 KVP[1794]: KVP LIC Version: 3.1 Apr 24 23:55:15.435153 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Apr 24 23:55:15.463595 chronyd[1818]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Apr 24 23:55:15.459599 systemd[1]: Starting update-engine.service - Update Engine... Apr 24 23:55:15.484071 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 24 23:55:15.506227 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 24 23:55:15.506542 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 24 23:55:15.518008 chronyd[1818]: Timezone right/UTC failed leap second check, ignoring Apr 24 23:55:15.518499 systemd[1]: motdgen.service: Deactivated successfully. Apr 24 23:55:15.527375 jq[1825]: true Apr 24 23:55:15.518278 chronyd[1818]: Loaded seccomp filter (level 2) Apr 24 23:55:15.522022 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 24 23:55:15.539216 extend-filesystems[1793]: Old size kept for /dev/sda9 Apr 24 23:55:15.539216 extend-filesystems[1793]: Found sr0 Apr 24 23:55:15.529635 dbus-daemon[1789]: [system] SELinux support is enabled Apr 24 23:55:15.538928 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 24 23:55:15.556516 update_engine[1819]: I20260424 23:55:15.549698 1819 main.cc:92] Flatcar Update Engine starting Apr 24 23:55:15.556516 update_engine[1819]: I20260424 23:55:15.554831 1819 update_check_scheduler.cc:74] Next update check in 6m25s Apr 24 23:55:15.556933 systemd[1]: Started chronyd.service - NTP client/server. Apr 24 23:55:15.565598 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 24 23:55:15.566510 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 24 23:55:15.573946 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 24 23:55:15.580244 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Apr 24 23:55:15.580613 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 24 23:55:15.630370 (ntainerd)[1846]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 24 23:55:15.639299 systemd-logind[1810]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 24 23:55:15.640811 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 24 23:55:15.640841 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 24 23:55:15.648766 systemd-logind[1810]: New seat seat0. Apr 24 23:55:15.656943 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 24 23:55:15.656973 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 24 23:55:15.664162 systemd[1]: Started systemd-logind.service - User Login Management. Apr 24 23:55:15.673486 systemd[1]: Started update-engine.service - Update Engine. Apr 24 23:55:15.678441 jq[1845]: true Apr 24 23:55:15.701218 tar[1842]: linux-amd64/LICENSE Apr 24 23:55:15.701218 tar[1842]: linux-amd64/helm Apr 24 23:55:15.705126 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 24 23:55:15.707859 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 24 23:55:15.750018 coreos-metadata[1787]: Apr 24 23:55:15.749 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 24 23:55:15.753213 coreos-metadata[1787]: Apr 24 23:55:15.752 INFO Fetch successful Apr 24 23:55:15.753213 coreos-metadata[1787]: Apr 24 23:55:15.752 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Apr 24 23:55:15.762762 coreos-metadata[1787]: Apr 24 23:55:15.759 INFO Fetch successful Apr 24 23:55:15.762762 coreos-metadata[1787]: Apr 24 23:55:15.759 INFO Fetching http://168.63.129.16/machine/49789a93-a75a-46a6-a468-78910fd285d8/4aec5b2b%2D6547%2D46cc%2Db81e%2Dc0f03dca53bd.%5Fci%2D4081.3.6%2Dn%2Dbfbb2fd0ff?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Apr 24 23:55:15.762762 coreos-metadata[1787]: Apr 24 23:55:15.761 INFO Fetch successful Apr 24 23:55:15.762762 coreos-metadata[1787]: Apr 24 23:55:15.761 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Apr 24 23:55:15.778658 coreos-metadata[1787]: Apr 24 23:55:15.773 INFO Fetch successful Apr 24 23:55:15.778782 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1850) Apr 24 23:55:15.834708 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 24 23:55:15.845722 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 24 23:55:15.920420 bash[1903]: Updated "/home/core/.ssh/authorized_keys" Apr 24 23:55:15.933659 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 24 23:55:15.948701 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Apr 24 23:55:16.038851 locksmithd[1877]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 24 23:55:16.647843 tar[1842]: linux-amd64/README.md
Apr 24 23:55:16.670970 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 24 23:55:16.719699 containerd[1846]: time="2026-04-24T23:55:16.717959700Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 24 23:55:16.795573 containerd[1846]: time="2026-04-24T23:55:16.795502000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:55:16.798538 containerd[1846]: time="2026-04-24T23:55:16.798499100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:55:16.798637 containerd[1846]: time="2026-04-24T23:55:16.798623700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 24 23:55:16.798711 containerd[1846]: time="2026-04-24T23:55:16.798699300Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 24 23:55:16.798967 containerd[1846]: time="2026-04-24T23:55:16.798949000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 24 23:55:16.799039 containerd[1846]: time="2026-04-24T23:55:16.799027100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 24 23:55:16.799158 containerd[1846]: time="2026-04-24T23:55:16.799141900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:55:16.799216 containerd[1846]: time="2026-04-24T23:55:16.799204600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:55:16.800364 containerd[1846]: time="2026-04-24T23:55:16.799535000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:55:16.800364 containerd[1846]: time="2026-04-24T23:55:16.799558600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 24 23:55:16.800364 containerd[1846]: time="2026-04-24T23:55:16.799577100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:55:16.800364 containerd[1846]: time="2026-04-24T23:55:16.799591000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 24 23:55:16.800364 containerd[1846]: time="2026-04-24T23:55:16.799685600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:55:16.800364 containerd[1846]: time="2026-04-24T23:55:16.799963600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:55:16.800364 containerd[1846]: time="2026-04-24T23:55:16.800166100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:55:16.800364 containerd[1846]: time="2026-04-24T23:55:16.800186200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 24 23:55:16.800364 containerd[1846]: time="2026-04-24T23:55:16.800278500Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 24 23:55:16.800364 containerd[1846]: time="2026-04-24T23:55:16.800332500Z" level=info msg="metadata content store policy set" policy=shared
Apr 24 23:55:16.833422 containerd[1846]: time="2026-04-24T23:55:16.833361600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 24 23:55:16.833543 containerd[1846]: time="2026-04-24T23:55:16.833438900Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 24 23:55:16.833543 containerd[1846]: time="2026-04-24T23:55:16.833460100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 24 23:55:16.833543 containerd[1846]: time="2026-04-24T23:55:16.833480300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 24 23:55:16.833543 containerd[1846]: time="2026-04-24T23:55:16.833496900Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 24 23:55:16.833713 containerd[1846]: time="2026-04-24T23:55:16.833685600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 24 23:55:16.835351 containerd[1846]: time="2026-04-24T23:55:16.835321000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 24 23:55:16.836761 containerd[1846]: time="2026-04-24T23:55:16.835575200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 24 23:55:16.836761 containerd[1846]: time="2026-04-24T23:55:16.835601000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 24 23:55:16.836761 containerd[1846]: time="2026-04-24T23:55:16.835619700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 24 23:55:16.836761 containerd[1846]: time="2026-04-24T23:55:16.835641200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 24 23:55:16.836761 containerd[1846]: time="2026-04-24T23:55:16.835659700Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 24 23:55:16.836761 containerd[1846]: time="2026-04-24T23:55:16.835677200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 24 23:55:16.836761 containerd[1846]: time="2026-04-24T23:55:16.835695900Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 24 23:55:16.836761 containerd[1846]: time="2026-04-24T23:55:16.835726400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 24 23:55:16.836761 containerd[1846]: time="2026-04-24T23:55:16.835757300Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 24 23:55:16.836761 containerd[1846]: time="2026-04-24T23:55:16.835775300Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 24 23:55:16.836761 containerd[1846]: time="2026-04-24T23:55:16.835792200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 24 23:55:16.836761 containerd[1846]: time="2026-04-24T23:55:16.835820800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 24 23:55:16.836761 containerd[1846]: time="2026-04-24T23:55:16.835838000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 24 23:55:16.836761 containerd[1846]: time="2026-04-24T23:55:16.835855300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 24 23:55:16.837260 containerd[1846]: time="2026-04-24T23:55:16.835875200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 24 23:55:16.837260 containerd[1846]: time="2026-04-24T23:55:16.835891900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 24 23:55:16.837260 containerd[1846]: time="2026-04-24T23:55:16.835913100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 24 23:55:16.837260 containerd[1846]: time="2026-04-24T23:55:16.835930200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 24 23:55:16.837260 containerd[1846]: time="2026-04-24T23:55:16.835947900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 24 23:55:16.837260 containerd[1846]: time="2026-04-24T23:55:16.835965600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 24 23:55:16.837260 containerd[1846]: time="2026-04-24T23:55:16.835986200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 24 23:55:16.837260 containerd[1846]: time="2026-04-24T23:55:16.836007100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 24 23:55:16.837260 containerd[1846]: time="2026-04-24T23:55:16.836024500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 24 23:55:16.837260 containerd[1846]: time="2026-04-24T23:55:16.836041600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 24 23:55:16.837260 containerd[1846]: time="2026-04-24T23:55:16.836064200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 24 23:55:16.837260 containerd[1846]: time="2026-04-24T23:55:16.836093200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 24 23:55:16.837260 containerd[1846]: time="2026-04-24T23:55:16.836111000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 24 23:55:16.837260 containerd[1846]: time="2026-04-24T23:55:16.836126200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 24 23:55:16.837791 containerd[1846]: time="2026-04-24T23:55:16.836183300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 24 23:55:16.837791 containerd[1846]: time="2026-04-24T23:55:16.836206900Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 24 23:55:16.837791 containerd[1846]: time="2026-04-24T23:55:16.836222400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 24 23:55:16.837791 containerd[1846]: time="2026-04-24T23:55:16.836238700Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 24 23:55:16.837791 containerd[1846]: time="2026-04-24T23:55:16.836252400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 24 23:55:16.837791 containerd[1846]: time="2026-04-24T23:55:16.836268000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 24 23:55:16.837791 containerd[1846]: time="2026-04-24T23:55:16.836283900Z" level=info msg="NRI interface is disabled by configuration."
Apr 24 23:55:16.837791 containerd[1846]: time="2026-04-24T23:55:16.836306400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 24 23:55:16.838910 containerd[1846]: time="2026-04-24T23:55:16.836692300Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 24 23:55:16.838910 containerd[1846]: time="2026-04-24T23:55:16.838206300Z" level=info msg="Connect containerd service"
Apr 24 23:55:16.838910 containerd[1846]: time="2026-04-24T23:55:16.838259100Z" level=info msg="using legacy CRI server"
Apr 24 23:55:16.838910 containerd[1846]: time="2026-04-24T23:55:16.838269000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 24 23:55:16.838910 containerd[1846]: time="2026-04-24T23:55:16.838390900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 24 23:55:16.844567 containerd[1846]: time="2026-04-24T23:55:16.839604300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 24 23:55:16.844567 containerd[1846]: time="2026-04-24T23:55:16.839855000Z" level=info msg="Start subscribing containerd event"
Apr 24 23:55:16.844567 containerd[1846]: time="2026-04-24T23:55:16.839927200Z" level=info msg="Start recovering state"
Apr 24 23:55:16.844567 containerd[1846]: time="2026-04-24T23:55:16.839983700Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 24 23:55:16.844567 containerd[1846]: time="2026-04-24T23:55:16.839998700Z" level=info msg="Start event monitor"
Apr 24 23:55:16.844567 containerd[1846]: time="2026-04-24T23:55:16.840020000Z" level=info msg="Start snapshots syncer"
Apr 24 23:55:16.844567 containerd[1846]: time="2026-04-24T23:55:16.840034000Z" level=info msg="Start cni network conf syncer for default"
Apr 24 23:55:16.844567 containerd[1846]: time="2026-04-24T23:55:16.840045400Z" level=info msg="Start streaming server"
Apr 24 23:55:16.844567 containerd[1846]: time="2026-04-24T23:55:16.840045500Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 24 23:55:16.840263 systemd[1]: Started containerd.service - containerd container runtime.
Apr 24 23:55:16.854351 containerd[1846]: time="2026-04-24T23:55:16.853024900Z" level=info msg="containerd successfully booted in 0.136342s"
Apr 24 23:55:16.977327 sshd_keygen[1838]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 24 23:55:17.006705 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 24 23:55:17.019227 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 24 23:55:17.024082 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Apr 24 23:55:17.038090 systemd[1]: issuegen.service: Deactivated successfully. Apr 24 23:55:17.038427 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 24 23:55:17.057063 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 24 23:55:17.073923 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Apr 24 23:55:17.112384 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 24 23:55:17.125065 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 24 23:55:17.129431 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 24 23:55:17.133431 systemd[1]: Reached target getty.target - Login Prompts. Apr 24 23:55:17.277948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:55:17.282414 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 24 23:55:17.284588 (kubelet)[1978]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:55:17.286909 systemd[1]: Startup finished in 12.446s (kernel) + 13.233s (userspace) = 25.680s. Apr 24 23:55:17.610357 login[1969]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Apr 24 23:55:17.622573 login[1968]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 24 23:55:17.639988 systemd-logind[1810]: New session 2 of user core. Apr 24 23:55:17.642220 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 24 23:55:17.649307 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 24 23:55:17.671096 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Apr 24 23:55:17.680169 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 24 23:55:17.691440 (systemd)[1991]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 24 23:55:17.898560 systemd[1991]: Queued start job for default target default.target. Apr 24 23:55:17.899058 systemd[1991]: Created slice app.slice - User Application Slice. Apr 24 23:55:17.899088 systemd[1991]: Reached target paths.target - Paths. Apr 24 23:55:17.899105 systemd[1991]: Reached target timers.target - Timers. Apr 24 23:55:17.904515 systemd[1991]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 24 23:55:17.917166 systemd[1991]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 24 23:55:17.917251 systemd[1991]: Reached target sockets.target - Sockets. Apr 24 23:55:17.917270 systemd[1991]: Reached target basic.target - Basic System. Apr 24 23:55:17.917385 systemd[1991]: Reached target default.target - Main User Target. Apr 24 23:55:17.917423 systemd[1991]: Startup finished in 214ms. Apr 24 23:55:17.918914 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 24 23:55:17.924059 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 24 23:55:17.999156 kubelet[1978]: E0424 23:55:17.999124 1978 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:55:18.003696 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:55:18.004024 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 24 23:55:18.594557 waagent[1965]: 2026-04-24T23:55:18.594440Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Apr 24 23:55:18.635297 waagent[1965]: 2026-04-24T23:55:18.594983Z INFO Daemon Daemon OS: flatcar 4081.3.6
Apr 24 23:55:18.635297 waagent[1965]: 2026-04-24T23:55:18.596031Z INFO Daemon Daemon Python: 3.11.9
Apr 24 23:55:18.635297 waagent[1965]: 2026-04-24T23:55:18.597240Z INFO Daemon Daemon Run daemon
Apr 24 23:55:18.635297 waagent[1965]: 2026-04-24T23:55:18.598125Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6'
Apr 24 23:55:18.635297 waagent[1965]: 2026-04-24T23:55:18.598998Z INFO Daemon Daemon Using waagent for provisioning
Apr 24 23:55:18.635297 waagent[1965]: 2026-04-24T23:55:18.599709Z INFO Daemon Daemon Activate resource disk
Apr 24 23:55:18.635297 waagent[1965]: 2026-04-24T23:55:18.600079Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Apr 24 23:55:18.635297 waagent[1965]: 2026-04-24T23:55:18.604445Z INFO Daemon Daemon Found device: None
Apr 24 23:55:18.635297 waagent[1965]: 2026-04-24T23:55:18.605316Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Apr 24 23:55:18.635297 waagent[1965]: 2026-04-24T23:55:18.606437Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Apr 24 23:55:18.635297 waagent[1965]: 2026-04-24T23:55:18.608764Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Apr 24 23:55:18.635297 waagent[1965]: 2026-04-24T23:55:18.609500Z INFO Daemon Daemon Running default provisioning handler
Apr 24 23:55:18.614522 login[1969]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Apr 24 23:55:18.639482 waagent[1965]: 2026-04-24T23:55:18.639418Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Apr 24 23:55:18.642492 systemd-logind[1810]: New session 1 of user core.
Apr 24 23:55:18.648034 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 24 23:55:18.648448 waagent[1965]: 2026-04-24T23:55:18.647509Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Apr 24 23:55:18.663077 waagent[1965]: 2026-04-24T23:55:18.663008Z INFO Daemon Daemon cloud-init is enabled: False
Apr 24 23:55:18.666763 waagent[1965]: 2026-04-24T23:55:18.665981Z INFO Daemon Daemon Copying ovf-env.xml
Apr 24 23:55:18.749104 waagent[1965]: 2026-04-24T23:55:18.748994Z INFO Daemon Daemon Successfully mounted dvd
Apr 24 23:55:18.763435 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Apr 24 23:55:18.766275 waagent[1965]: 2026-04-24T23:55:18.766199Z INFO Daemon Daemon Detect protocol endpoint
Apr 24 23:55:18.782327 waagent[1965]: 2026-04-24T23:55:18.766558Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Apr 24 23:55:18.782327 waagent[1965]: 2026-04-24T23:55:18.767536Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Apr 24 23:55:18.782327 waagent[1965]: 2026-04-24T23:55:18.768442Z INFO Daemon Daemon Test for route to 168.63.129.16
Apr 24 23:55:18.782327 waagent[1965]: 2026-04-24T23:55:18.769054Z INFO Daemon Daemon Route to 168.63.129.16 exists
Apr 24 23:55:18.782327 waagent[1965]: 2026-04-24T23:55:18.769880Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Apr 24 23:55:18.792128 waagent[1965]: 2026-04-24T23:55:18.792079Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Apr 24 23:55:18.801195 waagent[1965]: 2026-04-24T23:55:18.792598Z INFO Daemon Daemon Wire protocol version:2012-11-30
Apr 24 23:55:18.801195 waagent[1965]: 2026-04-24T23:55:18.793571Z INFO Daemon Daemon Server preferred version:2015-04-05
Apr 24 23:55:18.922363 waagent[1965]: 2026-04-24T23:55:18.922205Z INFO Daemon Daemon Initializing goal state during protocol detection
Apr 24 23:55:18.926170 waagent[1965]: 2026-04-24T23:55:18.926105Z INFO Daemon Daemon Forcing an update of the goal state.
Apr 24 23:55:18.931948 waagent[1965]: 2026-04-24T23:55:18.931898Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Apr 24 23:55:18.948075 waagent[1965]: 2026-04-24T23:55:18.948024Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.181
Apr 24 23:55:18.965005 waagent[1965]: 2026-04-24T23:55:18.948606Z INFO Daemon
Apr 24 23:55:18.965005 waagent[1965]: 2026-04-24T23:55:18.949315Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: ef7fc53d-67e0-4b53-b562-58f287eac2a1 eTag: 6420472827394792529 source: Fabric]
Apr 24 23:55:18.965005 waagent[1965]: 2026-04-24T23:55:18.950505Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Apr 24 23:55:18.965005 waagent[1965]: 2026-04-24T23:55:18.951245Z INFO Daemon Apr 24 23:55:18.965005 waagent[1965]: 2026-04-24T23:55:18.951720Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 24 23:55:18.967405 waagent[1965]: 2026-04-24T23:55:18.967362Z INFO Daemon Daemon Downloading artifacts profile blob Apr 24 23:55:19.079981 waagent[1965]: 2026-04-24T23:55:19.079898Z INFO Daemon Downloaded certificate {'thumbprint': '891701EF834A9E6AA7196CFE51638035D7AC1613', 'hasPrivateKey': True} Apr 24 23:55:19.085616 waagent[1965]: 2026-04-24T23:55:19.085547Z INFO Daemon Fetch goal state completed Apr 24 23:55:19.120061 waagent[1965]: 2026-04-24T23:55:19.120007Z INFO Daemon Daemon Starting provisioning Apr 24 23:55:19.127469 waagent[1965]: 2026-04-24T23:55:19.120264Z INFO Daemon Daemon Handle ovf-env.xml. Apr 24 23:55:19.127469 waagent[1965]: 2026-04-24T23:55:19.121395Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-bfbb2fd0ff] Apr 24 23:55:19.129225 waagent[1965]: 2026-04-24T23:55:19.129175Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-bfbb2fd0ff] Apr 24 23:55:19.138363 waagent[1965]: 2026-04-24T23:55:19.129519Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 24 23:55:19.138363 waagent[1965]: 2026-04-24T23:55:19.130694Z INFO Daemon Daemon Primary interface is [eth0] Apr 24 23:55:19.155708 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:55:19.155717 systemd-networkd[1416]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 24 23:55:19.155806 systemd-networkd[1416]: eth0: DHCP lease lost Apr 24 23:55:19.157224 waagent[1965]: 2026-04-24T23:55:19.157134Z INFO Daemon Daemon Create user account if not exists Apr 24 23:55:19.160035 waagent[1965]: 2026-04-24T23:55:19.157602Z INFO Daemon Daemon User core already exists, skip useradd Apr 24 23:55:19.160035 waagent[1965]: 2026-04-24T23:55:19.158694Z INFO Daemon Daemon Configure sudoer Apr 24 23:55:19.160035 waagent[1965]: 2026-04-24T23:55:19.159428Z INFO Daemon Daemon Configure sshd Apr 24 23:55:19.160391 waagent[1965]: 2026-04-24T23:55:19.160347Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 24 23:55:19.161376 waagent[1965]: 2026-04-24T23:55:19.161337Z INFO Daemon Daemon Deploy ssh public key. Apr 24 23:55:19.175878 systemd-networkd[1416]: eth0: DHCPv6 lease lost Apr 24 23:55:19.218790 systemd-networkd[1416]: eth0: DHCPv4 address 10.0.0.31/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 24 23:55:28.252954 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 24 23:55:28.258306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:55:28.386910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:55:28.387160 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:55:29.085882 kubelet[2059]: E0424 23:55:29.085802 2059 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:55:29.089600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:55:29.089878 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:55:39.314961 chronyd[1818]: Selected source PHC0 Apr 24 23:55:39.329342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 24 23:55:39.334965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:55:39.450903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:55:39.453923 (kubelet)[2079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:55:40.141652 kubelet[2079]: E0424 23:55:40.141596 2079 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:55:40.144985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:55:40.145243 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 24 23:55:49.224973 waagent[1965]: 2026-04-24T23:55:49.224911Z INFO Daemon Daemon Provisioning complete Apr 24 23:55:49.237950 waagent[1965]: 2026-04-24T23:55:49.237889Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 24 23:55:49.242118 waagent[1965]: 2026-04-24T23:55:49.240133Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Apr 24 23:55:49.248826 waagent[1965]: 2026-04-24T23:55:49.245107Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Apr 24 23:55:49.368400 waagent[2087]: 2026-04-24T23:55:49.368311Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Apr 24 23:55:49.368874 waagent[2087]: 2026-04-24T23:55:49.368474Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Apr 24 23:55:49.368874 waagent[2087]: 2026-04-24T23:55:49.368557Z INFO ExtHandler ExtHandler Python: 3.11.9 Apr 24 23:55:49.409023 waagent[2087]: 2026-04-24T23:55:49.408931Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 24 23:55:49.409254 waagent[2087]: 2026-04-24T23:55:49.409201Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 24 23:55:49.409358 waagent[2087]: 2026-04-24T23:55:49.409311Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 24 23:55:49.416278 waagent[2087]: 2026-04-24T23:55:49.416208Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 24 23:55:49.425759 waagent[2087]: 2026-04-24T23:55:49.425699Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.181 Apr 24 23:55:49.426246 waagent[2087]: 2026-04-24T23:55:49.426191Z INFO ExtHandler Apr 24 23:55:49.426330 waagent[2087]: 2026-04-24T23:55:49.426285Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0f9236fd-f51b-45de-9dcb-069df915644d eTag: 
6420472827394792529 source: Fabric] Apr 24 23:55:49.426637 waagent[2087]: 2026-04-24T23:55:49.426587Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Apr 24 23:55:49.427237 waagent[2087]: 2026-04-24T23:55:49.427182Z INFO ExtHandler Apr 24 23:55:49.427307 waagent[2087]: 2026-04-24T23:55:49.427269Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 24 23:55:49.430236 waagent[2087]: 2026-04-24T23:55:49.430191Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 24 23:55:49.487173 waagent[2087]: 2026-04-24T23:55:49.487033Z INFO ExtHandler Downloaded certificate {'thumbprint': '891701EF834A9E6AA7196CFE51638035D7AC1613', 'hasPrivateKey': True} Apr 24 23:55:49.487644 waagent[2087]: 2026-04-24T23:55:49.487585Z INFO ExtHandler Fetch goal state completed Apr 24 23:55:49.501473 waagent[2087]: 2026-04-24T23:55:49.501412Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2087 Apr 24 23:55:49.501629 waagent[2087]: 2026-04-24T23:55:49.501580Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Apr 24 23:55:49.503174 waagent[2087]: 2026-04-24T23:55:49.503118Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Apr 24 23:55:49.503534 waagent[2087]: 2026-04-24T23:55:49.503485Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 24 23:55:49.537459 waagent[2087]: 2026-04-24T23:55:49.537416Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 24 23:55:49.537664 waagent[2087]: 2026-04-24T23:55:49.537620Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 24 23:55:49.544248 waagent[2087]: 2026-04-24T23:55:49.544208Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Apr 24 23:55:49.551069 systemd[1]: Reloading requested from client PID 2100 ('systemctl') (unit waagent.service)... Apr 24 23:55:49.551088 systemd[1]: Reloading... Apr 24 23:55:49.627833 zram_generator::config[2134]: No configuration found. Apr 24 23:55:49.771317 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:55:49.851544 systemd[1]: Reloading finished in 299 ms. Apr 24 23:55:49.878880 waagent[2087]: 2026-04-24T23:55:49.878221Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Apr 24 23:55:49.886191 systemd[1]: Reloading requested from client PID 2196 ('systemctl') (unit waagent.service)... Apr 24 23:55:49.886209 systemd[1]: Reloading... Apr 24 23:55:49.978801 zram_generator::config[2230]: No configuration found. Apr 24 23:55:50.100645 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:55:50.180631 systemd[1]: Reloading finished in 294 ms. Apr 24 23:55:50.204341 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 24 23:55:50.206933 waagent[2087]: 2026-04-24T23:55:50.204404Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 24 23:55:50.206933 waagent[2087]: 2026-04-24T23:55:50.204595Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 24 23:55:50.211196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:55:51.012450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:55:51.024170 (kubelet)[2310]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:55:51.061293 kubelet[2310]: E0424 23:55:51.061211 2310 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:55:51.063916 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:55:51.064256 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:55:51.253843 waagent[2087]: 2026-04-24T23:55:51.253721Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Apr 24 23:55:51.254555 waagent[2087]: 2026-04-24T23:55:51.254491Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Apr 24 23:55:51.255359 waagent[2087]: 2026-04-24T23:55:51.255281Z INFO ExtHandler ExtHandler Starting env monitor service. Apr 24 23:55:51.255977 waagent[2087]: 2026-04-24T23:55:51.255909Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Apr 24 23:55:51.256083 waagent[2087]: 2026-04-24T23:55:51.255987Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 24 23:55:51.256375 waagent[2087]: 2026-04-24T23:55:51.256299Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 24 23:55:51.256446 waagent[2087]: 2026-04-24T23:55:51.256405Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 24 23:55:51.256721 waagent[2087]: 2026-04-24T23:55:51.256670Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Apr 24 23:55:51.257087 waagent[2087]: 2026-04-24T23:55:51.257037Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 24 23:55:51.257276 waagent[2087]: 2026-04-24T23:55:51.257231Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 24 23:55:51.258018 waagent[2087]: 2026-04-24T23:55:51.257855Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 24 23:55:51.258018 waagent[2087]: 2026-04-24T23:55:51.257955Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 24 23:55:51.258226 waagent[2087]: 2026-04-24T23:55:51.258181Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 24 23:55:51.258226 waagent[2087]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 24 23:55:51.258226 waagent[2087]: eth0 00000000 0100000A 0003 0 0 1024 00000000 0 0 0 Apr 24 23:55:51.258226 waagent[2087]: eth0 0000000A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 24 23:55:51.258226 waagent[2087]: eth0 0100000A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 24 23:55:51.258226 waagent[2087]: eth0 10813FA8 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 24 23:55:51.258226 waagent[2087]: eth0 FEA9FEA9 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 24 23:55:51.258716 waagent[2087]: 2026-04-24T23:55:51.258669Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Apr 24 23:55:51.258925 waagent[2087]: 2026-04-24T23:55:51.258854Z INFO EnvHandler ExtHandler Configure routes Apr 24 23:55:51.259008 waagent[2087]: 2026-04-24T23:55:51.258959Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 24 23:55:51.260289 waagent[2087]: 2026-04-24T23:55:51.260225Z INFO EnvHandler ExtHandler Gateway:None Apr 24 23:55:51.260627 waagent[2087]: 2026-04-24T23:55:51.260568Z INFO EnvHandler ExtHandler Routes:None Apr 24 23:55:51.265830 waagent[2087]: 2026-04-24T23:55:51.265717Z INFO ExtHandler ExtHandler Apr 24 23:55:51.266953 waagent[2087]: 2026-04-24T23:55:51.266913Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 76a753da-297e-427e-95bf-ab6cbfdd29bb correlation 7a53601c-97ac-42bf-b131-4f9c1f2980fa created: 2026-04-24T23:54:25.536201Z] Apr 24 23:55:51.267310 waagent[2087]: 2026-04-24T23:55:51.267262Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Apr 24 23:55:51.267846 waagent[2087]: 2026-04-24T23:55:51.267801Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Apr 24 23:55:51.301410 waagent[2087]: 2026-04-24T23:55:51.301311Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 83467F5B-B85C-4436-B7B4-705DBB981211;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Apr 24 23:55:51.311570 waagent[2087]: 2026-04-24T23:55:51.311506Z INFO MonitorHandler ExtHandler Network interfaces: Apr 24 23:55:51.311570 waagent[2087]: Executing ['ip', '-a', '-o', 'link']: Apr 24 23:55:51.311570 waagent[2087]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 24 23:55:51.311570 waagent[2087]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:4a:79:96 brd ff:ff:ff:ff:ff:ff Apr 24 23:55:51.311570 waagent[2087]: 3: enP3748s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:4a:79:96 brd ff:ff:ff:ff:ff:ff\ altname enP3748p0s2 Apr 24 23:55:51.311570 waagent[2087]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 24 23:55:51.311570 waagent[2087]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 24 23:55:51.311570 waagent[2087]: 2: eth0 inet 10.0.0.31/24 metric 1024 brd 10.0.0.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 24 23:55:51.311570 waagent[2087]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 24 23:55:51.311570 waagent[2087]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Apr 24 23:55:51.311570 waagent[2087]: 2: eth0 inet6 fe80::7eed:8dff:fe4a:7996/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Apr 24 23:55:51.367008 waagent[2087]: 2026-04-24T23:55:51.366943Z INFO EnvHandler ExtHandler Successfully 
added Azure fabric firewall rules. Current Firewall rules: Apr 24 23:55:51.367008 waagent[2087]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:55:51.367008 waagent[2087]: pkts bytes target prot opt in out source destination Apr 24 23:55:51.367008 waagent[2087]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:55:51.367008 waagent[2087]: pkts bytes target prot opt in out source destination Apr 24 23:55:51.367008 waagent[2087]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:55:51.367008 waagent[2087]: pkts bytes target prot opt in out source destination Apr 24 23:55:51.367008 waagent[2087]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 24 23:55:51.367008 waagent[2087]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 24 23:55:51.367008 waagent[2087]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 24 23:55:51.370253 waagent[2087]: 2026-04-24T23:55:51.370198Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 24 23:55:51.370253 waagent[2087]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:55:51.370253 waagent[2087]: pkts bytes target prot opt in out source destination Apr 24 23:55:51.370253 waagent[2087]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:55:51.370253 waagent[2087]: pkts bytes target prot opt in out source destination Apr 24 23:55:51.370253 waagent[2087]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:55:51.370253 waagent[2087]: pkts bytes target prot opt in out source destination Apr 24 23:55:51.370253 waagent[2087]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 24 23:55:51.370253 waagent[2087]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 24 23:55:51.370253 waagent[2087]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 24 23:55:51.370607 waagent[2087]: 2026-04-24T23:55:51.370485Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Apr 24 23:55:57.379574 
kernel: hv_balloon: Max. dynamic memory size: 8192 MB Apr 24 23:56:00.641463 update_engine[1819]: I20260424 23:56:00.641378 1819 update_attempter.cc:509] Updating boot flags... Apr 24 23:56:00.708786 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2358) Apr 24 23:56:00.805775 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2358) Apr 24 23:56:01.079297 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 24 23:56:01.090996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:56:01.417928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:56:01.421335 (kubelet)[2424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:56:01.863840 kubelet[2424]: E0424 23:56:01.863736 2424 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:56:01.866465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:56:01.866836 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:56:12.079370 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 24 23:56:12.091589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:56:12.300883 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 24 23:56:12.307040 systemd[1]: Started sshd@0-10.0.0.31:22-4.175.71.9:37884.service - OpenSSH per-connection server daemon (4.175.71.9:37884). 
Apr 24 23:56:12.667497 sshd[2436]: Accepted publickey for core from 4.175.71.9 port 37884 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:56:12.668973 sshd[2436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:12.673639 systemd-logind[1810]: New session 3 of user core. Apr 24 23:56:12.684356 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 24 23:56:12.766925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:56:12.777183 (kubelet)[2448]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:56:12.805064 systemd[1]: Started sshd@1-10.0.0.31:22-4.175.71.9:37886.service - OpenSSH per-connection server daemon (4.175.71.9:37886). Apr 24 23:56:12.867528 kubelet[2448]: E0424 23:56:12.867074 2448 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:56:12.870785 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:56:12.871092 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:56:12.934728 sshd[2455]: Accepted publickey for core from 4.175.71.9 port 37886 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:56:12.936308 sshd[2455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:12.941395 systemd-logind[1810]: New session 4 of user core. Apr 24 23:56:12.951102 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 24 23:56:13.047591 sshd[2455]: pam_unix(sshd:session): session closed for user core Apr 24 23:56:13.050256 systemd[1]: sshd@1-10.0.0.31:22-4.175.71.9:37886.service: Deactivated successfully. Apr 24 23:56:13.054225 systemd-logind[1810]: Session 4 logged out. Waiting for processes to exit. Apr 24 23:56:13.055158 systemd[1]: session-4.scope: Deactivated successfully. Apr 24 23:56:13.056219 systemd-logind[1810]: Removed session 4. Apr 24 23:56:13.069165 systemd[1]: Started sshd@2-10.0.0.31:22-4.175.71.9:37890.service - OpenSSH per-connection server daemon (4.175.71.9:37890). Apr 24 23:56:13.174765 sshd[2465]: Accepted publickey for core from 4.175.71.9 port 37890 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:56:13.176225 sshd[2465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:13.181221 systemd-logind[1810]: New session 5 of user core. Apr 24 23:56:13.182993 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 24 23:56:13.277330 sshd[2465]: pam_unix(sshd:session): session closed for user core Apr 24 23:56:13.283203 systemd[1]: sshd@2-10.0.0.31:22-4.175.71.9:37890.service: Deactivated successfully. Apr 24 23:56:13.288422 systemd-logind[1810]: Session 5 logged out. Waiting for processes to exit. Apr 24 23:56:13.289820 systemd[1]: session-5.scope: Deactivated successfully. Apr 24 23:56:13.290852 systemd-logind[1810]: Removed session 5. Apr 24 23:56:13.298293 systemd[1]: Started sshd@3-10.0.0.31:22-4.175.71.9:37892.service - OpenSSH per-connection server daemon (4.175.71.9:37892). Apr 24 23:56:13.409321 sshd[2473]: Accepted publickey for core from 4.175.71.9 port 37892 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:56:13.410393 sshd[2473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:13.414958 systemd-logind[1810]: New session 6 of user core. 
Apr 24 23:56:13.421984 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 24 23:56:13.518305 sshd[2473]: pam_unix(sshd:session): session closed for user core Apr 24 23:56:13.521125 systemd[1]: sshd@3-10.0.0.31:22-4.175.71.9:37892.service: Deactivated successfully. Apr 24 23:56:13.525912 systemd-logind[1810]: Session 6 logged out. Waiting for processes to exit. Apr 24 23:56:13.526316 systemd[1]: session-6.scope: Deactivated successfully. Apr 24 23:56:13.527654 systemd-logind[1810]: Removed session 6. Apr 24 23:56:13.539031 systemd[1]: Started sshd@4-10.0.0.31:22-4.175.71.9:37894.service - OpenSSH per-connection server daemon (4.175.71.9:37894). Apr 24 23:56:13.645187 sshd[2481]: Accepted publickey for core from 4.175.71.9 port 37894 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:56:13.646799 sshd[2481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:13.650795 systemd-logind[1810]: New session 7 of user core. Apr 24 23:56:13.656017 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 24 23:56:13.835158 sudo[2485]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 24 23:56:13.835550 sudo[2485]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:56:13.860105 sudo[2485]: pam_unix(sudo:session): session closed for user root Apr 24 23:56:13.875258 sshd[2481]: pam_unix(sshd:session): session closed for user core Apr 24 23:56:13.878309 systemd[1]: sshd@4-10.0.0.31:22-4.175.71.9:37894.service: Deactivated successfully. Apr 24 23:56:13.883215 systemd-logind[1810]: Session 7 logged out. Waiting for processes to exit. Apr 24 23:56:13.884710 systemd[1]: session-7.scope: Deactivated successfully. Apr 24 23:56:13.885716 systemd-logind[1810]: Removed session 7. Apr 24 23:56:13.896214 systemd[1]: Started sshd@5-10.0.0.31:22-4.175.71.9:37902.service - OpenSSH per-connection server daemon (4.175.71.9:37902). 
Apr 24 23:56:14.001964 sshd[2490]: Accepted publickey for core from 4.175.71.9 port 37902 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:56:14.003445 sshd[2490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:14.007982 systemd-logind[1810]: New session 8 of user core. Apr 24 23:56:14.018239 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 24 23:56:14.099444 sudo[2495]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 24 23:56:14.099864 sudo[2495]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:56:14.104417 sudo[2495]: pam_unix(sudo:session): session closed for user root Apr 24 23:56:14.109404 sudo[2494]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 24 23:56:14.109805 sudo[2494]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:56:14.124035 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 24 23:56:14.125935 auditctl[2498]: No rules Apr 24 23:56:14.127031 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 23:56:14.127390 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 24 23:56:14.130358 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:56:14.161001 augenrules[2517]: No rules Apr 24 23:56:14.162654 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:56:14.166960 sudo[2494]: pam_unix(sudo:session): session closed for user root Apr 24 23:56:14.182986 sshd[2490]: pam_unix(sshd:session): session closed for user core Apr 24 23:56:14.188672 systemd[1]: sshd@5-10.0.0.31:22-4.175.71.9:37902.service: Deactivated successfully. Apr 24 23:56:14.192882 systemd[1]: session-8.scope: Deactivated successfully. 
Apr 24 23:56:14.193699 systemd-logind[1810]: Session 8 logged out. Waiting for processes to exit. Apr 24 23:56:14.194585 systemd-logind[1810]: Removed session 8. Apr 24 23:56:14.203987 systemd[1]: Started sshd@6-10.0.0.31:22-4.175.71.9:37908.service - OpenSSH per-connection server daemon (4.175.71.9:37908). Apr 24 23:56:14.309338 sshd[2526]: Accepted publickey for core from 4.175.71.9 port 37908 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:56:14.310792 sshd[2526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:56:14.315802 systemd-logind[1810]: New session 9 of user core. Apr 24 23:56:14.321648 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 24 23:56:14.403999 sudo[2530]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 24 23:56:14.404377 sudo[2530]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:56:15.360022 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 24 23:56:15.361275 (dockerd)[2546]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 24 23:56:16.416540 dockerd[2546]: time="2026-04-24T23:56:16.416481289Z" level=info msg="Starting up" Apr 24 23:56:16.787308 dockerd[2546]: time="2026-04-24T23:56:16.786921530Z" level=info msg="Loading containers: start." Apr 24 23:56:16.929770 kernel: Initializing XFRM netlink socket Apr 24 23:56:17.077730 systemd-networkd[1416]: docker0: Link UP Apr 24 23:56:17.109216 dockerd[2546]: time="2026-04-24T23:56:17.109176439Z" level=info msg="Loading containers: done." 
Apr 24 23:56:17.161662 dockerd[2546]: time="2026-04-24T23:56:17.161608719Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 24 23:56:17.161839 dockerd[2546]: time="2026-04-24T23:56:17.161724121Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 24 23:56:17.161886 dockerd[2546]: time="2026-04-24T23:56:17.161858623Z" level=info msg="Daemon has completed initialization" Apr 24 23:56:17.227314 dockerd[2546]: time="2026-04-24T23:56:17.226891914Z" level=info msg="API listen on /run/docker.sock" Apr 24 23:56:17.227149 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 24 23:56:17.771306 containerd[1846]: time="2026-04-24T23:56:17.770989347Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 24 23:56:18.586836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2098115552.mount: Deactivated successfully. 
Apr 24 23:56:20.133390 containerd[1846]: time="2026-04-24T23:56:20.133333217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:20.135705 containerd[1846]: time="2026-04-24T23:56:20.135659652Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193997" Apr 24 23:56:20.138660 containerd[1846]: time="2026-04-24T23:56:20.138609297Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:20.143021 containerd[1846]: time="2026-04-24T23:56:20.142867162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:20.145050 containerd[1846]: time="2026-04-24T23:56:20.144443886Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 2.373410339s" Apr 24 23:56:20.145050 containerd[1846]: time="2026-04-24T23:56:20.144483786Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 24 23:56:20.145362 containerd[1846]: time="2026-04-24T23:56:20.145330799Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 24 23:56:21.726581 containerd[1846]: time="2026-04-24T23:56:21.726474855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:21.729636 containerd[1846]: time="2026-04-24T23:56:21.729550802Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171455" Apr 24 23:56:21.732175 containerd[1846]: time="2026-04-24T23:56:21.732127341Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:21.737168 containerd[1846]: time="2026-04-24T23:56:21.737111617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:21.738309 containerd[1846]: time="2026-04-24T23:56:21.738165133Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.592793033s" Apr 24 23:56:21.738309 containerd[1846]: time="2026-04-24T23:56:21.738204433Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 24 23:56:21.739148 containerd[1846]: time="2026-04-24T23:56:21.739119147Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 24 23:56:23.038805 containerd[1846]: time="2026-04-24T23:56:23.038756420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:23.041152 containerd[1846]: 
time="2026-04-24T23:56:23.041032755Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289764" Apr 24 23:56:23.048275 containerd[1846]: time="2026-04-24T23:56:23.048217864Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:23.057813 containerd[1846]: time="2026-04-24T23:56:23.057636707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:23.059344 containerd[1846]: time="2026-04-24T23:56:23.059167430Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.320002282s" Apr 24 23:56:23.059344 containerd[1846]: time="2026-04-24T23:56:23.059209431Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 24 23:56:23.060133 containerd[1846]: time="2026-04-24T23:56:23.059875141Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 24 23:56:23.079157 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 24 23:56:23.085293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:56:23.197914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:56:23.208716 (kubelet)[2759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:56:23.243659 kubelet[2759]: E0424 23:56:23.243604 2759 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:56:23.246164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:56:23.246490 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:56:24.868009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3886094763.mount: Deactivated successfully. Apr 24 23:56:25.400409 containerd[1846]: time="2026-04-24T23:56:25.400318049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:25.402337 containerd[1846]: time="2026-04-24T23:56:25.402282979Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010719" Apr 24 23:56:25.404852 containerd[1846]: time="2026-04-24T23:56:25.404799417Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:25.408202 containerd[1846]: time="2026-04-24T23:56:25.408142868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:25.408985 containerd[1846]: time="2026-04-24T23:56:25.408836678Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id 
\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 2.348927536s" Apr 24 23:56:25.408985 containerd[1846]: time="2026-04-24T23:56:25.408873779Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 24 23:56:25.409539 containerd[1846]: time="2026-04-24T23:56:25.409509889Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 24 23:56:25.931477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1081322168.mount: Deactivated successfully. Apr 24 23:56:27.166038 containerd[1846]: time="2026-04-24T23:56:27.165984612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:27.168040 containerd[1846]: time="2026-04-24T23:56:27.167982742Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Apr 24 23:56:27.171047 containerd[1846]: time="2026-04-24T23:56:27.170994588Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:27.176734 containerd[1846]: time="2026-04-24T23:56:27.176335569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:27.177474 containerd[1846]: time="2026-04-24T23:56:27.177437886Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.767890397s" Apr 24 23:56:27.177565 containerd[1846]: time="2026-04-24T23:56:27.177480387Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 24 23:56:27.178369 containerd[1846]: time="2026-04-24T23:56:27.178341700Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 24 23:56:27.673864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3437945509.mount: Deactivated successfully. Apr 24 23:56:27.689817 containerd[1846]: time="2026-04-24T23:56:27.689780264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:27.696815 containerd[1846]: time="2026-04-24T23:56:27.696754377Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Apr 24 23:56:27.700226 containerd[1846]: time="2026-04-24T23:56:27.700083831Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:27.704590 containerd[1846]: time="2026-04-24T23:56:27.704456502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:27.706253 containerd[1846]: time="2026-04-24T23:56:27.705830024Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo 
digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 527.367123ms" Apr 24 23:56:27.706253 containerd[1846]: time="2026-04-24T23:56:27.705868225Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 24 23:56:27.706932 containerd[1846]: time="2026-04-24T23:56:27.706766739Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 24 23:56:28.292514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount108457647.mount: Deactivated successfully. Apr 24 23:56:29.746721 containerd[1846]: time="2026-04-24T23:56:29.746609319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:29.750337 containerd[1846]: time="2026-04-24T23:56:29.750086575Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719434" Apr 24 23:56:29.753830 containerd[1846]: time="2026-04-24T23:56:29.753381729Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:29.758596 containerd[1846]: time="2026-04-24T23:56:29.758564512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:29.762778 containerd[1846]: time="2026-04-24T23:56:29.762725380Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.055928141s" Apr 24 
23:56:29.762868 containerd[1846]: time="2026-04-24T23:56:29.762785381Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 24 23:56:33.329461 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 24 23:56:33.338326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:56:33.718039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:56:33.724442 (kubelet)[2927]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:56:33.972772 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:56:34.083473 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 23:56:34.084040 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:56:34.094297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:56:34.128317 systemd[1]: Reloading requested from client PID 2943 ('systemctl') (unit session-9.scope)... Apr 24 23:56:34.128335 systemd[1]: Reloading... Apr 24 23:56:34.235767 zram_generator::config[2983]: No configuration found. Apr 24 23:56:34.375654 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:56:34.466055 systemd[1]: Reloading finished in 337 ms. Apr 24 23:56:34.514769 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 24 23:56:34.514926 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 24 23:56:34.515359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:56:34.523032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 24 23:56:35.323920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:56:35.329680 (kubelet)[3060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:56:35.363769 kubelet[3060]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:56:35.363769 kubelet[3060]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 23:56:35.363769 kubelet[3060]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:56:35.425021 kubelet[3060]: I0424 23:56:35.424890 3060 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:56:35.565104 kubelet[3060]: I0424 23:56:35.565061 3060 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 24 23:56:35.565104 kubelet[3060]: I0424 23:56:35.565090 3060 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:56:35.565451 kubelet[3060]: I0424 23:56:35.565427 3060 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:56:35.592440 kubelet[3060]: E0424 23:56:35.592400 3060 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.31:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 23:56:35.594555 kubelet[3060]: I0424 23:56:35.594134 3060 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:56:35.603183 kubelet[3060]: E0424 23:56:35.603133 3060 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:56:35.603183 kubelet[3060]: I0424 23:56:35.603183 3060 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 24 23:56:35.607441 kubelet[3060]: I0424 23:56:35.607414 3060 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 24 23:56:35.608513 kubelet[3060]: I0424 23:56:35.608478 3060 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:56:35.608716 kubelet[3060]: I0424 23:56:35.608510 3060 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.6-n-bfbb2fd0ff","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 24 23:56:35.608883 kubelet[3060]: I0424 23:56:35.608721 3060 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 23:56:35.608883 kubelet[3060]: I0424 23:56:35.608737 3060 container_manager_linux.go:303] "Creating device plugin manager" Apr 24 23:56:35.608961 kubelet[3060]: I0424 23:56:35.608940 3060 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:56:35.613042 kubelet[3060]: I0424 23:56:35.613020 3060 kubelet.go:480] "Attempting to sync 
node with API server" Apr 24 23:56:35.613135 kubelet[3060]: I0424 23:56:35.613045 3060 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:56:35.613135 kubelet[3060]: I0424 23:56:35.613076 3060 kubelet.go:386] "Adding apiserver pod source" Apr 24 23:56:35.614951 kubelet[3060]: I0424 23:56:35.614679 3060 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:56:35.621495 kubelet[3060]: E0424 23:56:35.621390 3060 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-bfbb2fd0ff&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:56:35.621875 kubelet[3060]: E0424 23:56:35.621850 3060 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:56:35.621972 kubelet[3060]: I0424 23:56:35.621937 3060 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:56:35.622762 kubelet[3060]: I0424 23:56:35.622634 3060 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:56:35.623767 kubelet[3060]: W0424 23:56:35.623500 3060 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 24 23:56:35.628046 kubelet[3060]: I0424 23:56:35.628025 3060 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 23:56:35.628116 kubelet[3060]: I0424 23:56:35.628081 3060 server.go:1289] "Started kubelet" Apr 24 23:56:35.629506 kubelet[3060]: I0424 23:56:35.629453 3060 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 23:56:35.634791 kubelet[3060]: E0424 23:56:35.630879 3060 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.31:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.31:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-bfbb2fd0ff.18a9704b0da36496 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-bfbb2fd0ff,UID:ci-4081.3.6-n-bfbb2fd0ff,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-bfbb2fd0ff,},FirstTimestamp:2026-04-24 23:56:35.628041366 +0000 UTC m=+0.294539519,LastTimestamp:2026-04-24 23:56:35.628041366 +0000 UTC m=+0.294539519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-bfbb2fd0ff,}" Apr 24 23:56:35.634791 kubelet[3060]: I0424 23:56:35.634124 3060 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:56:35.635692 kubelet[3060]: I0424 23:56:35.635673 3060 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:56:35.639121 kubelet[3060]: I0424 23:56:35.639097 3060 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 23:56:35.639376 kubelet[3060]: E0424 23:56:35.639341 3060 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-bfbb2fd0ff\" not found" Apr 24 23:56:35.640765 kubelet[3060]: I0424 23:56:35.639949 3060 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Apr 24 23:56:35.640765 kubelet[3060]: I0424 23:56:35.640195 3060 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:56:35.640765 kubelet[3060]: I0424 23:56:35.640419 3060 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:56:35.642695 kubelet[3060]: E0424 23:56:35.642660 3060 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-bfbb2fd0ff?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="200ms" Apr 24 23:56:35.643841 kubelet[3060]: I0424 23:56:35.643826 3060 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 23:56:35.643960 kubelet[3060]: I0424 23:56:35.643951 3060 reconciler.go:26] "Reconciler: start to sync state" Apr 24 23:56:35.644197 kubelet[3060]: I0424 23:56:35.644173 3060 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:56:35.644303 kubelet[3060]: I0424 23:56:35.644281 3060 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:56:35.646564 kubelet[3060]: I0424 23:56:35.646548 3060 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:56:35.651476 kubelet[3060]: E0424 23:56:35.651445 3060 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:56:35.673501 
kubelet[3060]: I0424 23:56:35.673460 3060 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 23:56:35.674577 kubelet[3060]: I0424 23:56:35.674549 3060 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 24 23:56:35.674680 kubelet[3060]: I0424 23:56:35.674587 3060 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 23:56:35.674680 kubelet[3060]: I0424 23:56:35.674609 3060 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 23:56:35.674680 kubelet[3060]: I0424 23:56:35.674618 3060 kubelet.go:2436] "Starting kubelet main sync loop" Apr 24 23:56:35.674680 kubelet[3060]: E0424 23:56:35.674658 3060 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:56:35.679257 kubelet[3060]: E0424 23:56:35.679112 3060 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:56:35.690720 kubelet[3060]: I0424 23:56:35.690695 3060 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:56:35.690720 kubelet[3060]: I0424 23:56:35.690716 3060 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:56:35.690859 kubelet[3060]: I0424 23:56:35.690734 3060 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:56:35.695235 kubelet[3060]: I0424 23:56:35.695214 3060 policy_none.go:49] "None policy: Start" Apr 24 23:56:35.695235 kubelet[3060]: I0424 23:56:35.695238 3060 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 23:56:35.695365 kubelet[3060]: I0424 23:56:35.695252 3060 state_mem.go:35] 
"Initializing new in-memory state store" Apr 24 23:56:35.701828 kubelet[3060]: E0424 23:56:35.701804 3060 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:56:35.702001 kubelet[3060]: I0424 23:56:35.701982 3060 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:56:35.702063 kubelet[3060]: I0424 23:56:35.702002 3060 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:56:35.703149 kubelet[3060]: I0424 23:56:35.703122 3060 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:56:35.706043 kubelet[3060]: E0424 23:56:35.705984 3060 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:56:35.706043 kubelet[3060]: E0424 23:56:35.706021 3060 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-bfbb2fd0ff\" not found" Apr 24 23:56:35.785999 kubelet[3060]: E0424 23:56:35.785965 3060 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-bfbb2fd0ff\" not found" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:35.790625 kubelet[3060]: E0424 23:56:35.790594 3060 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-bfbb2fd0ff\" not found" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:35.797043 kubelet[3060]: E0424 23:56:35.797017 3060 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-bfbb2fd0ff\" not found" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:35.803968 kubelet[3060]: I0424 23:56:35.803943 3060 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 
23:56:35.804287 kubelet[3060]: E0424 23:56:35.804262 3060 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:35.844151 kubelet[3060]: E0424 23:56:35.844033 3060 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-bfbb2fd0ff?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="400ms" Apr 24 23:56:35.845436 kubelet[3060]: I0424 23:56:35.845241 3060 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45d937bc6eafb11082488d12fd37e3fb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"45d937bc6eafb11082488d12fd37e3fb\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:35.845436 kubelet[3060]: I0424 23:56:35.845301 3060 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/173353cda242ca7ee123e6f4f3d037c8-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"173353cda242ca7ee123e6f4f3d037c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:35.845436 kubelet[3060]: I0424 23:56:35.845373 3060 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/173353cda242ca7ee123e6f4f3d037c8-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"173353cda242ca7ee123e6f4f3d037c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:35.845436 kubelet[3060]: I0424 23:56:35.845400 3060 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/173353cda242ca7ee123e6f4f3d037c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"173353cda242ca7ee123e6f4f3d037c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:35.845785 kubelet[3060]: I0424 23:56:35.845677 3060 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45d937bc6eafb11082488d12fd37e3fb-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"45d937bc6eafb11082488d12fd37e3fb\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:35.845785 kubelet[3060]: I0424 23:56:35.845707 3060 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45d937bc6eafb11082488d12fd37e3fb-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"45d937bc6eafb11082488d12fd37e3fb\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:35.845785 kubelet[3060]: I0424 23:56:35.845734 3060 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/173353cda242ca7ee123e6f4f3d037c8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"173353cda242ca7ee123e6f4f3d037c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:35.846065 kubelet[3060]: I0424 23:56:35.845793 3060 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/173353cda242ca7ee123e6f4f3d037c8-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"173353cda242ca7ee123e6f4f3d037c8\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:35.846065 kubelet[3060]: I0424 23:56:35.845851 3060 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe89b29e2d93816c2cbdf4eb288deea5-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"fe89b29e2d93816c2cbdf4eb288deea5\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:36.006498 kubelet[3060]: I0424 23:56:36.006462 3060 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:36.006939 kubelet[3060]: E0424 23:56:36.006897 3060 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:36.088136 containerd[1846]: time="2026-04-24T23:56:36.088089317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff,Uid:45d937bc6eafb11082488d12fd37e3fb,Namespace:kube-system,Attempt:0,}" Apr 24 23:56:36.093051 containerd[1846]: time="2026-04-24T23:56:36.092613791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff,Uid:173353cda242ca7ee123e6f4f3d037c8,Namespace:kube-system,Attempt:0,}" Apr 24 23:56:36.101042 containerd[1846]: time="2026-04-24T23:56:36.100782625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-bfbb2fd0ff,Uid:fe89b29e2d93816c2cbdf4eb288deea5,Namespace:kube-system,Attempt:0,}" Apr 24 23:56:36.245265 kubelet[3060]: E0424 23:56:36.245212 3060 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-bfbb2fd0ff?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="800ms" 
Apr 24 23:56:36.408987 kubelet[3060]: I0424 23:56:36.408888 3060 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:36.409553 kubelet[3060]: E0424 23:56:36.409236 3060 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:36.613434 kubelet[3060]: E0424 23:56:36.613395 3060 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:56:36.678167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3423433865.mount: Deactivated successfully. Apr 24 23:56:36.701222 containerd[1846]: time="2026-04-24T23:56:36.701176780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:56:36.704240 containerd[1846]: time="2026-04-24T23:56:36.704194729Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Apr 24 23:56:36.707151 containerd[1846]: time="2026-04-24T23:56:36.707115577Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:56:36.709929 containerd[1846]: time="2026-04-24T23:56:36.709895723Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:56:36.713655 containerd[1846]: 
time="2026-04-24T23:56:36.713614584Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:56:36.716902 containerd[1846]: time="2026-04-24T23:56:36.716869938Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:56:36.720316 containerd[1846]: time="2026-04-24T23:56:36.720046090Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:56:36.724552 containerd[1846]: time="2026-04-24T23:56:36.724514663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:56:36.725302 containerd[1846]: time="2026-04-24T23:56:36.725270875Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 632.582983ms" Apr 24 23:56:36.726758 containerd[1846]: time="2026-04-24T23:56:36.726708199Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 638.532981ms" Apr 24 23:56:36.730494 containerd[1846]: time="2026-04-24T23:56:36.730458061Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 629.617034ms" Apr 24 23:56:36.861803 kubelet[3060]: E0424 23:56:36.861721 3060 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:56:36.922010 kubelet[3060]: E0424 23:56:36.921968 3060 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-bfbb2fd0ff&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:56:37.046302 kubelet[3060]: E0424 23:56:37.046181 3060 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-bfbb2fd0ff?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="1.6s" Apr 24 23:56:37.161560 kubelet[3060]: E0424 23:56:37.161481 3060 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:56:37.187036 containerd[1846]: time="2026-04-24T23:56:37.185815335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:56:37.187036 containerd[1846]: time="2026-04-24T23:56:37.186334943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:56:37.187036 containerd[1846]: time="2026-04-24T23:56:37.186359344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:37.189813 containerd[1846]: time="2026-04-24T23:56:37.188400177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:37.197718 containerd[1846]: time="2026-04-24T23:56:37.196589312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:56:37.197718 containerd[1846]: time="2026-04-24T23:56:37.196650513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:56:37.197718 containerd[1846]: time="2026-04-24T23:56:37.196671713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:37.197718 containerd[1846]: time="2026-04-24T23:56:37.196779215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:37.199203 containerd[1846]: time="2026-04-24T23:56:37.198960551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:56:37.201726 containerd[1846]: time="2026-04-24T23:56:37.201398591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:56:37.201726 containerd[1846]: time="2026-04-24T23:56:37.201449191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:37.201726 containerd[1846]: time="2026-04-24T23:56:37.201578094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:37.212497 kubelet[3060]: I0424 23:56:37.212365 3060 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:37.213018 kubelet[3060]: E0424 23:56:37.212971 3060 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:37.307694 containerd[1846]: time="2026-04-24T23:56:37.306868222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-bfbb2fd0ff,Uid:fe89b29e2d93816c2cbdf4eb288deea5,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cc191d95dd338313a59c3ccf63bc3de491f5f71bbc005688b6760d3cbba5aff\"" Apr 24 23:56:37.322262 containerd[1846]: time="2026-04-24T23:56:37.322216574Z" level=info msg="CreateContainer within sandbox \"2cc191d95dd338313a59c3ccf63bc3de491f5f71bbc005688b6760d3cbba5aff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 23:56:37.323362 containerd[1846]: time="2026-04-24T23:56:37.323328092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff,Uid:173353cda242ca7ee123e6f4f3d037c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d7ffb2d6e4370ee5f3c56e66b8e2b6698fac54ed8e3979c4d8e5bc07b0b0492\"" Apr 24 23:56:37.329870 containerd[1846]: time="2026-04-24T23:56:37.329828599Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff,Uid:45d937bc6eafb11082488d12fd37e3fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"02c052c7c1ffbd3f5c70e0096ad2a30ef5186c5a5456ce0ff1ccc9fd1feb306b\"" Apr 24 23:56:37.332250 containerd[1846]: time="2026-04-24T23:56:37.332171437Z" level=info msg="CreateContainer within sandbox \"3d7ffb2d6e4370ee5f3c56e66b8e2b6698fac54ed8e3979c4d8e5bc07b0b0492\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 23:56:37.336709 containerd[1846]: time="2026-04-24T23:56:37.336659611Z" level=info msg="CreateContainer within sandbox \"02c052c7c1ffbd3f5c70e0096ad2a30ef5186c5a5456ce0ff1ccc9fd1feb306b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 23:56:37.383779 containerd[1846]: time="2026-04-24T23:56:37.383697683Z" level=info msg="CreateContainer within sandbox \"2cc191d95dd338313a59c3ccf63bc3de491f5f71bbc005688b6760d3cbba5aff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"83776075bc7723fdf7ee4448e5b272b943e07ebf21dca5e647a9139217d3f8fb\"" Apr 24 23:56:37.387456 containerd[1846]: time="2026-04-24T23:56:37.387364143Z" level=info msg="CreateContainer within sandbox \"3d7ffb2d6e4370ee5f3c56e66b8e2b6698fac54ed8e3979c4d8e5bc07b0b0492\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"872948a870fc0e49e9917ab4d7ef74d5d0ef5ee7d82c430205d41c43417a1f93\"" Apr 24 23:56:37.388665 containerd[1846]: time="2026-04-24T23:56:37.387877251Z" level=info msg="StartContainer for \"83776075bc7723fdf7ee4448e5b272b943e07ebf21dca5e647a9139217d3f8fb\"" Apr 24 23:56:37.393223 containerd[1846]: time="2026-04-24T23:56:37.393188539Z" level=info msg="CreateContainer within sandbox \"02c052c7c1ffbd3f5c70e0096ad2a30ef5186c5a5456ce0ff1ccc9fd1feb306b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"da100aef95eb55bbe1394d5175a60c0e1d3c99c1ccf84d374caf7535c536a744\"" Apr 24 23:56:37.393579 
containerd[1846]: time="2026-04-24T23:56:37.393541444Z" level=info msg="StartContainer for \"872948a870fc0e49e9917ab4d7ef74d5d0ef5ee7d82c430205d41c43417a1f93\"" Apr 24 23:56:37.398942 containerd[1846]: time="2026-04-24T23:56:37.398906232Z" level=info msg="StartContainer for \"da100aef95eb55bbe1394d5175a60c0e1d3c99c1ccf84d374caf7535c536a744\"" Apr 24 23:56:37.498213 containerd[1846]: time="2026-04-24T23:56:37.498067560Z" level=info msg="StartContainer for \"83776075bc7723fdf7ee4448e5b272b943e07ebf21dca5e647a9139217d3f8fb\" returns successfully" Apr 24 23:56:37.532013 containerd[1846]: time="2026-04-24T23:56:37.531953216Z" level=info msg="StartContainer for \"da100aef95eb55bbe1394d5175a60c0e1d3c99c1ccf84d374caf7535c536a744\" returns successfully" Apr 24 23:56:37.553765 containerd[1846]: time="2026-04-24T23:56:37.553711173Z" level=info msg="StartContainer for \"872948a870fc0e49e9917ab4d7ef74d5d0ef5ee7d82c430205d41c43417a1f93\" returns successfully" Apr 24 23:56:37.695782 kubelet[3060]: E0424 23:56:37.693491 3060 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-bfbb2fd0ff\" not found" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:37.716800 kubelet[3060]: E0424 23:56:37.712106 3060 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-bfbb2fd0ff\" not found" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:37.726798 kubelet[3060]: E0424 23:56:37.726291 3060 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-bfbb2fd0ff\" not found" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:38.719695 kubelet[3060]: E0424 23:56:38.719658 3060 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-bfbb2fd0ff\" not found" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:38.723765 kubelet[3060]: E0424 23:56:38.722028 
3060 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-bfbb2fd0ff\" not found" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:38.816588 kubelet[3060]: I0424 23:56:38.815878 3060 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:38.895792 kubelet[3060]: E0424 23:56:38.895524 3060 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-bfbb2fd0ff\" not found" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:38.992049 kubelet[3060]: E0424 23:56:38.991551 3060 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-bfbb2fd0ff\" not found" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:39.136770 kubelet[3060]: I0424 23:56:39.136437 3060 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:39.136770 kubelet[3060]: E0424 23:56:39.136480 3060 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-bfbb2fd0ff\": node \"ci-4081.3.6-n-bfbb2fd0ff\" not found" Apr 24 23:56:39.139778 kubelet[3060]: I0424 23:56:39.139735 3060 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:39.236090 kubelet[3060]: E0424 23:56:39.236044 3060 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:39.236241 kubelet[3060]: I0424 23:56:39.236130 3060 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:39.240479 kubelet[3060]: E0424 23:56:39.240449 3060 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:39.240479 kubelet[3060]: I0424 23:56:39.240475 3060 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:39.241815 kubelet[3060]: E0424 23:56:39.241787 3060 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-bfbb2fd0ff\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:39.624194 kubelet[3060]: I0424 23:56:39.622307 3060 apiserver.go:52] "Watching apiserver" Apr 24 23:56:39.645165 kubelet[3060]: I0424 23:56:39.645091 3060 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:56:40.416022 kubelet[3060]: I0424 23:56:40.415988 3060 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:40.424587 kubelet[3060]: I0424 23:56:40.424301 3060 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 23:56:41.379293 systemd[1]: Reloading requested from client PID 3345 ('systemctl') (unit session-9.scope)... Apr 24 23:56:41.379310 systemd[1]: Reloading... Apr 24 23:56:41.469802 zram_generator::config[3386]: No configuration found. Apr 24 23:56:41.603935 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:56:41.691906 systemd[1]: Reloading finished in 312 ms. Apr 24 23:56:41.725800 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 24 23:56:41.743263 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 23:56:41.743646 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:56:41.751111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:56:42.017944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:56:42.034154 (kubelet)[3462]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:56:42.075771 kubelet[3462]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:56:42.075771 kubelet[3462]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 23:56:42.075771 kubelet[3462]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 23:56:42.075771 kubelet[3462]: I0424 23:56:42.074849 3462 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:56:42.082622 kubelet[3462]: I0424 23:56:42.082583 3462 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 24 23:56:42.082872 kubelet[3462]: I0424 23:56:42.082858 3462 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:56:42.083427 kubelet[3462]: I0424 23:56:42.083408 3462 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:56:42.085157 kubelet[3462]: I0424 23:56:42.085132 3462 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 23:56:42.087824 kubelet[3462]: I0424 23:56:42.087155 3462 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:56:42.090459 kubelet[3462]: E0424 23:56:42.090414 3462 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:56:42.090553 kubelet[3462]: I0424 23:56:42.090462 3462 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 24 23:56:42.094464 kubelet[3462]: I0424 23:56:42.094435 3462 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 24 23:56:42.094962 kubelet[3462]: I0424 23:56:42.094921 3462 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:56:42.095126 kubelet[3462]: I0424 23:56:42.094956 3462 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-bfbb2fd0ff","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 24 23:56:42.095247 kubelet[3462]: I0424 23:56:42.095131 3462 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 
23:56:42.095247 kubelet[3462]: I0424 23:56:42.095145 3462 container_manager_linux.go:303] "Creating device plugin manager" Apr 24 23:56:42.095247 kubelet[3462]: I0424 23:56:42.095199 3462 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:56:42.095392 kubelet[3462]: I0424 23:56:42.095364 3462 kubelet.go:480] "Attempting to sync node with API server" Apr 24 23:56:42.095392 kubelet[3462]: I0424 23:56:42.095382 3462 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:56:42.095545 kubelet[3462]: I0424 23:56:42.095412 3462 kubelet.go:386] "Adding apiserver pod source" Apr 24 23:56:42.095545 kubelet[3462]: I0424 23:56:42.095432 3462 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:56:42.100767 kubelet[3462]: I0424 23:56:42.099720 3462 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:56:42.100767 kubelet[3462]: I0424 23:56:42.100323 3462 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:56:42.104442 kubelet[3462]: I0424 23:56:42.104428 3462 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 23:56:42.104595 kubelet[3462]: I0424 23:56:42.104586 3462 server.go:1289] "Started kubelet" Apr 24 23:56:42.108116 kubelet[3462]: I0424 23:56:42.108100 3462 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 23:56:42.119334 kubelet[3462]: I0424 23:56:42.119299 3462 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:56:42.120416 kubelet[3462]: I0424 23:56:42.120389 3462 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:56:42.125042 kubelet[3462]: I0424 23:56:42.124598 3462 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 
23:56:42.126588 kubelet[3462]: I0424 23:56:42.126573 3462 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 23:56:42.128711 kubelet[3462]: I0424 23:56:42.127502 3462 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 23:56:42.128973 kubelet[3462]: I0424 23:56:42.128960 3462 reconciler.go:26] "Reconciler: start to sync state" Apr 24 23:56:42.137152 kubelet[3462]: I0424 23:56:42.134423 3462 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:56:42.137236 kubelet[3462]: I0424 23:56:42.137228 3462 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:56:42.142055 kubelet[3462]: I0424 23:56:42.142023 3462 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 23:56:42.142755 kubelet[3462]: I0424 23:56:42.142501 3462 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:56:42.142898 kubelet[3462]: I0424 23:56:42.142731 3462 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:56:42.144448 kubelet[3462]: I0424 23:56:42.144422 3462 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 24 23:56:42.144448 kubelet[3462]: I0424 23:56:42.144444 3462 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 23:56:42.144594 kubelet[3462]: I0424 23:56:42.144467 3462 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 24 23:56:42.144594 kubelet[3462]: I0424 23:56:42.144476 3462 kubelet.go:2436] "Starting kubelet main sync loop" Apr 24 23:56:42.144594 kubelet[3462]: E0424 23:56:42.144515 3462 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:56:42.147762 kubelet[3462]: E0424 23:56:42.147706 3462 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:56:42.149763 kubelet[3462]: I0424 23:56:42.149692 3462 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:56:42.218725 kubelet[3462]: I0424 23:56:42.218364 3462 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:56:42.218725 kubelet[3462]: I0424 23:56:42.218385 3462 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:56:42.218725 kubelet[3462]: I0424 23:56:42.218410 3462 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:56:42.218725 kubelet[3462]: I0424 23:56:42.218529 3462 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 24 23:56:42.218725 kubelet[3462]: I0424 23:56:42.218538 3462 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 24 23:56:42.218725 kubelet[3462]: I0424 23:56:42.218553 3462 policy_none.go:49] "None policy: Start" Apr 24 23:56:42.218725 kubelet[3462]: I0424 23:56:42.218562 3462 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 23:56:42.218725 kubelet[3462]: I0424 23:56:42.218570 3462 state_mem.go:35] "Initializing new in-memory state store" Apr 24 23:56:42.218725 kubelet[3462]: I0424 23:56:42.218640 3462 state_mem.go:75] "Updated machine memory state" Apr 24 23:56:42.220840 kubelet[3462]: E0424 23:56:42.219733 3462 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:56:42.220840 
kubelet[3462]: I0424 23:56:42.219940 3462 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:56:42.220840 kubelet[3462]: I0424 23:56:42.219956 3462 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:56:42.220840 kubelet[3462]: I0424 23:56:42.220797 3462 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:56:42.222882 kubelet[3462]: E0424 23:56:42.222844 3462 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:56:42.247763 kubelet[3462]: I0424 23:56:42.245395 3462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:42.247763 kubelet[3462]: I0424 23:56:42.245799 3462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:42.247763 kubelet[3462]: I0424 23:56:42.246057 3462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:42.254341 kubelet[3462]: I0424 23:56:42.254296 3462 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 23:56:42.258072 kubelet[3462]: I0424 23:56:42.258034 3462 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 23:56:42.258301 kubelet[3462]: I0424 23:56:42.258035 3462 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 23:56:42.258301 kubelet[3462]: E0424 23:56:42.258193 3462 kubelet.go:3311] "Failed creating a mirror pod" 
err="pods \"kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:42.329320 kubelet[3462]: I0424 23:56:42.329262 3462 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:42.332004 kubelet[3462]: I0424 23:56:42.331974 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/173353cda242ca7ee123e6f4f3d037c8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"173353cda242ca7ee123e6f4f3d037c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:42.332210 kubelet[3462]: I0424 23:56:42.332016 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/173353cda242ca7ee123e6f4f3d037c8-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"173353cda242ca7ee123e6f4f3d037c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:42.332210 kubelet[3462]: I0424 23:56:42.332085 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/173353cda242ca7ee123e6f4f3d037c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"173353cda242ca7ee123e6f4f3d037c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:42.332210 kubelet[3462]: I0424 23:56:42.332118 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45d937bc6eafb11082488d12fd37e3fb-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"45d937bc6eafb11082488d12fd37e3fb\") " 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:42.332210 kubelet[3462]: I0424 23:56:42.332143 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45d937bc6eafb11082488d12fd37e3fb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"45d937bc6eafb11082488d12fd37e3fb\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:42.332210 kubelet[3462]: I0424 23:56:42.332164 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/173353cda242ca7ee123e6f4f3d037c8-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"173353cda242ca7ee123e6f4f3d037c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:42.332424 kubelet[3462]: I0424 23:56:42.332184 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/173353cda242ca7ee123e6f4f3d037c8-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"173353cda242ca7ee123e6f4f3d037c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:42.332424 kubelet[3462]: I0424 23:56:42.332205 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe89b29e2d93816c2cbdf4eb288deea5-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"fe89b29e2d93816c2cbdf4eb288deea5\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:42.332424 kubelet[3462]: I0424 23:56:42.332226 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/45d937bc6eafb11082488d12fd37e3fb-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff\" (UID: \"45d937bc6eafb11082488d12fd37e3fb\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:42.341843 kubelet[3462]: I0424 23:56:42.341814 3462 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:42.341938 kubelet[3462]: I0424 23:56:42.341888 3462 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:43.102895 kubelet[3462]: I0424 23:56:43.102837 3462 apiserver.go:52] "Watching apiserver" Apr 24 23:56:43.129325 kubelet[3462]: I0424 23:56:43.129285 3462 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:56:43.191776 kubelet[3462]: I0424 23:56:43.191692 3462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:43.192036 kubelet[3462]: I0424 23:56:43.192001 3462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:43.204698 kubelet[3462]: I0424 23:56:43.204669 3462 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 23:56:43.205024 kubelet[3462]: E0424 23:56:43.204730 3462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-bfbb2fd0ff\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:43.205024 kubelet[3462]: I0424 23:56:43.204941 3462 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 23:56:43.205024 kubelet[3462]: E0424 23:56:43.204982 3462 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:56:43.241161 kubelet[3462]: I0424 23:56:43.239253 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-bfbb2fd0ff" podStartSLOduration=1.239233695 podStartE2EDuration="1.239233695s" podCreationTimestamp="2026-04-24 23:56:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:56:43.222068214 +0000 UTC m=+1.183195722" watchObservedRunningTime="2026-04-24 23:56:43.239233695 +0000 UTC m=+1.200361303" Apr 24 23:56:43.241161 kubelet[3462]: I0424 23:56:43.239380 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-bfbb2fd0ff" podStartSLOduration=3.239373598 podStartE2EDuration="3.239373598s" podCreationTimestamp="2026-04-24 23:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:56:43.238971691 +0000 UTC m=+1.200099199" watchObservedRunningTime="2026-04-24 23:56:43.239373598 +0000 UTC m=+1.200501206" Apr 24 23:56:43.270852 kubelet[3462]: I0424 23:56:43.270454 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-bfbb2fd0ff" podStartSLOduration=1.270416307 podStartE2EDuration="1.270416307s" podCreationTimestamp="2026-04-24 23:56:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:56:43.257446494 +0000 UTC m=+1.218574002" watchObservedRunningTime="2026-04-24 23:56:43.270416307 +0000 UTC m=+1.231543915" Apr 24 23:56:47.926235 kubelet[3462]: I0424 23:56:47.926193 3462 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" 
CIDR="192.168.0.0/24" Apr 24 23:56:47.926972 containerd[1846]: time="2026-04-24T23:56:47.926775715Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 24 23:56:47.927841 kubelet[3462]: I0424 23:56:47.927076 3462 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 23:56:48.975235 kubelet[3462]: I0424 23:56:48.975193 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5lkm\" (UniqueName: \"kubernetes.io/projected/3a73f036-c8f0-4da7-a7d5-1a54775f36a5-kube-api-access-v5lkm\") pod \"kube-proxy-sn9pv\" (UID: \"3a73f036-c8f0-4da7-a7d5-1a54775f36a5\") " pod="kube-system/kube-proxy-sn9pv" Apr 24 23:56:48.975235 kubelet[3462]: I0424 23:56:48.975234 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a73f036-c8f0-4da7-a7d5-1a54775f36a5-kube-proxy\") pod \"kube-proxy-sn9pv\" (UID: \"3a73f036-c8f0-4da7-a7d5-1a54775f36a5\") " pod="kube-system/kube-proxy-sn9pv" Apr 24 23:56:48.975791 kubelet[3462]: I0424 23:56:48.975257 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a73f036-c8f0-4da7-a7d5-1a54775f36a5-xtables-lock\") pod \"kube-proxy-sn9pv\" (UID: \"3a73f036-c8f0-4da7-a7d5-1a54775f36a5\") " pod="kube-system/kube-proxy-sn9pv" Apr 24 23:56:48.975791 kubelet[3462]: I0424 23:56:48.975278 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a73f036-c8f0-4da7-a7d5-1a54775f36a5-lib-modules\") pod \"kube-proxy-sn9pv\" (UID: \"3a73f036-c8f0-4da7-a7d5-1a54775f36a5\") " pod="kube-system/kube-proxy-sn9pv" Apr 24 23:56:49.176765 kubelet[3462]: I0424 23:56:49.176712 3462 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptscv\" (UniqueName: \"kubernetes.io/projected/510e007e-b01d-4fa1-8c47-78555fc94ac0-kube-api-access-ptscv\") pod \"tigera-operator-6bf85f8dd-lgmfb\" (UID: \"510e007e-b01d-4fa1-8c47-78555fc94ac0\") " pod="tigera-operator/tigera-operator-6bf85f8dd-lgmfb" Apr 24 23:56:49.176897 kubelet[3462]: I0424 23:56:49.176783 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/510e007e-b01d-4fa1-8c47-78555fc94ac0-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-lgmfb\" (UID: \"510e007e-b01d-4fa1-8c47-78555fc94ac0\") " pod="tigera-operator/tigera-operator-6bf85f8dd-lgmfb" Apr 24 23:56:49.269968 containerd[1846]: time="2026-04-24T23:56:49.269848556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sn9pv,Uid:3a73f036-c8f0-4da7-a7d5-1a54775f36a5,Namespace:kube-system,Attempt:0,}" Apr 24 23:56:49.315887 containerd[1846]: time="2026-04-24T23:56:49.315452494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:56:49.315887 containerd[1846]: time="2026-04-24T23:56:49.315523895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:56:49.315887 containerd[1846]: time="2026-04-24T23:56:49.315551696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:49.315887 containerd[1846]: time="2026-04-24T23:56:49.315662797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:49.359478 containerd[1846]: time="2026-04-24T23:56:49.359433406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sn9pv,Uid:3a73f036-c8f0-4da7-a7d5-1a54775f36a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd99ae3f969e5acd07cfc78ea63dc4444089be61092b746f65eecc1b81aa6c2c\"" Apr 24 23:56:49.369012 containerd[1846]: time="2026-04-24T23:56:49.368867959Z" level=info msg="CreateContainer within sandbox \"cd99ae3f969e5acd07cfc78ea63dc4444089be61092b746f65eecc1b81aa6c2c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 24 23:56:49.398370 containerd[1846]: time="2026-04-24T23:56:49.398335336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-lgmfb,Uid:510e007e-b01d-4fa1-8c47-78555fc94ac0,Namespace:tigera-operator,Attempt:0,}" Apr 24 23:56:49.399412 containerd[1846]: time="2026-04-24T23:56:49.399375953Z" level=info msg="CreateContainer within sandbox \"cd99ae3f969e5acd07cfc78ea63dc4444089be61092b746f65eecc1b81aa6c2c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dec8a1bba69d99dfbe12e621bc5be995d7962e0e6efb18a239618ac7c21bb640\"" Apr 24 23:56:49.399968 containerd[1846]: time="2026-04-24T23:56:49.399937462Z" level=info msg="StartContainer for \"dec8a1bba69d99dfbe12e621bc5be995d7962e0e6efb18a239618ac7c21bb640\"" Apr 24 23:56:49.453763 containerd[1846]: time="2026-04-24T23:56:49.453625631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:56:49.454062 containerd[1846]: time="2026-04-24T23:56:49.453699632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:56:49.454062 containerd[1846]: time="2026-04-24T23:56:49.453771733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:49.454062 containerd[1846]: time="2026-04-24T23:56:49.453880735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:56:49.478518 containerd[1846]: time="2026-04-24T23:56:49.478068426Z" level=info msg="StartContainer for \"dec8a1bba69d99dfbe12e621bc5be995d7962e0e6efb18a239618ac7c21bb640\" returns successfully" Apr 24 23:56:49.540935 containerd[1846]: time="2026-04-24T23:56:49.540769141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-lgmfb,Uid:510e007e-b01d-4fa1-8c47-78555fc94ac0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7cd2eed61a2d74b575dd9d6ee930fd5d6aafbf3bab5cbe4f58300127e2636440\"" Apr 24 23:56:49.545190 containerd[1846]: time="2026-04-24T23:56:49.545150612Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 24 23:56:50.109963 systemd[1]: run-containerd-runc-k8s.io-cd99ae3f969e5acd07cfc78ea63dc4444089be61092b746f65eecc1b81aa6c2c-runc.H9AGZH.mount: Deactivated successfully. Apr 24 23:56:50.833508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2476572403.mount: Deactivated successfully. 
Apr 24 23:56:52.164927 containerd[1846]: time="2026-04-24T23:56:52.164873758Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:52.167445 containerd[1846]: time="2026-04-24T23:56:52.167286691Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 24 23:56:52.170399 containerd[1846]: time="2026-04-24T23:56:52.170124330Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:52.174323 containerd[1846]: time="2026-04-24T23:56:52.174118584Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:56:52.174860 containerd[1846]: time="2026-04-24T23:56:52.174824994Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.629628681s" Apr 24 23:56:52.174955 containerd[1846]: time="2026-04-24T23:56:52.174865594Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 24 23:56:52.182507 containerd[1846]: time="2026-04-24T23:56:52.182475098Z" level=info msg="CreateContainer within sandbox \"7cd2eed61a2d74b575dd9d6ee930fd5d6aafbf3bab5cbe4f58300127e2636440\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 24 23:56:52.214016 containerd[1846]: time="2026-04-24T23:56:52.213975129Z" level=info msg="CreateContainer within sandbox 
\"7cd2eed61a2d74b575dd9d6ee930fd5d6aafbf3bab5cbe4f58300127e2636440\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bf6dab9b5c9001388cc6571c8959ce946aada97323cb4533e74826f57e32569a\"" Apr 24 23:56:52.214582 containerd[1846]: time="2026-04-24T23:56:52.214542237Z" level=info msg="StartContainer for \"bf6dab9b5c9001388cc6571c8959ce946aada97323cb4533e74826f57e32569a\"" Apr 24 23:56:52.254328 systemd[1]: run-containerd-runc-k8s.io-bf6dab9b5c9001388cc6571c8959ce946aada97323cb4533e74826f57e32569a-runc.ju06Dn.mount: Deactivated successfully. Apr 24 23:56:52.296281 containerd[1846]: time="2026-04-24T23:56:52.295751046Z" level=info msg="StartContainer for \"bf6dab9b5c9001388cc6571c8959ce946aada97323cb4533e74826f57e32569a\" returns successfully" Apr 24 23:56:53.235761 kubelet[3462]: I0424 23:56:53.235673 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sn9pv" podStartSLOduration=5.235654391 podStartE2EDuration="5.235654391s" podCreationTimestamp="2026-04-24 23:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:56:50.223514793 +0000 UTC m=+8.184642401" watchObservedRunningTime="2026-04-24 23:56:53.235654391 +0000 UTC m=+11.196781899" Apr 24 23:56:53.236259 kubelet[3462]: I0424 23:56:53.235846 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-lgmfb" podStartSLOduration=1.6031154650000001 podStartE2EDuration="4.235836293s" podCreationTimestamp="2026-04-24 23:56:49 +0000 UTC" firstStartedPulling="2026-04-24 23:56:49.543267782 +0000 UTC m=+7.504395290" lastFinishedPulling="2026-04-24 23:56:52.17598861 +0000 UTC m=+10.137116118" observedRunningTime="2026-04-24 23:56:53.23561809 +0000 UTC m=+11.196745698" watchObservedRunningTime="2026-04-24 23:56:53.235836293 +0000 UTC m=+11.196963801" Apr 24 23:56:58.830535 sudo[2530]: 
pam_unix(sudo:session): session closed for user root Apr 24 23:56:58.849257 sshd[2526]: pam_unix(sshd:session): session closed for user core Apr 24 23:56:58.858463 systemd[1]: sshd@6-10.0.0.31:22-4.175.71.9:37908.service: Deactivated successfully. Apr 24 23:56:58.870948 systemd-logind[1810]: Session 9 logged out. Waiting for processes to exit. Apr 24 23:56:58.872909 systemd[1]: session-9.scope: Deactivated successfully. Apr 24 23:56:58.874765 systemd-logind[1810]: Removed session 9. Apr 24 23:57:02.669769 kubelet[3462]: I0424 23:57:02.669498 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c62c7b1-f499-463a-a280-49aa0759e340-tigera-ca-bundle\") pod \"calico-typha-96cdbc97-spckw\" (UID: \"2c62c7b1-f499-463a-a280-49aa0759e340\") " pod="calico-system/calico-typha-96cdbc97-spckw" Apr 24 23:57:02.669769 kubelet[3462]: I0424 23:57:02.669571 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2c62c7b1-f499-463a-a280-49aa0759e340-typha-certs\") pod \"calico-typha-96cdbc97-spckw\" (UID: \"2c62c7b1-f499-463a-a280-49aa0759e340\") " pod="calico-system/calico-typha-96cdbc97-spckw" Apr 24 23:57:02.669769 kubelet[3462]: I0424 23:57:02.669609 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzbtx\" (UniqueName: \"kubernetes.io/projected/2c62c7b1-f499-463a-a280-49aa0759e340-kube-api-access-qzbtx\") pod \"calico-typha-96cdbc97-spckw\" (UID: \"2c62c7b1-f499-463a-a280-49aa0759e340\") " pod="calico-system/calico-typha-96cdbc97-spckw" Apr 24 23:57:02.871105 kubelet[3462]: I0424 23:57:02.870990 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d90c53c0-f7c2-4ab4-ad89-5348e22c934f-lib-modules\") pod 
\"calico-node-2p7l8\" (UID: \"d90c53c0-f7c2-4ab4-ad89-5348e22c934f\") " pod="calico-system/calico-node-2p7l8" Apr 24 23:57:02.871105 kubelet[3462]: I0424 23:57:02.871050 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d90c53c0-f7c2-4ab4-ad89-5348e22c934f-xtables-lock\") pod \"calico-node-2p7l8\" (UID: \"d90c53c0-f7c2-4ab4-ad89-5348e22c934f\") " pod="calico-system/calico-node-2p7l8" Apr 24 23:57:02.871105 kubelet[3462]: I0424 23:57:02.871075 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d90c53c0-f7c2-4ab4-ad89-5348e22c934f-cni-net-dir\") pod \"calico-node-2p7l8\" (UID: \"d90c53c0-f7c2-4ab4-ad89-5348e22c934f\") " pod="calico-system/calico-node-2p7l8" Apr 24 23:57:02.871105 kubelet[3462]: I0424 23:57:02.871096 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d90c53c0-f7c2-4ab4-ad89-5348e22c934f-flexvol-driver-host\") pod \"calico-node-2p7l8\" (UID: \"d90c53c0-f7c2-4ab4-ad89-5348e22c934f\") " pod="calico-system/calico-node-2p7l8" Apr 24 23:57:02.871400 kubelet[3462]: I0424 23:57:02.871122 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/d90c53c0-f7c2-4ab4-ad89-5348e22c934f-nodeproc\") pod \"calico-node-2p7l8\" (UID: \"d90c53c0-f7c2-4ab4-ad89-5348e22c934f\") " pod="calico-system/calico-node-2p7l8" Apr 24 23:57:02.871400 kubelet[3462]: I0424 23:57:02.871144 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/d90c53c0-f7c2-4ab4-ad89-5348e22c934f-bpffs\") pod \"calico-node-2p7l8\" (UID: \"d90c53c0-f7c2-4ab4-ad89-5348e22c934f\") " 
pod="calico-system/calico-node-2p7l8" Apr 24 23:57:02.871400 kubelet[3462]: I0424 23:57:02.871163 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d90c53c0-f7c2-4ab4-ad89-5348e22c934f-cni-log-dir\") pod \"calico-node-2p7l8\" (UID: \"d90c53c0-f7c2-4ab4-ad89-5348e22c934f\") " pod="calico-system/calico-node-2p7l8" Apr 24 23:57:02.871400 kubelet[3462]: I0424 23:57:02.871182 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d90c53c0-f7c2-4ab4-ad89-5348e22c934f-node-certs\") pod \"calico-node-2p7l8\" (UID: \"d90c53c0-f7c2-4ab4-ad89-5348e22c934f\") " pod="calico-system/calico-node-2p7l8" Apr 24 23:57:02.871400 kubelet[3462]: I0424 23:57:02.871204 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/d90c53c0-f7c2-4ab4-ad89-5348e22c934f-sys-fs\") pod \"calico-node-2p7l8\" (UID: \"d90c53c0-f7c2-4ab4-ad89-5348e22c934f\") " pod="calico-system/calico-node-2p7l8" Apr 24 23:57:02.871400 kubelet[3462]: I0424 23:57:02.871222 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d90c53c0-f7c2-4ab4-ad89-5348e22c934f-tigera-ca-bundle\") pod \"calico-node-2p7l8\" (UID: \"d90c53c0-f7c2-4ab4-ad89-5348e22c934f\") " pod="calico-system/calico-node-2p7l8" Apr 24 23:57:02.871596 kubelet[3462]: I0424 23:57:02.871245 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d90c53c0-f7c2-4ab4-ad89-5348e22c934f-var-lib-calico\") pod \"calico-node-2p7l8\" (UID: \"d90c53c0-f7c2-4ab4-ad89-5348e22c934f\") " pod="calico-system/calico-node-2p7l8" Apr 24 23:57:02.871596 kubelet[3462]: I0424 
23:57:02.871267 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhz57\" (UniqueName: \"kubernetes.io/projected/d90c53c0-f7c2-4ab4-ad89-5348e22c934f-kube-api-access-jhz57\") pod \"calico-node-2p7l8\" (UID: \"d90c53c0-f7c2-4ab4-ad89-5348e22c934f\") " pod="calico-system/calico-node-2p7l8" Apr 24 23:57:02.871596 kubelet[3462]: I0424 23:57:02.871293 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d90c53c0-f7c2-4ab4-ad89-5348e22c934f-var-run-calico\") pod \"calico-node-2p7l8\" (UID: \"d90c53c0-f7c2-4ab4-ad89-5348e22c934f\") " pod="calico-system/calico-node-2p7l8" Apr 24 23:57:02.871596 kubelet[3462]: I0424 23:57:02.871317 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d90c53c0-f7c2-4ab4-ad89-5348e22c934f-cni-bin-dir\") pod \"calico-node-2p7l8\" (UID: \"d90c53c0-f7c2-4ab4-ad89-5348e22c934f\") " pod="calico-system/calico-node-2p7l8" Apr 24 23:57:02.871596 kubelet[3462]: I0424 23:57:02.871345 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d90c53c0-f7c2-4ab4-ad89-5348e22c934f-policysync\") pod \"calico-node-2p7l8\" (UID: \"d90c53c0-f7c2-4ab4-ad89-5348e22c934f\") " pod="calico-system/calico-node-2p7l8" Apr 24 23:57:02.893016 kubelet[3462]: E0424 23:57:02.892961 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517" Apr 24 23:57:02.950994 containerd[1846]: time="2026-04-24T23:57:02.950859965Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-96cdbc97-spckw,Uid:2c62c7b1-f499-463a-a280-49aa0759e340,Namespace:calico-system,Attempt:0,}" Apr 24 23:57:02.971575 kubelet[3462]: I0424 23:57:02.971526 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c5fd00b-3814-4bd0-8192-1d2f719f9517-kubelet-dir\") pod \"csi-node-driver-vghcg\" (UID: \"8c5fd00b-3814-4bd0-8192-1d2f719f9517\") " pod="calico-system/csi-node-driver-vghcg" Apr 24 23:57:02.972322 kubelet[3462]: I0424 23:57:02.972299 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-759jl\" (UniqueName: \"kubernetes.io/projected/8c5fd00b-3814-4bd0-8192-1d2f719f9517-kube-api-access-759jl\") pod \"csi-node-driver-vghcg\" (UID: \"8c5fd00b-3814-4bd0-8192-1d2f719f9517\") " pod="calico-system/csi-node-driver-vghcg" Apr 24 23:57:02.972426 kubelet[3462]: I0424 23:57:02.972337 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8c5fd00b-3814-4bd0-8192-1d2f719f9517-socket-dir\") pod \"csi-node-driver-vghcg\" (UID: \"8c5fd00b-3814-4bd0-8192-1d2f719f9517\") " pod="calico-system/csi-node-driver-vghcg" Apr 24 23:57:02.972426 kubelet[3462]: I0424 23:57:02.972360 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8c5fd00b-3814-4bd0-8192-1d2f719f9517-varrun\") pod \"csi-node-driver-vghcg\" (UID: \"8c5fd00b-3814-4bd0-8192-1d2f719f9517\") " pod="calico-system/csi-node-driver-vghcg" Apr 24 23:57:02.972426 kubelet[3462]: I0424 23:57:02.972398 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8c5fd00b-3814-4bd0-8192-1d2f719f9517-registration-dir\") pod 
\"csi-node-driver-vghcg\" (UID: \"8c5fd00b-3814-4bd0-8192-1d2f719f9517\") " pod="calico-system/csi-node-driver-vghcg" Apr 24 23:57:02.975813 kubelet[3462]: E0424 23:57:02.975786 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:02.975813 kubelet[3462]: W0424 23:57:02.975807 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:02.975957 kubelet[3462]: E0424 23:57:02.975828 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:02.978757 kubelet[3462]: E0424 23:57:02.976874 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:02.978757 kubelet[3462]: W0424 23:57:02.976892 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:02.978757 kubelet[3462]: E0424 23:57:02.976909 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:02.983430 kubelet[3462]: E0424 23:57:02.981311 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:02.983430 kubelet[3462]: W0424 23:57:02.981326 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:02.983430 kubelet[3462]: E0424 23:57:02.981364 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:02.983430 kubelet[3462]: E0424 23:57:02.981919 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:02.983430 kubelet[3462]: W0424 23:57:02.981932 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:02.983430 kubelet[3462]: E0424 23:57:02.981945 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:02.983430 kubelet[3462]: E0424 23:57:02.983015 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:02.983430 kubelet[3462]: W0424 23:57:02.983124 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:02.983430 kubelet[3462]: E0424 23:57:02.983137 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:02.984877 kubelet[3462]: E0424 23:57:02.984829 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:02.984877 kubelet[3462]: W0424 23:57:02.984865 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:02.985038 kubelet[3462]: E0424 23:57:02.984886 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:02.985373 kubelet[3462]: E0424 23:57:02.985355 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:02.985373 kubelet[3462]: W0424 23:57:02.985372 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:02.985485 kubelet[3462]: E0424 23:57:02.985386 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:02.988321 kubelet[3462]: E0424 23:57:02.987793 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:02.988321 kubelet[3462]: W0424 23:57:02.987809 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:02.988321 kubelet[3462]: E0424 23:57:02.987822 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 24 23:57:02.988845 kubelet[3462]: E0424 23:57:02.988638 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:02.988845 kubelet[3462]: W0424 23:57:02.988653 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:02.988845 kubelet[3462]: E0424 23:57:02.988667 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:02.991252 kubelet[3462]: E0424 23:57:02.991226 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:02.993358 kubelet[3462]: W0424 23:57:02.992847 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:02.993358 kubelet[3462]: E0424 23:57:02.992876 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:02.995525 kubelet[3462]: E0424 23:57:02.993551 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:02.995525 kubelet[3462]: W0424 23:57:02.993563 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:02.999451 kubelet[3462]: E0424 23:57:02.995655 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.000694 kubelet[3462]: E0424 23:57:03.000594 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.000694 kubelet[3462]: W0424 23:57:03.000610 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.000694 kubelet[3462]: E0424 23:57:03.000646 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.001260 kubelet[3462]: E0424 23:57:03.001238 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.001260 kubelet[3462]: W0424 23:57:03.001259 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.001370 kubelet[3462]: E0424 23:57:03.001274 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.001556 kubelet[3462]: E0424 23:57:03.001541 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.001628 kubelet[3462]: W0424 23:57:03.001557 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.001628 kubelet[3462]: E0424 23:57:03.001570 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.021144 containerd[1846]: time="2026-04-24T23:57:03.021060291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:57:03.021144 containerd[1846]: time="2026-04-24T23:57:03.021116892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:57:03.021982 containerd[1846]: time="2026-04-24T23:57:03.021930505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:57:03.022105 containerd[1846]: time="2026-04-24T23:57:03.022047407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:57:03.074014 kubelet[3462]: E0424 23:57:03.073977 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.074014 kubelet[3462]: W0424 23:57:03.074009 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.074234 kubelet[3462]: E0424 23:57:03.074037 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.074573 kubelet[3462]: E0424 23:57:03.074550 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.074769 kubelet[3462]: W0424 23:57:03.074753 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.074853 kubelet[3462]: E0424 23:57:03.074779 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.076759 kubelet[3462]: E0424 23:57:03.075149 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.076759 kubelet[3462]: W0424 23:57:03.075167 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.076759 kubelet[3462]: E0424 23:57:03.075185 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.076759 kubelet[3462]: E0424 23:57:03.075432 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.076759 kubelet[3462]: W0424 23:57:03.075447 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.076759 kubelet[3462]: E0424 23:57:03.075471 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.076759 kubelet[3462]: E0424 23:57:03.075697 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.076759 kubelet[3462]: W0424 23:57:03.075708 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.076759 kubelet[3462]: E0424 23:57:03.075735 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.076759 kubelet[3462]: E0424 23:57:03.076666 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.077209 kubelet[3462]: W0424 23:57:03.076695 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.077209 kubelet[3462]: E0424 23:57:03.076715 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.078679 kubelet[3462]: E0424 23:57:03.078650 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.078786 kubelet[3462]: W0424 23:57:03.078683 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.078786 kubelet[3462]: E0424 23:57:03.078698 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.079604 kubelet[3462]: E0424 23:57:03.079571 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.079696 kubelet[3462]: W0424 23:57:03.079633 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.079696 kubelet[3462]: E0424 23:57:03.079650 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.083590 kubelet[3462]: E0424 23:57:03.083569 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.083590 kubelet[3462]: W0424 23:57:03.083588 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.083910 kubelet[3462]: E0424 23:57:03.083603 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.084291 kubelet[3462]: E0424 23:57:03.084272 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.084291 kubelet[3462]: W0424 23:57:03.084289 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.084404 kubelet[3462]: E0424 23:57:03.084317 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.085312 kubelet[3462]: E0424 23:57:03.085283 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.085312 kubelet[3462]: W0424 23:57:03.085311 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.085437 kubelet[3462]: E0424 23:57:03.085325 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.086058 kubelet[3462]: E0424 23:57:03.086042 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.086494 kubelet[3462]: W0424 23:57:03.086476 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.086800 kubelet[3462]: E0424 23:57:03.086784 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.087298 containerd[1846]: time="2026-04-24T23:57:03.087260452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-96cdbc97-spckw,Uid:2c62c7b1-f499-463a-a280-49aa0759e340,Namespace:calico-system,Attempt:0,} returns sandbox id \"eed75c5796304f5040844233caa029c06840fefeba088d1aa6fa7552f35ff18c\""
Apr 24 23:57:03.087799 kubelet[3462]: E0424 23:57:03.087781 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.087799 kubelet[3462]: W0424 23:57:03.087799 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.088042 kubelet[3462]: E0424 23:57:03.087825 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.088207 kubelet[3462]: E0424 23:57:03.088075 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.088207 kubelet[3462]: W0424 23:57:03.088086 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.088207 kubelet[3462]: E0424 23:57:03.088098 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.088466 kubelet[3462]: E0424 23:57:03.088381 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.088466 kubelet[3462]: W0424 23:57:03.088393 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.088466 kubelet[3462]: E0424 23:57:03.088419 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.089160 kubelet[3462]: E0424 23:57:03.088687 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.089160 kubelet[3462]: W0424 23:57:03.088699 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.089160 kubelet[3462]: E0424 23:57:03.088713 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.089671 kubelet[3462]: E0424 23:57:03.089654 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.089671 kubelet[3462]: W0424 23:57:03.089669 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.090086 kubelet[3462]: E0424 23:57:03.089685 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.090931 kubelet[3462]: E0424 23:57:03.090912 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.091520 kubelet[3462]: W0424 23:57:03.091289 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.091520 kubelet[3462]: E0424 23:57:03.091316 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.091795 kubelet[3462]: E0424 23:57:03.091700 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.091795 kubelet[3462]: W0424 23:57:03.091713 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.091795 kubelet[3462]: E0424 23:57:03.091778 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.093025 kubelet[3462]: E0424 23:57:03.092993 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.093025 kubelet[3462]: W0424 23:57:03.093008 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.093149 containerd[1846]: time="2026-04-24T23:57:03.093124946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Apr 24 23:57:03.093799 kubelet[3462]: E0424 23:57:03.093775 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.094595 kubelet[3462]: E0424 23:57:03.094564 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.094707 kubelet[3462]: W0424 23:57:03.094592 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.094808 kubelet[3462]: E0424 23:57:03.094713 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.095117 kubelet[3462]: E0424 23:57:03.095098 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.095117 kubelet[3462]: W0424 23:57:03.095113 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.095244 kubelet[3462]: E0424 23:57:03.095126 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.095478 containerd[1846]: time="2026-04-24T23:57:03.095449483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2p7l8,Uid:d90c53c0-f7c2-4ab4-ad89-5348e22c934f,Namespace:calico-system,Attempt:0,}"
Apr 24 23:57:03.095756 kubelet[3462]: E0424 23:57:03.095716 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.095841 kubelet[3462]: W0424 23:57:03.095751 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.095841 kubelet[3462]: E0424 23:57:03.095772 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.096691 kubelet[3462]: E0424 23:57:03.096644 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.096691 kubelet[3462]: W0424 23:57:03.096679 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.096691 kubelet[3462]: E0424 23:57:03.096693 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.097102 kubelet[3462]: E0424 23:57:03.097043 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.097102 kubelet[3462]: W0424 23:57:03.097055 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.097102 kubelet[3462]: E0424 23:57:03.097068 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.097446 kubelet[3462]: E0424 23:57:03.097428 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:03.097446 kubelet[3462]: W0424 23:57:03.097444 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:03.097544 kubelet[3462]: E0424 23:57:03.097466 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:03.141080 containerd[1846]: time="2026-04-24T23:57:03.140824411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:57:03.141080 containerd[1846]: time="2026-04-24T23:57:03.140909912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:57:03.141080 containerd[1846]: time="2026-04-24T23:57:03.140962213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:57:03.141991 containerd[1846]: time="2026-04-24T23:57:03.141579923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:57:03.183192 containerd[1846]: time="2026-04-24T23:57:03.183147389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2p7l8,Uid:d90c53c0-f7c2-4ab4-ad89-5348e22c934f,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d0fb01161323ac97d1577ee3d48ade11283c62cfcf983b1ee421457c0c7cea1\""
Apr 24 23:57:05.145035 kubelet[3462]: E0424 23:57:05.144979 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517"
Apr 24 23:57:07.145532 kubelet[3462]: E0424 23:57:07.145479 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517"
Apr 24 23:57:09.145094 kubelet[3462]: E0424 23:57:09.145028 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517"
Apr 24 23:57:11.144845 kubelet[3462]: E0424 23:57:11.144784 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517"
Apr 24 23:57:11.725046 systemd[1]: Started sshd@7-10.0.0.31:22-43.160.206.89:47206.service - OpenSSH per-connection server daemon (43.160.206.89:47206).
Apr 24 23:57:11.733790 sshd[3987]: Connection closed by 43.160.206.89 port 47206
Apr 24 23:57:11.734318 systemd[1]: sshd@7-10.0.0.31:22-43.160.206.89:47206.service: Deactivated successfully.
Apr 24 23:57:13.145330 kubelet[3462]: E0424 23:57:13.145270 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517"
Apr 24 23:57:14.391221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791145565.mount: Deactivated successfully.
Apr 24 23:57:15.083988 containerd[1846]: time="2026-04-24T23:57:15.083933389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:57:15.086783 containerd[1846]: time="2026-04-24T23:57:15.086245826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Apr 24 23:57:15.090291 containerd[1846]: time="2026-04-24T23:57:15.089972186Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:57:15.094635 containerd[1846]: time="2026-04-24T23:57:15.094601961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:57:15.095341 containerd[1846]: time="2026-04-24T23:57:15.095303672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 12.002141726s"
Apr 24 23:57:15.095428 containerd[1846]: time="2026-04-24T23:57:15.095345773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 24 23:57:15.097255 containerd[1846]: time="2026-04-24T23:57:15.097226803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 24 23:57:15.132421 containerd[1846]: time="2026-04-24T23:57:15.132378169Z" level=info msg="CreateContainer within sandbox \"eed75c5796304f5040844233caa029c06840fefeba088d1aa6fa7552f35ff18c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 24 23:57:15.145225 kubelet[3462]: E0424 23:57:15.145177 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517"
Apr 24 23:57:15.163977 containerd[1846]: time="2026-04-24T23:57:15.163933977Z" level=info msg="CreateContainer within sandbox \"eed75c5796304f5040844233caa029c06840fefeba088d1aa6fa7552f35ff18c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8d2aa07e021500e991a664238dd3b9853637463a23c61e3fa6a4e8d1f5f419d3\""
Apr 24 23:57:15.164647 containerd[1846]: time="2026-04-24T23:57:15.164590288Z" level=info msg="StartContainer for \"8d2aa07e021500e991a664238dd3b9853637463a23c61e3fa6a4e8d1f5f419d3\""
Apr 24 23:57:15.240017 containerd[1846]: time="2026-04-24T23:57:15.239968101Z" level=info msg="StartContainer for \"8d2aa07e021500e991a664238dd3b9853637463a23c61e3fa6a4e8d1f5f419d3\" returns successfully"
Apr 24 23:57:15.286796 kubelet[3462]: I0424 23:57:15.286671 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-96cdbc97-spckw" podStartSLOduration=1.28222219 podStartE2EDuration="13.286650553s" podCreationTimestamp="2026-04-24 23:57:02 +0000 UTC" firstStartedPulling="2026-04-24 23:57:03.091953427 +0000 UTC m=+21.053081035" lastFinishedPulling="2026-04-24 23:57:15.09638179 +0000 UTC m=+33.057509398" observedRunningTime="2026-04-24 23:57:15.284693121 +0000 UTC m=+33.245820629" watchObservedRunningTime="2026-04-24 23:57:15.286650553 +0000 UTC m=+33.247778061"
Apr 24 23:57:15.344093 kubelet[3462]: E0424 23:57:15.343881 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:15.344237 kubelet[3462]: W0424 23:57:15.344125 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:15.344237 kubelet[3462]: E0424 23:57:15.344166 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:15.346396 kubelet[3462]: E0424 23:57:15.345939 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:15.346396 kubelet[3462]: W0424 23:57:15.345974 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:15.346396 kubelet[3462]: E0424 23:57:15.345994 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:15.346605 kubelet[3462]: E0424 23:57:15.346443 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:15.346605 kubelet[3462]: W0424 23:57:15.346456 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:15.346605 kubelet[3462]: E0424 23:57:15.346472 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:15.348496 kubelet[3462]: E0424 23:57:15.347970 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:15.348496 kubelet[3462]: W0424 23:57:15.348000 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:15.348496 kubelet[3462]: E0424 23:57:15.348016 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:15.356762 kubelet[3462]: E0424 23:57:15.352342 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:15.356762 kubelet[3462]: W0424 23:57:15.352360 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:15.356762 kubelet[3462]: E0424 23:57:15.352375 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:15.357362 kubelet[3462]: E0424 23:57:15.357341 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:15.357362 kubelet[3462]: W0424 23:57:15.357358 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:15.357507 kubelet[3462]: E0424 23:57:15.357373 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:15.359315 kubelet[3462]: E0424 23:57:15.359295 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:15.359315 kubelet[3462]: W0424 23:57:15.359314 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:15.359449 kubelet[3462]: E0424 23:57:15.359332 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:15.364838 kubelet[3462]: E0424 23:57:15.364817 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:15.364838 kubelet[3462]: W0424 23:57:15.364838 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:15.364970 kubelet[3462]: E0424 23:57:15.364853 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:15.365674 kubelet[3462]: E0424 23:57:15.365652 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:15.365674 kubelet[3462]: W0424 23:57:15.365672 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:15.365823 kubelet[3462]: E0424 23:57:15.365688 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:15.368919 kubelet[3462]: E0424 23:57:15.368897 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:15.369014 kubelet[3462]: W0424 23:57:15.368940 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:15.369014 kubelet[3462]: E0424 23:57:15.368959 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:57:15.370883 kubelet[3462]: E0424 23:57:15.370864 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:57:15.370883 kubelet[3462]: W0424 23:57:15.370882 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:57:15.371032 kubelet[3462]: E0424 23:57:15.370898 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 24 23:57:15.371841 kubelet[3462]: E0424 23:57:15.371820 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.371841 kubelet[3462]: W0424 23:57:15.371839 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.371975 kubelet[3462]: E0424 23:57:15.371854 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.372761 kubelet[3462]: E0424 23:57:15.372497 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.372761 kubelet[3462]: W0424 23:57:15.372513 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.372761 kubelet[3462]: E0424 23:57:15.372526 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.373883 kubelet[3462]: E0424 23:57:15.373513 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.373883 kubelet[3462]: W0424 23:57:15.373531 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.373883 kubelet[3462]: E0424 23:57:15.373546 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.376000 kubelet[3462]: E0424 23:57:15.375981 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.376764 kubelet[3462]: W0424 23:57:15.376097 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.376764 kubelet[3462]: E0424 23:57:15.376119 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.380429 kubelet[3462]: E0424 23:57:15.379913 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.380429 kubelet[3462]: W0424 23:57:15.379929 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.380429 kubelet[3462]: E0424 23:57:15.379944 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.381988 kubelet[3462]: E0424 23:57:15.381873 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.381988 kubelet[3462]: W0424 23:57:15.381889 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.381988 kubelet[3462]: E0424 23:57:15.381903 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.384785 kubelet[3462]: E0424 23:57:15.382312 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.384785 kubelet[3462]: W0424 23:57:15.382343 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.384785 kubelet[3462]: E0424 23:57:15.382357 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.384785 kubelet[3462]: E0424 23:57:15.382796 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.384785 kubelet[3462]: W0424 23:57:15.382809 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.384785 kubelet[3462]: E0424 23:57:15.382823 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.384785 kubelet[3462]: E0424 23:57:15.383516 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.384785 kubelet[3462]: W0424 23:57:15.383551 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.384785 kubelet[3462]: E0424 23:57:15.383565 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.385997 kubelet[3462]: E0424 23:57:15.385194 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.385997 kubelet[3462]: W0424 23:57:15.385210 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.385997 kubelet[3462]: E0424 23:57:15.385224 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.387192 kubelet[3462]: E0424 23:57:15.386951 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.387192 kubelet[3462]: W0424 23:57:15.386963 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.387192 kubelet[3462]: E0424 23:57:15.386977 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.387648 kubelet[3462]: E0424 23:57:15.387597 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.387648 kubelet[3462]: W0424 23:57:15.387612 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.390605 kubelet[3462]: E0424 23:57:15.387728 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.390605 kubelet[3462]: E0424 23:57:15.388386 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.390605 kubelet[3462]: W0424 23:57:15.388418 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.390605 kubelet[3462]: E0424 23:57:15.388433 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.390605 kubelet[3462]: E0424 23:57:15.389042 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.390605 kubelet[3462]: W0424 23:57:15.389054 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.390605 kubelet[3462]: E0424 23:57:15.389068 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.391538 kubelet[3462]: E0424 23:57:15.390799 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.391538 kubelet[3462]: W0424 23:57:15.390813 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.391538 kubelet[3462]: E0424 23:57:15.390827 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.392038 kubelet[3462]: E0424 23:57:15.391735 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.392038 kubelet[3462]: W0424 23:57:15.391802 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.392038 kubelet[3462]: E0424 23:57:15.391817 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.392477 kubelet[3462]: E0424 23:57:15.392223 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.392477 kubelet[3462]: W0424 23:57:15.392236 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.392477 kubelet[3462]: E0424 23:57:15.392249 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.393151 kubelet[3462]: E0424 23:57:15.392974 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.393151 kubelet[3462]: W0424 23:57:15.392988 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.393151 kubelet[3462]: E0424 23:57:15.393002 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.394710 kubelet[3462]: E0424 23:57:15.393369 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.394710 kubelet[3462]: W0424 23:57:15.393380 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.394710 kubelet[3462]: E0424 23:57:15.394007 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.396616 kubelet[3462]: E0424 23:57:15.395023 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.396616 kubelet[3462]: W0424 23:57:15.395042 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.396616 kubelet[3462]: E0424 23:57:15.395057 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:15.397510 kubelet[3462]: E0424 23:57:15.397495 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.397684 kubelet[3462]: W0424 23:57:15.397614 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.397960 kubelet[3462]: E0424 23:57:15.397934 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:15.400897 kubelet[3462]: E0424 23:57:15.400883 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:15.401050 kubelet[3462]: W0424 23:57:15.400990 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:15.401050 kubelet[3462]: E0424 23:57:15.401023 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.282832 kubelet[3462]: E0424 23:57:16.282792 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.282832 kubelet[3462]: W0424 23:57:16.282826 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.283415 kubelet[3462]: E0424 23:57:16.282848 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.283415 kubelet[3462]: E0424 23:57:16.283116 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.283415 kubelet[3462]: W0424 23:57:16.283129 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.283415 kubelet[3462]: E0424 23:57:16.283142 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.283415 kubelet[3462]: E0424 23:57:16.283359 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.283415 kubelet[3462]: W0424 23:57:16.283382 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.283415 kubelet[3462]: E0424 23:57:16.283396 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.285856 kubelet[3462]: E0424 23:57:16.283641 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.285856 kubelet[3462]: W0424 23:57:16.283653 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.285856 kubelet[3462]: E0424 23:57:16.283665 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.285856 kubelet[3462]: E0424 23:57:16.283898 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.285856 kubelet[3462]: W0424 23:57:16.283914 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.285856 kubelet[3462]: E0424 23:57:16.283926 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.285856 kubelet[3462]: E0424 23:57:16.284136 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.285856 kubelet[3462]: W0424 23:57:16.284147 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.285856 kubelet[3462]: E0424 23:57:16.284170 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.285856 kubelet[3462]: E0424 23:57:16.284381 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.286329 kubelet[3462]: W0424 23:57:16.284391 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.286329 kubelet[3462]: E0424 23:57:16.284416 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.286329 kubelet[3462]: E0424 23:57:16.284651 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.286329 kubelet[3462]: W0424 23:57:16.284662 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.286329 kubelet[3462]: E0424 23:57:16.284675 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.286329 kubelet[3462]: E0424 23:57:16.284928 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.286329 kubelet[3462]: W0424 23:57:16.284939 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.286329 kubelet[3462]: E0424 23:57:16.284951 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.286329 kubelet[3462]: E0424 23:57:16.285174 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.286329 kubelet[3462]: W0424 23:57:16.285185 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.286693 kubelet[3462]: E0424 23:57:16.285198 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.286693 kubelet[3462]: E0424 23:57:16.285436 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.286693 kubelet[3462]: W0424 23:57:16.285470 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.286693 kubelet[3462]: E0424 23:57:16.285485 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.286693 kubelet[3462]: E0424 23:57:16.285722 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.286693 kubelet[3462]: W0424 23:57:16.285733 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.286693 kubelet[3462]: E0424 23:57:16.285762 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.286693 kubelet[3462]: E0424 23:57:16.286073 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.286693 kubelet[3462]: W0424 23:57:16.286084 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.286693 kubelet[3462]: E0424 23:57:16.286097 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.286998 kubelet[3462]: E0424 23:57:16.286289 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.286998 kubelet[3462]: W0424 23:57:16.286298 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.286998 kubelet[3462]: E0424 23:57:16.286325 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.286998 kubelet[3462]: E0424 23:57:16.286522 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.286998 kubelet[3462]: W0424 23:57:16.286534 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.286998 kubelet[3462]: E0424 23:57:16.286545 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.296993 kubelet[3462]: E0424 23:57:16.296973 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.296993 kubelet[3462]: W0424 23:57:16.296990 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.297216 kubelet[3462]: E0424 23:57:16.297019 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.297331 kubelet[3462]: E0424 23:57:16.297306 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.297331 kubelet[3462]: W0424 23:57:16.297320 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.297543 kubelet[3462]: E0424 23:57:16.297349 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.300004 kubelet[3462]: E0424 23:57:16.299877 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.300004 kubelet[3462]: W0424 23:57:16.299893 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.300004 kubelet[3462]: E0424 23:57:16.299908 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.300395 kubelet[3462]: E0424 23:57:16.300254 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.300395 kubelet[3462]: W0424 23:57:16.300269 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.300395 kubelet[3462]: E0424 23:57:16.300282 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.300849 kubelet[3462]: E0424 23:57:16.300720 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.300849 kubelet[3462]: W0424 23:57:16.300734 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.300849 kubelet[3462]: E0424 23:57:16.300764 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.301665 kubelet[3462]: E0424 23:57:16.301454 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.301665 kubelet[3462]: W0424 23:57:16.301469 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.301665 kubelet[3462]: E0424 23:57:16.301483 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.302137 kubelet[3462]: E0424 23:57:16.301926 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.302137 kubelet[3462]: W0424 23:57:16.301940 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.302137 kubelet[3462]: E0424 23:57:16.301954 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.302539 kubelet[3462]: E0424 23:57:16.302349 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.302539 kubelet[3462]: W0424 23:57:16.302362 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.302539 kubelet[3462]: E0424 23:57:16.302375 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.302854 kubelet[3462]: E0424 23:57:16.302721 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.302854 kubelet[3462]: W0424 23:57:16.302735 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.302854 kubelet[3462]: E0424 23:57:16.302777 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.303337 kubelet[3462]: E0424 23:57:16.303131 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.303337 kubelet[3462]: W0424 23:57:16.303144 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.303337 kubelet[3462]: E0424 23:57:16.303156 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.303771 kubelet[3462]: E0424 23:57:16.303525 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.303771 kubelet[3462]: W0424 23:57:16.303539 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.303771 kubelet[3462]: E0424 23:57:16.303552 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.304342 kubelet[3462]: E0424 23:57:16.304293 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.304342 kubelet[3462]: W0424 23:57:16.304307 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.304342 kubelet[3462]: E0424 23:57:16.304321 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.304883 kubelet[3462]: E0424 23:57:16.304768 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.304883 kubelet[3462]: W0424 23:57:16.304783 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.304883 kubelet[3462]: E0424 23:57:16.304795 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.305491 kubelet[3462]: E0424 23:57:16.305466 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.305594 kubelet[3462]: W0424 23:57:16.305481 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.305648 kubelet[3462]: E0424 23:57:16.305597 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.306648 kubelet[3462]: E0424 23:57:16.306628 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.306648 kubelet[3462]: W0424 23:57:16.306645 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.306852 kubelet[3462]: E0424 23:57:16.306660 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.314208 kubelet[3462]: E0424 23:57:16.314018 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.314208 kubelet[3462]: W0424 23:57:16.314036 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.314208 kubelet[3462]: E0424 23:57:16.314050 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.316725 kubelet[3462]: E0424 23:57:16.316105 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.316725 kubelet[3462]: W0424 23:57:16.316133 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.316725 kubelet[3462]: E0424 23:57:16.316157 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:57:16.320665 kubelet[3462]: E0424 23:57:16.320249 3462 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:57:16.320665 kubelet[3462]: W0424 23:57:16.320266 3462 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:57:16.320665 kubelet[3462]: E0424 23:57:16.320282 3462 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:57:16.433864 containerd[1846]: time="2026-04-24T23:57:16.433669527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:16.436922 containerd[1846]: time="2026-04-24T23:57:16.436553723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 24 23:57:16.439476 containerd[1846]: time="2026-04-24T23:57:16.439443218Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:16.444603 containerd[1846]: time="2026-04-24T23:57:16.444351511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:16.445165 containerd[1846]: time="2026-04-24T23:57:16.445128410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.347861406s" Apr 24 23:57:16.445248 containerd[1846]: time="2026-04-24T23:57:16.445170310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 24 23:57:16.456789 containerd[1846]: time="2026-04-24T23:57:16.456762192Z" level=info msg="CreateContainer within sandbox \"6d0fb01161323ac97d1577ee3d48ade11283c62cfcf983b1ee421457c0c7cea1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 24 23:57:16.491833 containerd[1846]: time="2026-04-24T23:57:16.491727737Z" level=info msg="CreateContainer within sandbox \"6d0fb01161323ac97d1577ee3d48ade11283c62cfcf983b1ee421457c0c7cea1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fc6a5319c6388fb495de1981cd74448ef3f0f9adac35c2c6e66310b97141196d\"" Apr 24 23:57:16.492456 containerd[1846]: time="2026-04-24T23:57:16.492428036Z" level=info msg="StartContainer for \"fc6a5319c6388fb495de1981cd74448ef3f0f9adac35c2c6e66310b97141196d\"" Apr 24 23:57:16.563210 containerd[1846]: time="2026-04-24T23:57:16.563116026Z" level=info msg="StartContainer for \"fc6a5319c6388fb495de1981cd74448ef3f0f9adac35c2c6e66310b97141196d\" returns successfully" Apr 24 23:57:16.594034 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc6a5319c6388fb495de1981cd74448ef3f0f9adac35c2c6e66310b97141196d-rootfs.mount: Deactivated successfully. 
Apr 24 23:57:17.145229 kubelet[3462]: E0424 23:57:17.145164 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517" Apr 24 23:57:18.586555 containerd[1846]: time="2026-04-24T23:57:18.586473482Z" level=info msg="shim disconnected" id=fc6a5319c6388fb495de1981cd74448ef3f0f9adac35c2c6e66310b97141196d namespace=k8s.io Apr 24 23:57:18.586555 containerd[1846]: time="2026-04-24T23:57:18.586548582Z" level=warning msg="cleaning up after shim disconnected" id=fc6a5319c6388fb495de1981cd74448ef3f0f9adac35c2c6e66310b97141196d namespace=k8s.io Apr 24 23:57:18.586555 containerd[1846]: time="2026-04-24T23:57:18.586562782Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:57:19.145513 kubelet[3462]: E0424 23:57:19.145446 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517" Apr 24 23:57:19.284952 containerd[1846]: time="2026-04-24T23:57:19.284426915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 24 23:57:21.145414 kubelet[3462]: E0424 23:57:21.145361 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517" Apr 24 23:57:23.145657 kubelet[3462]: E0424 23:57:23.145599 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517" Apr 24 23:57:25.145115 kubelet[3462]: E0424 23:57:25.145048 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517" Apr 24 23:57:27.056261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3423822789.mount: Deactivated successfully. Apr 24 23:57:27.088791 containerd[1846]: time="2026-04-24T23:57:27.088724121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:27.091602 containerd[1846]: time="2026-04-24T23:57:27.091446559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 24 23:57:27.095350 containerd[1846]: time="2026-04-24T23:57:27.094222498Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:27.098938 containerd[1846]: time="2026-04-24T23:57:27.098154053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:27.098938 containerd[1846]: time="2026-04-24T23:57:27.098802262Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 7.814331245s" Apr 24 23:57:27.098938 containerd[1846]: time="2026-04-24T23:57:27.098837862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 24 23:57:27.106932 containerd[1846]: time="2026-04-24T23:57:27.106904875Z" level=info msg="CreateContainer within sandbox \"6d0fb01161323ac97d1577ee3d48ade11283c62cfcf983b1ee421457c0c7cea1\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 24 23:57:27.145128 kubelet[3462]: E0424 23:57:27.145077 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517" Apr 24 23:57:27.147174 containerd[1846]: time="2026-04-24T23:57:27.147128237Z" level=info msg="CreateContainer within sandbox \"6d0fb01161323ac97d1577ee3d48ade11283c62cfcf983b1ee421457c0c7cea1\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"1ddb75b98c07908c1037807357ce64ac7556e9bbd7edc1d6de62f84cdac34d41\"" Apr 24 23:57:27.148365 containerd[1846]: time="2026-04-24T23:57:27.148339354Z" level=info msg="StartContainer for \"1ddb75b98c07908c1037807357ce64ac7556e9bbd7edc1d6de62f84cdac34d41\"" Apr 24 23:57:27.215634 containerd[1846]: time="2026-04-24T23:57:27.215593593Z" level=info msg="StartContainer for \"1ddb75b98c07908c1037807357ce64ac7556e9bbd7edc1d6de62f84cdac34d41\" returns successfully" Apr 24 23:57:28.055343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ddb75b98c07908c1037807357ce64ac7556e9bbd7edc1d6de62f84cdac34d41-rootfs.mount: Deactivated successfully. 
Apr 24 23:57:29.145008 kubelet[3462]: E0424 23:57:29.144922 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517" Apr 24 23:57:30.549719 containerd[1846]: time="2026-04-24T23:57:30.549646973Z" level=info msg="shim disconnected" id=1ddb75b98c07908c1037807357ce64ac7556e9bbd7edc1d6de62f84cdac34d41 namespace=k8s.io Apr 24 23:57:30.549719 containerd[1846]: time="2026-04-24T23:57:30.549716374Z" level=warning msg="cleaning up after shim disconnected" id=1ddb75b98c07908c1037807357ce64ac7556e9bbd7edc1d6de62f84cdac34d41 namespace=k8s.io Apr 24 23:57:30.549719 containerd[1846]: time="2026-04-24T23:57:30.549727474Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:57:31.144996 kubelet[3462]: E0424 23:57:31.144940 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517" Apr 24 23:57:31.315736 containerd[1846]: time="2026-04-24T23:57:31.315696376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 24 23:57:33.145405 kubelet[3462]: E0424 23:57:33.145360 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517" Apr 24 23:57:35.145865 kubelet[3462]: E0424 23:57:35.145797 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517" Apr 24 23:57:35.213041 containerd[1846]: time="2026-04-24T23:57:35.212992817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:35.216305 containerd[1846]: time="2026-04-24T23:57:35.215891856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 24 23:57:35.220645 containerd[1846]: time="2026-04-24T23:57:35.220589718Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:35.225307 containerd[1846]: time="2026-04-24T23:57:35.225005977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:35.225837 containerd[1846]: time="2026-04-24T23:57:35.225802587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.910065011s" Apr 24 23:57:35.225928 containerd[1846]: time="2026-04-24T23:57:35.225843388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 24 23:57:35.233959 containerd[1846]: time="2026-04-24T23:57:35.233926595Z" level=info msg="CreateContainer within sandbox 
\"6d0fb01161323ac97d1577ee3d48ade11283c62cfcf983b1ee421457c0c7cea1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 24 23:57:35.269872 containerd[1846]: time="2026-04-24T23:57:35.269831373Z" level=info msg="CreateContainer within sandbox \"6d0fb01161323ac97d1577ee3d48ade11283c62cfcf983b1ee421457c0c7cea1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bb8043254fcea21b92d643403fbb55d826493ec489c7602df42244b68e6e3416\"" Apr 24 23:57:35.271076 containerd[1846]: time="2026-04-24T23:57:35.270520782Z" level=info msg="StartContainer for \"bb8043254fcea21b92d643403fbb55d826493ec489c7602df42244b68e6e3416\"" Apr 24 23:57:35.336826 containerd[1846]: time="2026-04-24T23:57:35.336600161Z" level=info msg="StartContainer for \"bb8043254fcea21b92d643403fbb55d826493ec489c7602df42244b68e6e3416\" returns successfully" Apr 24 23:57:37.021208 containerd[1846]: time="2026-04-24T23:57:37.021145364Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 23:57:37.048454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb8043254fcea21b92d643403fbb55d826493ec489c7602df42244b68e6e3416-rootfs.mount: Deactivated successfully. 
Apr 24 23:57:37.063662 kubelet[3462]: I0424 23:57:37.063633 3462 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 24 23:57:38.311262 containerd[1846]: time="2026-04-24T23:57:38.311184620Z" level=info msg="shim disconnected" id=bb8043254fcea21b92d643403fbb55d826493ec489c7602df42244b68e6e3416 namespace=k8s.io Apr 24 23:57:38.313811 containerd[1846]: time="2026-04-24T23:57:38.311675027Z" level=warning msg="cleaning up after shim disconnected" id=bb8043254fcea21b92d643403fbb55d826493ec489c7602df42244b68e6e3416 namespace=k8s.io Apr 24 23:57:38.313811 containerd[1846]: time="2026-04-24T23:57:38.311697327Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:57:38.331726 containerd[1846]: time="2026-04-24T23:57:38.331236887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vghcg,Uid:8c5fd00b-3814-4bd0-8192-1d2f719f9517,Namespace:calico-system,Attempt:0,}" Apr 24 23:57:38.353801 kubelet[3462]: I0424 23:57:38.353169 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5f198955-c9d4-4104-9f87-079239cf8c8a-calico-apiserver-certs\") pod \"calico-apiserver-7b4999c544-clwfk\" (UID: \"5f198955-c9d4-4104-9f87-079239cf8c8a\") " pod="calico-system/calico-apiserver-7b4999c544-clwfk" Apr 24 23:57:38.353801 kubelet[3462]: I0424 23:57:38.353221 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cxt9\" (UniqueName: \"kubernetes.io/projected/7eed60e0-dfe6-44af-9c18-1eee2edda56b-kube-api-access-2cxt9\") pod \"coredns-674b8bbfcf-f5g7p\" (UID: \"7eed60e0-dfe6-44af-9c18-1eee2edda56b\") " pod="kube-system/coredns-674b8bbfcf-f5g7p" Apr 24 23:57:38.353801 kubelet[3462]: I0424 23:57:38.353249 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx72q\" (UniqueName: 
\"kubernetes.io/projected/73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b-kube-api-access-bx72q\") pod \"calico-apiserver-7b4999c544-rcjl8\" (UID: \"73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b\") " pod="calico-system/calico-apiserver-7b4999c544-rcjl8" Apr 24 23:57:38.353801 kubelet[3462]: I0424 23:57:38.353280 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsj7c\" (UniqueName: \"kubernetes.io/projected/bab90d29-b4f6-45e4-a59d-c7270debd2c4-kube-api-access-rsj7c\") pod \"calico-kube-controllers-796d4d88bb-v74px\" (UID: \"bab90d29-b4f6-45e4-a59d-c7270debd2c4\") " pod="calico-system/calico-kube-controllers-796d4d88bb-v74px" Apr 24 23:57:38.353801 kubelet[3462]: I0424 23:57:38.353308 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7eed60e0-dfe6-44af-9c18-1eee2edda56b-config-volume\") pod \"coredns-674b8bbfcf-f5g7p\" (UID: \"7eed60e0-dfe6-44af-9c18-1eee2edda56b\") " pod="kube-system/coredns-674b8bbfcf-f5g7p" Apr 24 23:57:38.354579 kubelet[3462]: I0424 23:57:38.353334 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/222e4640-e0ff-4078-9fa0-975f8f1c4ffa-config-volume\") pod \"coredns-674b8bbfcf-jsmlw\" (UID: \"222e4640-e0ff-4078-9fa0-975f8f1c4ffa\") " pod="kube-system/coredns-674b8bbfcf-jsmlw" Apr 24 23:57:38.354579 kubelet[3462]: I0424 23:57:38.353362 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwfn6\" (UniqueName: \"kubernetes.io/projected/5f198955-c9d4-4104-9f87-079239cf8c8a-kube-api-access-pwfn6\") pod \"calico-apiserver-7b4999c544-clwfk\" (UID: \"5f198955-c9d4-4104-9f87-079239cf8c8a\") " pod="calico-system/calico-apiserver-7b4999c544-clwfk" Apr 24 23:57:38.354579 kubelet[3462]: I0424 23:57:38.353386 3462 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b-calico-apiserver-certs\") pod \"calico-apiserver-7b4999c544-rcjl8\" (UID: \"73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b\") " pod="calico-system/calico-apiserver-7b4999c544-rcjl8" Apr 24 23:57:38.354579 kubelet[3462]: I0424 23:57:38.353423 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab90d29-b4f6-45e4-a59d-c7270debd2c4-tigera-ca-bundle\") pod \"calico-kube-controllers-796d4d88bb-v74px\" (UID: \"bab90d29-b4f6-45e4-a59d-c7270debd2c4\") " pod="calico-system/calico-kube-controllers-796d4d88bb-v74px" Apr 24 23:57:38.354579 kubelet[3462]: I0424 23:57:38.353450 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx47k\" (UniqueName: \"kubernetes.io/projected/222e4640-e0ff-4078-9fa0-975f8f1c4ffa-kube-api-access-sx47k\") pod \"coredns-674b8bbfcf-jsmlw\" (UID: \"222e4640-e0ff-4078-9fa0-975f8f1c4ffa\") " pod="kube-system/coredns-674b8bbfcf-jsmlw" Apr 24 23:57:38.354908 containerd[1846]: time="2026-04-24T23:57:38.354869701Z" level=warning msg="cleanup warnings time=\"2026-04-24T23:57:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 24 23:57:38.453735 kubelet[3462]: I0424 23:57:38.453645 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4tj4\" (UniqueName: \"kubernetes.io/projected/447021ce-553a-4bfd-adce-1d04ce9ffca6-kube-api-access-j4tj4\") pod \"whisker-5779477b45-vkzjk\" (UID: \"447021ce-553a-4bfd-adce-1d04ce9ffca6\") " pod="calico-system/whisker-5779477b45-vkzjk" Apr 24 23:57:38.453735 kubelet[3462]: I0424 23:57:38.453693 3462 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/447021ce-553a-4bfd-adce-1d04ce9ffca6-whisker-ca-bundle\") pod \"whisker-5779477b45-vkzjk\" (UID: \"447021ce-553a-4bfd-adce-1d04ce9ffca6\") " pod="calico-system/whisker-5779477b45-vkzjk" Apr 24 23:57:38.453735 kubelet[3462]: I0424 23:57:38.453731 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fab5c65-a5ea-4224-bb5b-3fa0147534b7-config\") pod \"goldmane-5b85766d88-br5gx\" (UID: \"0fab5c65-a5ea-4224-bb5b-3fa0147534b7\") " pod="calico-system/goldmane-5b85766d88-br5gx" Apr 24 23:57:38.454530 kubelet[3462]: I0424 23:57:38.454490 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvp55\" (UniqueName: \"kubernetes.io/projected/0fab5c65-a5ea-4224-bb5b-3fa0147534b7-kube-api-access-nvp55\") pod \"goldmane-5b85766d88-br5gx\" (UID: \"0fab5c65-a5ea-4224-bb5b-3fa0147534b7\") " pod="calico-system/goldmane-5b85766d88-br5gx" Apr 24 23:57:38.454836 kubelet[3462]: I0424 23:57:38.454620 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fab5c65-a5ea-4224-bb5b-3fa0147534b7-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-br5gx\" (UID: \"0fab5c65-a5ea-4224-bb5b-3fa0147534b7\") " pod="calico-system/goldmane-5b85766d88-br5gx" Apr 24 23:57:38.454836 kubelet[3462]: I0424 23:57:38.454649 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0fab5c65-a5ea-4224-bb5b-3fa0147534b7-goldmane-key-pair\") pod \"goldmane-5b85766d88-br5gx\" (UID: \"0fab5c65-a5ea-4224-bb5b-3fa0147534b7\") " pod="calico-system/goldmane-5b85766d88-br5gx" Apr 24 23:57:38.454836 
kubelet[3462]: I0424 23:57:38.454687 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/447021ce-553a-4bfd-adce-1d04ce9ffca6-nginx-config\") pod \"whisker-5779477b45-vkzjk\" (UID: \"447021ce-553a-4bfd-adce-1d04ce9ffca6\") " pod="calico-system/whisker-5779477b45-vkzjk" Apr 24 23:57:38.454836 kubelet[3462]: I0424 23:57:38.454712 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/447021ce-553a-4bfd-adce-1d04ce9ffca6-whisker-backend-key-pair\") pod \"whisker-5779477b45-vkzjk\" (UID: \"447021ce-553a-4bfd-adce-1d04ce9ffca6\") " pod="calico-system/whisker-5779477b45-vkzjk" Apr 24 23:57:38.500833 containerd[1846]: time="2026-04-24T23:57:38.499595626Z" level=error msg="Failed to destroy network for sandbox \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.500833 containerd[1846]: time="2026-04-24T23:57:38.499992231Z" level=error msg="encountered an error cleaning up failed sandbox \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.500833 containerd[1846]: time="2026-04-24T23:57:38.500059632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vghcg,Uid:8c5fd00b-3814-4bd0-8192-1d2f719f9517,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.502029 kubelet[3462]: E0424 23:57:38.501984 3462 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.502227 kubelet[3462]: E0424 23:57:38.502205 3462 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vghcg" Apr 24 23:57:38.502353 kubelet[3462]: E0424 23:57:38.502332 3462 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vghcg" Apr 24 23:57:38.502506 kubelet[3462]: E0424 23:57:38.502473 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vghcg_calico-system(8c5fd00b-3814-4bd0-8192-1d2f719f9517)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vghcg_calico-system(8c5fd00b-3814-4bd0-8192-1d2f719f9517)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517" Apr 24 23:57:38.503180 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382-shm.mount: Deactivated successfully. Apr 24 23:57:38.609450 containerd[1846]: time="2026-04-24T23:57:38.609401686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f5g7p,Uid:7eed60e0-dfe6-44af-9c18-1eee2edda56b,Namespace:kube-system,Attempt:0,}" Apr 24 23:57:38.626165 containerd[1846]: time="2026-04-24T23:57:38.626120409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4999c544-clwfk,Uid:5f198955-c9d4-4104-9f87-079239cf8c8a,Namespace:calico-system,Attempt:0,}" Apr 24 23:57:38.638770 containerd[1846]: time="2026-04-24T23:57:38.638297671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jsmlw,Uid:222e4640-e0ff-4078-9fa0-975f8f1c4ffa,Namespace:kube-system,Attempt:0,}" Apr 24 23:57:38.661480 containerd[1846]: time="2026-04-24T23:57:38.661446279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-796d4d88bb-v74px,Uid:bab90d29-b4f6-45e4-a59d-c7270debd2c4,Namespace:calico-system,Attempt:0,}" Apr 24 23:57:38.663440 containerd[1846]: time="2026-04-24T23:57:38.663409705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-br5gx,Uid:0fab5c65-a5ea-4224-bb5b-3fa0147534b7,Namespace:calico-system,Attempt:0,}" Apr 24 23:57:38.669139 containerd[1846]: time="2026-04-24T23:57:38.669049780Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5779477b45-vkzjk,Uid:447021ce-553a-4bfd-adce-1d04ce9ffca6,Namespace:calico-system,Attempt:0,}" Apr 24 23:57:38.688232 containerd[1846]: time="2026-04-24T23:57:38.688201734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4999c544-rcjl8,Uid:73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b,Namespace:calico-system,Attempt:0,}" Apr 24 23:57:38.763582 containerd[1846]: time="2026-04-24T23:57:38.763488836Z" level=error msg="Failed to destroy network for sandbox \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.764723 containerd[1846]: time="2026-04-24T23:57:38.764223145Z" level=error msg="encountered an error cleaning up failed sandbox \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.764723 containerd[1846]: time="2026-04-24T23:57:38.764286446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f5g7p,Uid:7eed60e0-dfe6-44af-9c18-1eee2edda56b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.766908 kubelet[3462]: E0424 23:57:38.764499 3462 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.766908 kubelet[3462]: E0424 23:57:38.764566 3462 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-f5g7p" Apr 24 23:57:38.766908 kubelet[3462]: E0424 23:57:38.764599 3462 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-f5g7p" Apr 24 23:57:38.767059 kubelet[3462]: E0424 23:57:38.764674 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-f5g7p_kube-system(7eed60e0-dfe6-44af-9c18-1eee2edda56b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-f5g7p_kube-system(7eed60e0-dfe6-44af-9c18-1eee2edda56b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-f5g7p" 
podUID="7eed60e0-dfe6-44af-9c18-1eee2edda56b" Apr 24 23:57:38.807992 containerd[1846]: time="2026-04-24T23:57:38.807550122Z" level=error msg="Failed to destroy network for sandbox \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.808139 containerd[1846]: time="2026-04-24T23:57:38.808077129Z" level=error msg="encountered an error cleaning up failed sandbox \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.808189 containerd[1846]: time="2026-04-24T23:57:38.808127929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4999c544-clwfk,Uid:5f198955-c9d4-4104-9f87-079239cf8c8a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.808587 kubelet[3462]: E0424 23:57:38.808366 3462 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.808587 kubelet[3462]: E0424 23:57:38.808438 3462 kuberuntime_sandbox.go:70] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7b4999c544-clwfk" Apr 24 23:57:38.808587 kubelet[3462]: E0424 23:57:38.808469 3462 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7b4999c544-clwfk" Apr 24 23:57:38.808811 kubelet[3462]: E0424 23:57:38.808536 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b4999c544-clwfk_calico-system(5f198955-c9d4-4104-9f87-079239cf8c8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b4999c544-clwfk_calico-system(5f198955-c9d4-4104-9f87-079239cf8c8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7b4999c544-clwfk" podUID="5f198955-c9d4-4104-9f87-079239cf8c8a" Apr 24 23:57:38.880255 containerd[1846]: time="2026-04-24T23:57:38.879592980Z" level=error msg="Failed to destroy network for sandbox \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.880255 containerd[1846]: time="2026-04-24T23:57:38.879987785Z" level=error msg="encountered an error cleaning up failed sandbox \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.880255 containerd[1846]: time="2026-04-24T23:57:38.880045486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jsmlw,Uid:222e4640-e0ff-4078-9fa0-975f8f1c4ffa,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.880476 kubelet[3462]: E0424 23:57:38.880299 3462 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.880476 kubelet[3462]: E0424 23:57:38.880363 3462 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-jsmlw" Apr 24 23:57:38.880476 kubelet[3462]: E0424 23:57:38.880390 3462 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jsmlw" Apr 24 23:57:38.880613 kubelet[3462]: E0424 23:57:38.880450 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jsmlw_kube-system(222e4640-e0ff-4078-9fa0-975f8f1c4ffa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jsmlw_kube-system(222e4640-e0ff-4078-9fa0-975f8f1c4ffa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jsmlw" podUID="222e4640-e0ff-4078-9fa0-975f8f1c4ffa" Apr 24 23:57:38.931243 containerd[1846]: time="2026-04-24T23:57:38.930932063Z" level=error msg="Failed to destroy network for sandbox \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.933078 containerd[1846]: time="2026-04-24T23:57:38.931968676Z" level=error msg="encountered an error cleaning up failed sandbox \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.933078 containerd[1846]: time="2026-04-24T23:57:38.932033977Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-br5gx,Uid:0fab5c65-a5ea-4224-bb5b-3fa0147534b7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.933285 kubelet[3462]: E0424 23:57:38.932298 3462 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.933285 kubelet[3462]: E0424 23:57:38.932372 3462 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-br5gx" Apr 24 23:57:38.933285 kubelet[3462]: E0424 23:57:38.932399 3462 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-br5gx" Apr 24 23:57:38.933440 kubelet[3462]: E0424 23:57:38.932464 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-br5gx_calico-system(0fab5c65-a5ea-4224-bb5b-3fa0147534b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-br5gx_calico-system(0fab5c65-a5ea-4224-bb5b-3fa0147534b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-br5gx" podUID="0fab5c65-a5ea-4224-bb5b-3fa0147534b7" Apr 24 23:57:38.960658 containerd[1846]: time="2026-04-24T23:57:38.960493456Z" level=error msg="Failed to destroy network for sandbox \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.961180 containerd[1846]: time="2026-04-24T23:57:38.961132464Z" level=error msg="encountered an error cleaning up failed sandbox \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.961371 containerd[1846]: time="2026-04-24T23:57:38.961283866Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-796d4d88bb-v74px,Uid:bab90d29-b4f6-45e4-a59d-c7270debd2c4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.961775 kubelet[3462]: E0424 23:57:38.961720 3462 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.962759 kubelet[3462]: E0424 23:57:38.961974 3462 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-796d4d88bb-v74px" Apr 24 23:57:38.962759 kubelet[3462]: E0424 23:57:38.962020 3462 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-796d4d88bb-v74px" Apr 24 23:57:38.962759 kubelet[3462]: E0424 23:57:38.962218 3462 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-796d4d88bb-v74px_calico-system(bab90d29-b4f6-45e4-a59d-c7270debd2c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-796d4d88bb-v74px_calico-system(bab90d29-b4f6-45e4-a59d-c7270debd2c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-796d4d88bb-v74px" podUID="bab90d29-b4f6-45e4-a59d-c7270debd2c4" Apr 24 23:57:38.986224 containerd[1846]: time="2026-04-24T23:57:38.986181697Z" level=error msg="Failed to destroy network for sandbox \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.986542 containerd[1846]: time="2026-04-24T23:57:38.986500402Z" level=error msg="encountered an error cleaning up failed sandbox \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.986627 containerd[1846]: time="2026-04-24T23:57:38.986568502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4999c544-rcjl8,Uid:73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.987861 kubelet[3462]: E0424 23:57:38.986796 3462 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.987861 kubelet[3462]: E0424 23:57:38.986851 3462 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7b4999c544-rcjl8" Apr 24 23:57:38.987861 kubelet[3462]: E0424 23:57:38.986878 3462 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7b4999c544-rcjl8" Apr 24 23:57:38.988023 kubelet[3462]: E0424 23:57:38.986938 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b4999c544-rcjl8_calico-system(73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7b4999c544-rcjl8_calico-system(73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7b4999c544-rcjl8" podUID="73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b" Apr 24 23:57:38.988110 containerd[1846]: time="2026-04-24T23:57:38.987881120Z" level=error msg="Failed to destroy network for sandbox \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.988438 containerd[1846]: time="2026-04-24T23:57:38.988406327Z" level=error msg="encountered an error cleaning up failed sandbox \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.988533 containerd[1846]: time="2026-04-24T23:57:38.988458428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5779477b45-vkzjk,Uid:447021ce-553a-4bfd-adce-1d04ce9ffca6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.989208 kubelet[3462]: E0424 23:57:38.989159 3462 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:38.989314 kubelet[3462]: E0424 23:57:38.989225 3462 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5779477b45-vkzjk" Apr 24 23:57:38.989314 kubelet[3462]: E0424 23:57:38.989256 3462 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5779477b45-vkzjk" Apr 24 23:57:38.989567 kubelet[3462]: E0424 23:57:38.989321 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5779477b45-vkzjk_calico-system(447021ce-553a-4bfd-adce-1d04ce9ffca6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5779477b45-vkzjk_calico-system(447021ce-553a-4bfd-adce-1d04ce9ffca6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5779477b45-vkzjk" podUID="447021ce-553a-4bfd-adce-1d04ce9ffca6" Apr 24 23:57:39.340823 kubelet[3462]: I0424 23:57:39.340790 3462 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Apr 24 23:57:39.342694 containerd[1846]: time="2026-04-24T23:57:39.341937529Z" level=info msg="StopPodSandbox for \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\"" Apr 24 23:57:39.342694 containerd[1846]: time="2026-04-24T23:57:39.342393335Z" level=info msg="Ensure that sandbox 0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4 in task-service has been cleanup successfully" Apr 24 23:57:39.345615 kubelet[3462]: I0424 23:57:39.345202 3462 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Apr 24 23:57:39.346760 containerd[1846]: time="2026-04-24T23:57:39.346563190Z" level=info msg="StopPodSandbox for \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\"" Apr 24 23:57:39.346919 containerd[1846]: time="2026-04-24T23:57:39.346898895Z" level=info msg="Ensure that sandbox 1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43 in task-service has been cleanup successfully" Apr 24 23:57:39.381782 kubelet[3462]: I0424 23:57:39.380925 3462 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Apr 24 23:57:39.401127 containerd[1846]: time="2026-04-24T23:57:39.401086815Z" level=info msg="StopPodSandbox for \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\"" Apr 24 23:57:39.401765 containerd[1846]: time="2026-04-24T23:57:39.401295818Z" level=info msg="Ensure that sandbox f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6 in task-service has been cleanup 
successfully" Apr 24 23:57:39.428389 kubelet[3462]: I0424 23:57:39.425126 3462 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" Apr 24 23:57:39.428910 containerd[1846]: time="2026-04-24T23:57:39.428874585Z" level=info msg="CreateContainer within sandbox \"6d0fb01161323ac97d1577ee3d48ade11283c62cfcf983b1ee421457c0c7cea1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 24 23:57:39.429296 containerd[1846]: time="2026-04-24T23:57:39.429271390Z" level=info msg="StopPodSandbox for \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\"" Apr 24 23:57:39.432002 kubelet[3462]: I0424 23:57:39.431975 3462 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Apr 24 23:57:39.433132 containerd[1846]: time="2026-04-24T23:57:39.433105041Z" level=info msg="Ensure that sandbox 6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941 in task-service has been cleanup successfully" Apr 24 23:57:39.433220 containerd[1846]: time="2026-04-24T23:57:39.432801537Z" level=info msg="StopPodSandbox for \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\"" Apr 24 23:57:39.433513 containerd[1846]: time="2026-04-24T23:57:39.433488546Z" level=info msg="Ensure that sandbox 707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a in task-service has been cleanup successfully" Apr 24 23:57:39.442058 kubelet[3462]: I0424 23:57:39.442036 3462 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Apr 24 23:57:39.444991 containerd[1846]: time="2026-04-24T23:57:39.444965199Z" level=info msg="StopPodSandbox for \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\"" Apr 24 23:57:39.445914 containerd[1846]: 
time="2026-04-24T23:57:39.445888211Z" level=info msg="Ensure that sandbox 5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40 in task-service has been cleanup successfully" Apr 24 23:57:39.456576 kubelet[3462]: I0424 23:57:39.456554 3462 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Apr 24 23:57:39.459793 containerd[1846]: time="2026-04-24T23:57:39.459729695Z" level=info msg="StopPodSandbox for \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\"" Apr 24 23:57:39.460636 kubelet[3462]: I0424 23:57:39.460614 3462 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Apr 24 23:57:39.460871 containerd[1846]: time="2026-04-24T23:57:39.460574506Z" level=info msg="Ensure that sandbox e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833 in task-service has been cleanup successfully" Apr 24 23:57:39.463476 containerd[1846]: time="2026-04-24T23:57:39.462689934Z" level=info msg="StopPodSandbox for \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\"" Apr 24 23:57:39.463557 containerd[1846]: time="2026-04-24T23:57:39.463492945Z" level=info msg="Ensure that sandbox 2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382 in task-service has been cleanup successfully" Apr 24 23:57:39.511782 containerd[1846]: time="2026-04-24T23:57:39.511713385Z" level=error msg="StopPodSandbox for \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\" failed" error="failed to destroy network for sandbox \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.512716 kubelet[3462]: E0424 23:57:39.512492 3462 
log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Apr 24 23:57:39.512716 kubelet[3462]: E0424 23:57:39.512574 3462 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4"} Apr 24 23:57:39.512716 kubelet[3462]: E0424 23:57:39.512649 3462 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:57:39.513040 kubelet[3462]: E0424 23:57:39.512999 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7b4999c544-rcjl8" podUID="73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b" Apr 24 23:57:39.516695 containerd[1846]: time="2026-04-24T23:57:39.516655781Z" level=info 
msg="CreateContainer within sandbox \"6d0fb01161323ac97d1577ee3d48ade11283c62cfcf983b1ee421457c0c7cea1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a80e0b7b8dc0f6928bd998a5a211ec21a17032ca38247613119881b3fd0ff665\"" Apr 24 23:57:39.517454 containerd[1846]: time="2026-04-24T23:57:39.517429181Z" level=info msg="StartContainer for \"a80e0b7b8dc0f6928bd998a5a211ec21a17032ca38247613119881b3fd0ff665\"" Apr 24 23:57:39.558844 containerd[1846]: time="2026-04-24T23:57:39.558787751Z" level=error msg="StopPodSandbox for \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\" failed" error="failed to destroy network for sandbox \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.559445 kubelet[3462]: E0424 23:57:39.559245 3462 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Apr 24 23:57:39.559445 kubelet[3462]: E0424 23:57:39.559313 3462 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a"} Apr 24 23:57:39.559445 kubelet[3462]: E0424 23:57:39.559361 3462 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bab90d29-b4f6-45e4-a59d-c7270debd2c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:57:39.559445 kubelet[3462]: E0424 23:57:39.559395 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bab90d29-b4f6-45e4-a59d-c7270debd2c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-796d4d88bb-v74px" podUID="bab90d29-b4f6-45e4-a59d-c7270debd2c4" Apr 24 23:57:39.570769 containerd[1846]: time="2026-04-24T23:57:39.570311843Z" level=error msg="StopPodSandbox for \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\" failed" error="failed to destroy network for sandbox \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.571202 kubelet[3462]: E0424 23:57:39.571042 3462 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Apr 24 23:57:39.571202 kubelet[3462]: E0424 23:57:39.571094 3462 
kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6"} Apr 24 23:57:39.571202 kubelet[3462]: E0424 23:57:39.571135 3462 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7eed60e0-dfe6-44af-9c18-1eee2edda56b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:57:39.571202 kubelet[3462]: E0424 23:57:39.571163 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7eed60e0-dfe6-44af-9c18-1eee2edda56b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-f5g7p" podUID="7eed60e0-dfe6-44af-9c18-1eee2edda56b" Apr 24 23:57:39.583926 containerd[1846]: time="2026-04-24T23:57:39.583879834Z" level=error msg="StopPodSandbox for \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\" failed" error="failed to destroy network for sandbox \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.584284 kubelet[3462]: E0424 23:57:39.584235 3462 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Apr 24 23:57:39.584462 kubelet[3462]: E0424 23:57:39.584437 3462 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382"} Apr 24 23:57:39.584585 kubelet[3462]: E0424 23:57:39.584566 3462 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8c5fd00b-3814-4bd0-8192-1d2f719f9517\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:57:39.585489 kubelet[3462]: E0424 23:57:39.584754 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8c5fd00b-3814-4bd0-8192-1d2f719f9517\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vghcg" podUID="8c5fd00b-3814-4bd0-8192-1d2f719f9517" Apr 24 23:57:39.598183 containerd[1846]: time="2026-04-24T23:57:39.598071724Z" level=error msg="StopPodSandbox for \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\" 
failed" error="failed to destroy network for sandbox \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.598336 kubelet[3462]: E0424 23:57:39.598297 3462 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Apr 24 23:57:39.598415 kubelet[3462]: E0424 23:57:39.598352 3462 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43"} Apr 24 23:57:39.598463 kubelet[3462]: E0424 23:57:39.598410 3462 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0fab5c65-a5ea-4224-bb5b-3fa0147534b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:57:39.598463 kubelet[3462]: E0424 23:57:39.598444 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0fab5c65-a5ea-4224-bb5b-3fa0147534b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-br5gx" podUID="0fab5c65-a5ea-4224-bb5b-3fa0147534b7" Apr 24 23:57:39.617287 containerd[1846]: time="2026-04-24T23:57:39.617238810Z" level=error msg="StopPodSandbox for \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\" failed" error="failed to destroy network for sandbox \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.617676 kubelet[3462]: E0424 23:57:39.617513 3462 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Apr 24 23:57:39.617676 kubelet[3462]: E0424 23:57:39.617568 3462 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833"} Apr 24 23:57:39.617676 kubelet[3462]: E0424 23:57:39.617607 3462 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5f198955-c9d4-4104-9f87-079239cf8c8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:57:39.617676 kubelet[3462]: E0424 23:57:39.617639 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f198955-c9d4-4104-9f87-079239cf8c8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7b4999c544-clwfk" podUID="5f198955-c9d4-4104-9f87-079239cf8c8a" Apr 24 23:57:39.626264 containerd[1846]: time="2026-04-24T23:57:39.625244105Z" level=error msg="StopPodSandbox for \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\" failed" error="failed to destroy network for sandbox \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.626357 kubelet[3462]: E0424 23:57:39.625683 3462 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" Apr 24 23:57:39.626357 kubelet[3462]: E0424 23:57:39.625749 3462 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941"} Apr 24 23:57:39.626357 
kubelet[3462]: E0424 23:57:39.625783 3462 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"447021ce-553a-4bfd-adce-1d04ce9ffca6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:57:39.626357 kubelet[3462]: E0424 23:57:39.625813 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"447021ce-553a-4bfd-adce-1d04ce9ffca6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5779477b45-vkzjk" podUID="447021ce-553a-4bfd-adce-1d04ce9ffca6" Apr 24 23:57:39.634483 containerd[1846]: time="2026-04-24T23:57:39.634384998Z" level=error msg="StopPodSandbox for \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\" failed" error="failed to destroy network for sandbox \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:57:39.634591 kubelet[3462]: E0424 23:57:39.634526 3462 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Apr 24 23:57:39.634591 kubelet[3462]: E0424 23:57:39.634564 3462 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40"} Apr 24 23:57:39.634683 kubelet[3462]: E0424 23:57:39.634595 3462 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"222e4640-e0ff-4078-9fa0-975f8f1c4ffa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:57:39.634683 kubelet[3462]: E0424 23:57:39.634626 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"222e4640-e0ff-4078-9fa0-975f8f1c4ffa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jsmlw" podUID="222e4640-e0ff-4078-9fa0-975f8f1c4ffa" Apr 24 23:57:39.669332 containerd[1846]: time="2026-04-24T23:57:39.669287273Z" level=info msg="StartContainer for \"a80e0b7b8dc0f6928bd998a5a211ec21a17032ca38247613119881b3fd0ff665\" returns successfully" Apr 24 23:57:40.467084 containerd[1846]: time="2026-04-24T23:57:40.466861610Z" level=info msg="StopPodSandbox for 
\"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\"" Apr 24 23:57:40.502173 kubelet[3462]: I0424 23:57:40.500296 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2p7l8" podStartSLOduration=6.457677692 podStartE2EDuration="38.500273386s" podCreationTimestamp="2026-04-24 23:57:02 +0000 UTC" firstStartedPulling="2026-04-24 23:57:03.184330008 +0000 UTC m=+21.145457516" lastFinishedPulling="2026-04-24 23:57:35.226925702 +0000 UTC m=+53.188053210" observedRunningTime="2026-04-24 23:57:40.498934087 +0000 UTC m=+58.460061695" watchObservedRunningTime="2026-04-24 23:57:40.500273386 +0000 UTC m=+58.461400994" Apr 24 23:57:40.598169 containerd[1846]: 2026-04-24 23:57:40.548 [INFO][4739] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" Apr 24 23:57:40.598169 containerd[1846]: 2026-04-24 23:57:40.548 [INFO][4739] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" iface="eth0" netns="/var/run/netns/cni-c31421ea-11b7-75e0-e284-02d7997886e7" Apr 24 23:57:40.598169 containerd[1846]: 2026-04-24 23:57:40.548 [INFO][4739] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" iface="eth0" netns="/var/run/netns/cni-c31421ea-11b7-75e0-e284-02d7997886e7" Apr 24 23:57:40.598169 containerd[1846]: 2026-04-24 23:57:40.549 [INFO][4739] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" iface="eth0" netns="/var/run/netns/cni-c31421ea-11b7-75e0-e284-02d7997886e7" Apr 24 23:57:40.598169 containerd[1846]: 2026-04-24 23:57:40.549 [INFO][4739] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" Apr 24 23:57:40.598169 containerd[1846]: 2026-04-24 23:57:40.549 [INFO][4739] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" Apr 24 23:57:40.598169 containerd[1846]: 2026-04-24 23:57:40.578 [INFO][4760] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" HandleID="k8s-pod-network.6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--5779477b45--vkzjk-eth0" Apr 24 23:57:40.598169 containerd[1846]: 2026-04-24 23:57:40.578 [INFO][4760] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:40.598169 containerd[1846]: 2026-04-24 23:57:40.578 [INFO][4760] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:57:40.598169 containerd[1846]: 2026-04-24 23:57:40.588 [WARNING][4760] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" HandleID="k8s-pod-network.6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--5779477b45--vkzjk-eth0" Apr 24 23:57:40.598169 containerd[1846]: 2026-04-24 23:57:40.588 [INFO][4760] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" HandleID="k8s-pod-network.6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--5779477b45--vkzjk-eth0" Apr 24 23:57:40.598169 containerd[1846]: 2026-04-24 23:57:40.590 [INFO][4760] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:40.598169 containerd[1846]: 2026-04-24 23:57:40.595 [INFO][4739] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" Apr 24 23:57:40.603431 containerd[1846]: time="2026-04-24T23:57:40.598309917Z" level=info msg="TearDown network for sandbox \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\" successfully" Apr 24 23:57:40.603431 containerd[1846]: time="2026-04-24T23:57:40.598337317Z" level=info msg="StopPodSandbox for \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\" returns successfully" Apr 24 23:57:40.603090 systemd[1]: run-netns-cni\x2dc31421ea\x2d11b7\x2d75e0\x2de284\x2d02d7997886e7.mount: Deactivated successfully. 
Apr 24 23:57:40.675320 kubelet[3462]: I0424 23:57:40.675223 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4tj4\" (UniqueName: \"kubernetes.io/projected/447021ce-553a-4bfd-adce-1d04ce9ffca6-kube-api-access-j4tj4\") pod \"447021ce-553a-4bfd-adce-1d04ce9ffca6\" (UID: \"447021ce-553a-4bfd-adce-1d04ce9ffca6\") " Apr 24 23:57:40.675628 kubelet[3462]: I0424 23:57:40.675341 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/447021ce-553a-4bfd-adce-1d04ce9ffca6-whisker-backend-key-pair\") pod \"447021ce-553a-4bfd-adce-1d04ce9ffca6\" (UID: \"447021ce-553a-4bfd-adce-1d04ce9ffca6\") " Apr 24 23:57:40.675628 kubelet[3462]: I0424 23:57:40.675411 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/447021ce-553a-4bfd-adce-1d04ce9ffca6-whisker-ca-bundle\") pod \"447021ce-553a-4bfd-adce-1d04ce9ffca6\" (UID: \"447021ce-553a-4bfd-adce-1d04ce9ffca6\") " Apr 24 23:57:40.675628 kubelet[3462]: I0424 23:57:40.675484 3462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/447021ce-553a-4bfd-adce-1d04ce9ffca6-nginx-config\") pod \"447021ce-553a-4bfd-adce-1d04ce9ffca6\" (UID: \"447021ce-553a-4bfd-adce-1d04ce9ffca6\") " Apr 24 23:57:40.676214 kubelet[3462]: I0424 23:57:40.676179 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/447021ce-553a-4bfd-adce-1d04ce9ffca6-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "447021ce-553a-4bfd-adce-1d04ce9ffca6" (UID: "447021ce-553a-4bfd-adce-1d04ce9ffca6"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:57:40.679004 kubelet[3462]: I0424 23:57:40.678853 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/447021ce-553a-4bfd-adce-1d04ce9ffca6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "447021ce-553a-4bfd-adce-1d04ce9ffca6" (UID: "447021ce-553a-4bfd-adce-1d04ce9ffca6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:57:40.680497 kubelet[3462]: I0424 23:57:40.680462 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/447021ce-553a-4bfd-adce-1d04ce9ffca6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "447021ce-553a-4bfd-adce-1d04ce9ffca6" (UID: "447021ce-553a-4bfd-adce-1d04ce9ffca6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 23:57:40.684779 systemd[1]: var-lib-kubelet-pods-447021ce\x2d553a\x2d4bfd\x2dadce\x2d1d04ce9ffca6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 24 23:57:40.684928 kubelet[3462]: I0424 23:57:40.684903 3462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/447021ce-553a-4bfd-adce-1d04ce9ffca6-kube-api-access-j4tj4" (OuterVolumeSpecName: "kube-api-access-j4tj4") pod "447021ce-553a-4bfd-adce-1d04ce9ffca6" (UID: "447021ce-553a-4bfd-adce-1d04ce9ffca6"). InnerVolumeSpecName "kube-api-access-j4tj4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 23:57:40.689472 systemd[1]: var-lib-kubelet-pods-447021ce\x2d553a\x2d4bfd\x2dadce\x2d1d04ce9ffca6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj4tj4.mount: Deactivated successfully. 
Apr 24 23:57:40.776455 kubelet[3462]: I0424 23:57:40.776325 3462 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j4tj4\" (UniqueName: \"kubernetes.io/projected/447021ce-553a-4bfd-adce-1d04ce9ffca6-kube-api-access-j4tj4\") on node \"ci-4081.3.6-n-bfbb2fd0ff\" DevicePath \"\""
Apr 24 23:57:40.776455 kubelet[3462]: I0424 23:57:40.776363 3462 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/447021ce-553a-4bfd-adce-1d04ce9ffca6-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-bfbb2fd0ff\" DevicePath \"\""
Apr 24 23:57:40.776455 kubelet[3462]: I0424 23:57:40.776378 3462 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/447021ce-553a-4bfd-adce-1d04ce9ffca6-whisker-ca-bundle\") on node \"ci-4081.3.6-n-bfbb2fd0ff\" DevicePath \"\""
Apr 24 23:57:40.776455 kubelet[3462]: I0424 23:57:40.776392 3462 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/447021ce-553a-4bfd-adce-1d04ce9ffca6-nginx-config\") on node \"ci-4081.3.6-n-bfbb2fd0ff\" DevicePath \"\""
Apr 24 23:57:41.414768 kernel: calico-node[4859]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Apr 24 23:57:41.684528 kubelet[3462]: I0424 23:57:41.684093 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/67a293fe-5b24-42e1-b29c-97896c569548-nginx-config\") pod \"whisker-7cb595d854-bhdxm\" (UID: \"67a293fe-5b24-42e1-b29c-97896c569548\") " pod="calico-system/whisker-7cb595d854-bhdxm"
Apr 24 23:57:41.684528 kubelet[3462]: I0424 23:57:41.684304 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/67a293fe-5b24-42e1-b29c-97896c569548-whisker-backend-key-pair\") pod \"whisker-7cb595d854-bhdxm\" (UID: \"67a293fe-5b24-42e1-b29c-97896c569548\") " pod="calico-system/whisker-7cb595d854-bhdxm"
Apr 24 23:57:41.684528 kubelet[3462]: I0424 23:57:41.684400 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67a293fe-5b24-42e1-b29c-97896c569548-whisker-ca-bundle\") pod \"whisker-7cb595d854-bhdxm\" (UID: \"67a293fe-5b24-42e1-b29c-97896c569548\") " pod="calico-system/whisker-7cb595d854-bhdxm"
Apr 24 23:57:41.684528 kubelet[3462]: I0424 23:57:41.684427 3462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnggs\" (UniqueName: \"kubernetes.io/projected/67a293fe-5b24-42e1-b29c-97896c569548-kube-api-access-nnggs\") pod \"whisker-7cb595d854-bhdxm\" (UID: \"67a293fe-5b24-42e1-b29c-97896c569548\") " pod="calico-system/whisker-7cb595d854-bhdxm"
Apr 24 23:57:41.910282 containerd[1846]: time="2026-04-24T23:57:41.909862290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cb595d854-bhdxm,Uid:67a293fe-5b24-42e1-b29c-97896c569548,Namespace:calico-system,Attempt:0,}"
Apr 24 23:57:42.146829 containerd[1846]: time="2026-04-24T23:57:42.146783122Z" level=info msg="StopPodSandbox for \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\""
Apr 24 23:57:42.147996 kubelet[3462]: I0424 23:57:42.147767 3462 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="447021ce-553a-4bfd-adce-1d04ce9ffca6" path="/var/lib/kubelet/pods/447021ce-553a-4bfd-adce-1d04ce9ffca6/volumes"
Apr 24 23:57:42.176610 systemd-networkd[1416]: calibfd39758bb3: Link UP
Apr 24 23:57:42.191820 systemd-networkd[1416]: calibfd39758bb3: Gained carrier
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.012 [INFO][4918] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--7cb595d854--bhdxm-eth0 whisker-7cb595d854- calico-system 67a293fe-5b24-42e1-b29c-97896c569548 987 0 2026-04-24 23:57:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7cb595d854 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-bfbb2fd0ff whisker-7cb595d854-bhdxm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibfd39758bb3 [] [] }} ContainerID="e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" Namespace="calico-system" Pod="whisker-7cb595d854-bhdxm" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--7cb595d854--bhdxm-"
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.012 [INFO][4918] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" Namespace="calico-system" Pod="whisker-7cb595d854-bhdxm" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--7cb595d854--bhdxm-eth0"
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.044 [INFO][4938] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" HandleID="k8s-pod-network.e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--7cb595d854--bhdxm-eth0"
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.057 [INFO][4938] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" HandleID="k8s-pod-network.e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--7cb595d854--bhdxm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efe60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-bfbb2fd0ff", "pod":"whisker-7cb595d854-bhdxm", "timestamp":"2026-04-24 23:57:42.044734895 +0000 UTC"}, Hostname:"ci-4081.3.6-n-bfbb2fd0ff", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000f6000)}
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.057 [INFO][4938] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.057 [INFO][4938] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.057 [INFO][4938] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-bfbb2fd0ff'
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.060 [INFO][4938] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" host="ci-4081.3.6-n-bfbb2fd0ff"
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.066 [INFO][4938] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-bfbb2fd0ff"
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.071 [INFO][4938] ipam/ipam.go 526: Trying affinity for 192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff"
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.074 [INFO][4938] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff"
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.077 [INFO][4938] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff"
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.077 [INFO][4938] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" host="ci-4081.3.6-n-bfbb2fd0ff"
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.078 [INFO][4938] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.085 [INFO][4938] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" host="ci-4081.3.6-n-bfbb2fd0ff"
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.099 [INFO][4938] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.26.1/26] block=192.168.26.0/26 handle="k8s-pod-network.e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" host="ci-4081.3.6-n-bfbb2fd0ff"
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.100 [INFO][4938] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.1/26] handle="k8s-pod-network.e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" host="ci-4081.3.6-n-bfbb2fd0ff"
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.101 [INFO][4938] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:57:42.241432 containerd[1846]: 2026-04-24 23:57:42.101 [INFO][4938] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.1/26] IPv6=[] ContainerID="e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" HandleID="k8s-pod-network.e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--7cb595d854--bhdxm-eth0"
Apr 24 23:57:42.242401 containerd[1846]: 2026-04-24 23:57:42.110 [INFO][4918] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" Namespace="calico-system" Pod="whisker-7cb595d854-bhdxm" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--7cb595d854--bhdxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--7cb595d854--bhdxm-eth0", GenerateName:"whisker-7cb595d854-", Namespace:"calico-system", SelfLink:"", UID:"67a293fe-5b24-42e1-b29c-97896c569548", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cb595d854", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"", Pod:"whisker-7cb595d854-bhdxm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibfd39758bb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:57:42.242401 containerd[1846]: 2026-04-24 23:57:42.110 [INFO][4918] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.1/32] ContainerID="e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" Namespace="calico-system" Pod="whisker-7cb595d854-bhdxm" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--7cb595d854--bhdxm-eth0"
Apr 24 23:57:42.242401 containerd[1846]: 2026-04-24 23:57:42.110 [INFO][4918] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibfd39758bb3 ContainerID="e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" Namespace="calico-system" Pod="whisker-7cb595d854-bhdxm" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--7cb595d854--bhdxm-eth0"
Apr 24 23:57:42.242401 containerd[1846]: 2026-04-24 23:57:42.201 [INFO][4918] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" Namespace="calico-system" Pod="whisker-7cb595d854-bhdxm" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--7cb595d854--bhdxm-eth0"
Apr 24 23:57:42.242401 containerd[1846]: 2026-04-24 23:57:42.206 [INFO][4918] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" Namespace="calico-system" Pod="whisker-7cb595d854-bhdxm" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--7cb595d854--bhdxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--7cb595d854--bhdxm-eth0", GenerateName:"whisker-7cb595d854-", Namespace:"calico-system", SelfLink:"", UID:"67a293fe-5b24-42e1-b29c-97896c569548", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cb595d854", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544", Pod:"whisker-7cb595d854-bhdxm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.26.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibfd39758bb3", MAC:"ce:2c:02:63:f7:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 23:57:42.242401 containerd[1846]: 2026-04-24 23:57:42.235 [INFO][4918] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544" Namespace="calico-system" Pod="whisker-7cb595d854-bhdxm" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--7cb595d854--bhdxm-eth0"
Apr 24 23:57:42.297933 systemd-networkd[1416]: vxlan.calico: Link UP
Apr 24 23:57:42.298008 systemd-networkd[1416]: vxlan.calico: Gained carrier
Apr 24 23:57:42.342809 containerd[1846]: time="2026-04-24T23:57:42.319333901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:57:42.342809 containerd[1846]: time="2026-04-24T23:57:42.319388701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:57:42.342809 containerd[1846]: time="2026-04-24T23:57:42.319424500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:57:42.342809 containerd[1846]: time="2026-04-24T23:57:42.319541700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:57:42.475193 containerd[1846]: 2026-04-24 23:57:42.419 [WARNING][4981] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--5779477b45--vkzjk-eth0"
Apr 24 23:57:42.475193 containerd[1846]: 2026-04-24 23:57:42.420 [INFO][4981] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941"
Apr 24 23:57:42.475193 containerd[1846]: 2026-04-24 23:57:42.420 [INFO][4981] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" iface="eth0" netns=""
Apr 24 23:57:42.475193 containerd[1846]: 2026-04-24 23:57:42.420 [INFO][4981] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941"
Apr 24 23:57:42.475193 containerd[1846]: 2026-04-24 23:57:42.420 [INFO][4981] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941"
Apr 24 23:57:42.475193 containerd[1846]: 2026-04-24 23:57:42.459 [INFO][5032] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" HandleID="k8s-pod-network.6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--5779477b45--vkzjk-eth0"
Apr 24 23:57:42.475193 containerd[1846]: 2026-04-24 23:57:42.459 [INFO][5032] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:57:42.475193 containerd[1846]: 2026-04-24 23:57:42.459 [INFO][5032] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:57:42.475193 containerd[1846]: 2026-04-24 23:57:42.469 [WARNING][5032] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" HandleID="k8s-pod-network.6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--5779477b45--vkzjk-eth0"
Apr 24 23:57:42.475193 containerd[1846]: 2026-04-24 23:57:42.469 [INFO][5032] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" HandleID="k8s-pod-network.6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--5779477b45--vkzjk-eth0"
Apr 24 23:57:42.475193 containerd[1846]: 2026-04-24 23:57:42.471 [INFO][5032] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:57:42.475193 containerd[1846]: 2026-04-24 23:57:42.473 [INFO][4981] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941"
Apr 24 23:57:42.476655 containerd[1846]: time="2026-04-24T23:57:42.475798690Z" level=info msg="TearDown network for sandbox \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\" successfully"
Apr 24 23:57:42.476655 containerd[1846]: time="2026-04-24T23:57:42.475829190Z" level=info msg="StopPodSandbox for \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\" returns successfully"
Apr 24 23:57:42.476655 containerd[1846]: time="2026-04-24T23:57:42.476347990Z" level=info msg="RemovePodSandbox for \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\""
Apr 24 23:57:42.476655 containerd[1846]: time="2026-04-24T23:57:42.476378790Z" level=info msg="Forcibly stopping sandbox \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\""
Apr 24 23:57:42.560594 containerd[1846]: time="2026-04-24T23:57:42.560341130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cb595d854-bhdxm,Uid:67a293fe-5b24-42e1-b29c-97896c569548,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544\""
Apr 24 23:57:42.566368 containerd[1846]: time="2026-04-24T23:57:42.566128426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Apr 24 23:57:42.652005 containerd[1846]: 2026-04-24 23:57:42.603 [WARNING][5047] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--5779477b45--vkzjk-eth0"
Apr 24 23:57:42.652005 containerd[1846]: 2026-04-24 23:57:42.603 [INFO][5047] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941"
Apr 24 23:57:42.652005 containerd[1846]: 2026-04-24 23:57:42.603 [INFO][5047] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" iface="eth0" netns=""
Apr 24 23:57:42.652005 containerd[1846]: 2026-04-24 23:57:42.603 [INFO][5047] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941"
Apr 24 23:57:42.652005 containerd[1846]: 2026-04-24 23:57:42.603 [INFO][5047] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941"
Apr 24 23:57:42.652005 containerd[1846]: 2026-04-24 23:57:42.635 [INFO][5061] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" HandleID="k8s-pod-network.6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--5779477b45--vkzjk-eth0"
Apr 24 23:57:42.652005 containerd[1846]: 2026-04-24 23:57:42.635 [INFO][5061] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:57:42.652005 containerd[1846]: 2026-04-24 23:57:42.637 [INFO][5061] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:57:42.652005 containerd[1846]: 2026-04-24 23:57:42.647 [WARNING][5061] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" HandleID="k8s-pod-network.6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--5779477b45--vkzjk-eth0"
Apr 24 23:57:42.652005 containerd[1846]: 2026-04-24 23:57:42.647 [INFO][5061] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" HandleID="k8s-pod-network.6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-whisker--5779477b45--vkzjk-eth0"
Apr 24 23:57:42.652005 containerd[1846]: 2026-04-24 23:57:42.648 [INFO][5061] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:57:42.652005 containerd[1846]: 2026-04-24 23:57:42.650 [INFO][5047] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941"
Apr 24 23:57:42.653768 containerd[1846]: time="2026-04-24T23:57:42.652542965Z" level=info msg="TearDown network for sandbox \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\" successfully"
Apr 24 23:57:42.659396 containerd[1846]: time="2026-04-24T23:57:42.659348360Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 24 23:57:42.659593 containerd[1846]: time="2026-04-24T23:57:42.659570060Z" level=info msg="RemovePodSandbox \"6c3494293c7bf7b80251d6ad5092c7a24bd17d6f0948b206120706da263ad941\" returns successfully"
Apr 24 23:57:43.570442 systemd-networkd[1416]: vxlan.calico: Gained IPv6LL
Apr 24 23:57:43.633902 systemd-networkd[1416]: calibfd39758bb3: Gained IPv6LL
Apr 24 23:57:43.988712 containerd[1846]: time="2026-04-24T23:57:43.988654321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:57:43.991032 containerd[1846]: time="2026-04-24T23:57:43.990865519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Apr 24 23:57:43.993804 containerd[1846]: time="2026-04-24T23:57:43.993767817Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:57:43.997869 containerd[1846]: time="2026-04-24T23:57:43.997626114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:57:43.998392 containerd[1846]: time="2026-04-24T23:57:43.998336114Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.432169688s"
Apr 24 23:57:43.998392 containerd[1846]: time="2026-04-24T23:57:43.998377914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Apr 24 23:57:44.006352 containerd[1846]: time="2026-04-24T23:57:44.006312608Z" level=info msg="CreateContainer within sandbox \"e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Apr 24 23:57:44.035038 containerd[1846]: time="2026-04-24T23:57:44.034998688Z" level=info msg="CreateContainer within sandbox \"e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ecea10e462da2368df981617f479aea0030028f090c9b811fab8837f6965a345\""
Apr 24 23:57:44.036191 containerd[1846]: time="2026-04-24T23:57:44.035486488Z" level=info msg="StartContainer for \"ecea10e462da2368df981617f479aea0030028f090c9b811fab8837f6965a345\""
Apr 24 23:57:44.121510 containerd[1846]: time="2026-04-24T23:57:44.121462627Z" level=info msg="StartContainer for \"ecea10e462da2368df981617f479aea0030028f090c9b811fab8837f6965a345\" returns successfully"
Apr 24 23:57:44.123728 containerd[1846]: time="2026-04-24T23:57:44.123688625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Apr 24 23:57:46.136099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1672463511.mount: Deactivated successfully.
Apr 24 23:57:46.182155 containerd[1846]: time="2026-04-24T23:57:46.182098763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:57:46.184807 containerd[1846]: time="2026-04-24T23:57:46.184622401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Apr 24 23:57:46.187220 containerd[1846]: time="2026-04-24T23:57:46.187159640Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:57:46.192204 containerd[1846]: time="2026-04-24T23:57:46.191400905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:57:46.192204 containerd[1846]: time="2026-04-24T23:57:46.192076815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.06833839s"
Apr 24 23:57:46.192204 containerd[1846]: time="2026-04-24T23:57:46.192111216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Apr 24 23:57:46.199418 containerd[1846]: time="2026-04-24T23:57:46.199391227Z" level=info msg="CreateContainer within sandbox \"e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 24 23:57:46.229295 containerd[1846]: time="2026-04-24T23:57:46.229258884Z" level=info msg="CreateContainer within sandbox \"e7b9ee6405c0c1e15c92a6861b57c48618b4869983b11e3e15408c7bd8f2b544\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"92a687441813df6935031fbae115a86f9f2cec97d8aa4b35cd3d48d07b12c57b\""
Apr 24 23:57:46.230155 containerd[1846]: time="2026-04-24T23:57:46.230122298Z" level=info msg="StartContainer for \"92a687441813df6935031fbae115a86f9f2cec97d8aa4b35cd3d48d07b12c57b\""
Apr 24 23:57:46.307150 containerd[1846]: time="2026-04-24T23:57:46.307102976Z" level=info msg="StartContainer for \"92a687441813df6935031fbae115a86f9f2cec97d8aa4b35cd3d48d07b12c57b\" returns successfully"
Apr 24 23:57:46.501260 kubelet[3462]: I0424 23:57:46.500265 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7cb595d854-bhdxm" podStartSLOduration=1.86962223 podStartE2EDuration="5.500244931s" podCreationTimestamp="2026-04-24 23:57:41 +0000 UTC" firstStartedPulling="2026-04-24 23:57:42.562391929 +0000 UTC m=+60.523519437" lastFinishedPulling="2026-04-24 23:57:46.19301453 +0000 UTC m=+64.154142138" observedRunningTime="2026-04-24 23:57:46.498732508 +0000 UTC m=+64.459860016" watchObservedRunningTime="2026-04-24 23:57:46.500244931 +0000 UTC m=+64.461372439"
Apr 24 23:57:46.850190 systemd[1]: run-containerd-runc-k8s.io-92a687441813df6935031fbae115a86f9f2cec97d8aa4b35cd3d48d07b12c57b-runc.3m25QV.mount: Deactivated successfully.
Apr 24 23:57:51.146549 containerd[1846]: time="2026-04-24T23:57:51.146135039Z" level=info msg="StopPodSandbox for \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\""
Apr 24 23:57:51.234280 containerd[1846]: 2026-04-24 23:57:51.203 [INFO][5246] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40"
Apr 24 23:57:51.234280 containerd[1846]: 2026-04-24 23:57:51.203 [INFO][5246] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" iface="eth0" netns="/var/run/netns/cni-f01c7b13-9efc-ad83-b11a-f8f0b233bdb5"
Apr 24 23:57:51.234280 containerd[1846]: 2026-04-24 23:57:51.204 [INFO][5246] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" iface="eth0" netns="/var/run/netns/cni-f01c7b13-9efc-ad83-b11a-f8f0b233bdb5"
Apr 24 23:57:51.234280 containerd[1846]: 2026-04-24 23:57:51.204 [INFO][5246] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" iface="eth0" netns="/var/run/netns/cni-f01c7b13-9efc-ad83-b11a-f8f0b233bdb5"
Apr 24 23:57:51.234280 containerd[1846]: 2026-04-24 23:57:51.204 [INFO][5246] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40"
Apr 24 23:57:51.234280 containerd[1846]: 2026-04-24 23:57:51.204 [INFO][5246] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40"
Apr 24 23:57:51.234280 containerd[1846]: 2026-04-24 23:57:51.224 [INFO][5254] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" HandleID="k8s-pod-network.5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0"
Apr 24 23:57:51.234280 containerd[1846]: 2026-04-24 23:57:51.224 [INFO][5254] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:57:51.234280 containerd[1846]: 2026-04-24 23:57:51.224 [INFO][5254] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:57:51.234280 containerd[1846]: 2026-04-24 23:57:51.230 [WARNING][5254] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" HandleID="k8s-pod-network.5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0"
Apr 24 23:57:51.234280 containerd[1846]: 2026-04-24 23:57:51.230 [INFO][5254] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" HandleID="k8s-pod-network.5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0"
Apr 24 23:57:51.234280 containerd[1846]: 2026-04-24 23:57:51.231 [INFO][5254] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 23:57:51.234280 containerd[1846]: 2026-04-24 23:57:51.233 [INFO][5246] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40"
Apr 24 23:57:51.236113 containerd[1846]: time="2026-04-24T23:57:51.236060856Z" level=info msg="TearDown network for sandbox \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\" successfully"
Apr 24 23:57:51.236113 containerd[1846]: time="2026-04-24T23:57:51.236099456Z" level=info msg="StopPodSandbox for \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\" returns successfully"
Apr 24 23:57:51.238427 containerd[1846]: time="2026-04-24T23:57:51.238388690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jsmlw,Uid:222e4640-e0ff-4078-9fa0-975f8f1c4ffa,Namespace:kube-system,Attempt:1,}"
Apr 24 23:57:51.240923 systemd[1]: run-netns-cni\x2df01c7b13\x2d9efc\x2dad83\x2db11a\x2df8f0b233bdb5.mount: Deactivated successfully.
Apr 24 23:57:51.377997 systemd-networkd[1416]: calia2f76222e51: Link UP
Apr 24 23:57:51.378207 systemd-networkd[1416]: calia2f76222e51: Gained carrier
Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.310 [INFO][5261] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0 coredns-674b8bbfcf- kube-system 222e4640-e0ff-4078-9fa0-975f8f1c4ffa 1029 0 2026-04-24 23:56:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-bfbb2fd0ff coredns-674b8bbfcf-jsmlw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia2f76222e51 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jsmlw" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-"
Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.310 [INFO][5261] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jsmlw" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0"
Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.336 [INFO][5272] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" HandleID="k8s-pod-network.614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0"
Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.346 [INFO][5272] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" HandleID="k8s-pod-network.614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef830), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-bfbb2fd0ff", "pod":"coredns-674b8bbfcf-jsmlw", "timestamp":"2026-04-24 23:57:51.336598629 +0000 UTC"}, Hostname:"ci-4081.3.6-n-bfbb2fd0ff", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002e11e0)}
Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.346 [INFO][5272] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.346 [INFO][5272] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.346 [INFO][5272] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-bfbb2fd0ff'
Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.348 [INFO][5272] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" host="ci-4081.3.6-n-bfbb2fd0ff"
Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.351 [INFO][5272] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-bfbb2fd0ff"
Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.355 [INFO][5272] ipam/ipam.go 526: Trying affinity for 192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff"
Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.357 [INFO][5272] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff"
Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.359 [INFO][5272] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff"
Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.359 [INFO][5272] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" host="ci-4081.3.6-n-bfbb2fd0ff"
Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.360 [INFO][5272] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e
Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.366 [INFO][5272] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" host="ci-4081.3.6-n-bfbb2fd0ff"
Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.372 [INFO][5272] ipam/ipam.go 1288: Successfully
claimed IPs: [192.168.26.2/26] block=192.168.26.0/26 handle="k8s-pod-network.614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.372 [INFO][5272] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.2/26] handle="k8s-pod-network.614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.372 [INFO][5272] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:51.400279 containerd[1846]: 2026-04-24 23:57:51.372 [INFO][5272] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.2/26] IPv6=[] ContainerID="614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" HandleID="k8s-pod-network.614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0" Apr 24 23:57:51.401940 containerd[1846]: 2026-04-24 23:57:51.374 [INFO][5261] cni-plugin/k8s.go 418: Populated endpoint ContainerID="614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jsmlw" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"222e4640-e0ff-4078-9fa0-975f8f1c4ffa", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"", Pod:"coredns-674b8bbfcf-jsmlw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2f76222e51", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:51.401940 containerd[1846]: 2026-04-24 23:57:51.374 [INFO][5261] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.2/32] ContainerID="614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jsmlw" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0" Apr 24 23:57:51.401940 containerd[1846]: 2026-04-24 23:57:51.374 [INFO][5261] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2f76222e51 ContainerID="614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jsmlw" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0" Apr 24 23:57:51.401940 containerd[1846]: 2026-04-24 23:57:51.377 [INFO][5261] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jsmlw" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0" Apr 24 23:57:51.401940 containerd[1846]: 2026-04-24 23:57:51.377 [INFO][5261] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jsmlw" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"222e4640-e0ff-4078-9fa0-975f8f1c4ffa", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e", Pod:"coredns-674b8bbfcf-jsmlw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2f76222e51", MAC:"96:ab:ef:f2:18:bf", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:51.401940 containerd[1846]: 2026-04-24 23:57:51.396 [INFO][5261] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jsmlw" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0" Apr 24 23:57:51.427067 containerd[1846]: time="2026-04-24T23:57:51.426971252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:57:51.427067 containerd[1846]: time="2026-04-24T23:57:51.427023553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:57:51.427067 containerd[1846]: time="2026-04-24T23:57:51.427038253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:51.427401 containerd[1846]: time="2026-04-24T23:57:51.427147155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:51.513219 containerd[1846]: time="2026-04-24T23:57:51.513179815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jsmlw,Uid:222e4640-e0ff-4078-9fa0-975f8f1c4ffa,Namespace:kube-system,Attempt:1,} returns sandbox id \"614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e\"" Apr 24 23:57:51.522776 containerd[1846]: time="2026-04-24T23:57:51.522537752Z" level=info msg="CreateContainer within sandbox \"614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:57:51.552603 containerd[1846]: time="2026-04-24T23:57:51.552555692Z" level=info msg="CreateContainer within sandbox \"614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"15954f367bd4a31d87f8c3b2b391033be1c3e94b781c585b7402b1d7894483d5\"" Apr 24 23:57:51.553460 containerd[1846]: time="2026-04-24T23:57:51.553426604Z" level=info msg="StartContainer for \"15954f367bd4a31d87f8c3b2b391033be1c3e94b781c585b7402b1d7894483d5\"" Apr 24 23:57:51.607854 containerd[1846]: time="2026-04-24T23:57:51.607802301Z" level=info msg="StartContainer for \"15954f367bd4a31d87f8c3b2b391033be1c3e94b781c585b7402b1d7894483d5\" returns successfully" Apr 24 23:57:52.149697 containerd[1846]: time="2026-04-24T23:57:52.149103930Z" level=info msg="StopPodSandbox for \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\"" Apr 24 23:57:52.149697 containerd[1846]: time="2026-04-24T23:57:52.149255832Z" level=info msg="StopPodSandbox for \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\"" Apr 24 23:57:52.287974 containerd[1846]: 2026-04-24 23:57:52.221 [INFO][5398] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Apr 24 23:57:52.287974 containerd[1846]: 2026-04-24 23:57:52.221 
[INFO][5398] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" iface="eth0" netns="/var/run/netns/cni-6f34729a-7251-5480-a618-7b330dc3d783" Apr 24 23:57:52.287974 containerd[1846]: 2026-04-24 23:57:52.223 [INFO][5398] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" iface="eth0" netns="/var/run/netns/cni-6f34729a-7251-5480-a618-7b330dc3d783" Apr 24 23:57:52.287974 containerd[1846]: 2026-04-24 23:57:52.223 [INFO][5398] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" iface="eth0" netns="/var/run/netns/cni-6f34729a-7251-5480-a618-7b330dc3d783" Apr 24 23:57:52.287974 containerd[1846]: 2026-04-24 23:57:52.223 [INFO][5398] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Apr 24 23:57:52.287974 containerd[1846]: 2026-04-24 23:57:52.224 [INFO][5398] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Apr 24 23:57:52.287974 containerd[1846]: 2026-04-24 23:57:52.259 [INFO][5417] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" HandleID="k8s-pod-network.f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" Apr 24 23:57:52.287974 containerd[1846]: 2026-04-24 23:57:52.259 [INFO][5417] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:52.287974 containerd[1846]: 2026-04-24 23:57:52.260 [INFO][5417] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:57:52.287974 containerd[1846]: 2026-04-24 23:57:52.268 [WARNING][5417] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" HandleID="k8s-pod-network.f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" Apr 24 23:57:52.287974 containerd[1846]: 2026-04-24 23:57:52.269 [INFO][5417] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" HandleID="k8s-pod-network.f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" Apr 24 23:57:52.287974 containerd[1846]: 2026-04-24 23:57:52.278 [INFO][5417] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:52.287974 containerd[1846]: 2026-04-24 23:57:52.283 [INFO][5398] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Apr 24 23:57:52.288878 containerd[1846]: time="2026-04-24T23:57:52.288721275Z" level=info msg="TearDown network for sandbox \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\" successfully" Apr 24 23:57:52.288878 containerd[1846]: time="2026-04-24T23:57:52.288783376Z" level=info msg="StopPodSandbox for \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\" returns successfully" Apr 24 23:57:52.291829 containerd[1846]: time="2026-04-24T23:57:52.290042194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f5g7p,Uid:7eed60e0-dfe6-44af-9c18-1eee2edda56b,Namespace:kube-system,Attempt:1,}" Apr 24 23:57:52.291829 containerd[1846]: 2026-04-24 23:57:52.216 [INFO][5399] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Apr 24 23:57:52.291829 containerd[1846]: 2026-04-24 23:57:52.216 [INFO][5399] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" iface="eth0" netns="/var/run/netns/cni-9f26a8b9-25f5-c7a7-fce5-531a80900681" Apr 24 23:57:52.291829 containerd[1846]: 2026-04-24 23:57:52.217 [INFO][5399] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" iface="eth0" netns="/var/run/netns/cni-9f26a8b9-25f5-c7a7-fce5-531a80900681" Apr 24 23:57:52.291829 containerd[1846]: 2026-04-24 23:57:52.220 [INFO][5399] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" iface="eth0" netns="/var/run/netns/cni-9f26a8b9-25f5-c7a7-fce5-531a80900681" Apr 24 23:57:52.291829 containerd[1846]: 2026-04-24 23:57:52.220 [INFO][5399] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Apr 24 23:57:52.291829 containerd[1846]: 2026-04-24 23:57:52.220 [INFO][5399] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Apr 24 23:57:52.291829 containerd[1846]: 2026-04-24 23:57:52.268 [INFO][5412] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" HandleID="k8s-pod-network.e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" Apr 24 23:57:52.291829 containerd[1846]: 2026-04-24 23:57:52.268 [INFO][5412] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:52.291829 containerd[1846]: 2026-04-24 23:57:52.278 [INFO][5412] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:57:52.291829 containerd[1846]: 2026-04-24 23:57:52.284 [WARNING][5412] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" HandleID="k8s-pod-network.e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" Apr 24 23:57:52.291829 containerd[1846]: 2026-04-24 23:57:52.284 [INFO][5412] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" HandleID="k8s-pod-network.e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" Apr 24 23:57:52.291829 containerd[1846]: 2026-04-24 23:57:52.285 [INFO][5412] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:52.291829 containerd[1846]: 2026-04-24 23:57:52.287 [INFO][5399] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Apr 24 23:57:52.292510 containerd[1846]: time="2026-04-24T23:57:52.292045423Z" level=info msg="TearDown network for sandbox \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\" successfully" Apr 24 23:57:52.292510 containerd[1846]: time="2026-04-24T23:57:52.292068824Z" level=info msg="StopPodSandbox for \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\" returns successfully" Apr 24 23:57:52.294922 containerd[1846]: time="2026-04-24T23:57:52.292620132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4999c544-clwfk,Uid:5f198955-c9d4-4104-9f87-079239cf8c8a,Namespace:calico-system,Attempt:1,}" Apr 24 23:57:52.295724 systemd[1]: run-netns-cni\x2d6f34729a\x2d7251\x2d5480\x2da618\x2d7b330dc3d783.mount: Deactivated successfully. Apr 24 23:57:52.302343 systemd[1]: run-netns-cni\x2d9f26a8b9\x2d25f5\x2dc7a7\x2dfce5\x2d531a80900681.mount: Deactivated successfully. 
Apr 24 23:57:52.484281 systemd-networkd[1416]: calid0adbdf9129: Link UP Apr 24 23:57:52.486981 systemd-networkd[1416]: calid0adbdf9129: Gained carrier Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.401 [INFO][5425] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0 coredns-674b8bbfcf- kube-system 7eed60e0-dfe6-44af-9c18-1eee2edda56b 1041 0 2026-04-24 23:56:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-bfbb2fd0ff coredns-674b8bbfcf-f5g7p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid0adbdf9129 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-f5g7p" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-" Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.401 [INFO][5425] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-f5g7p" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.441 [INFO][5448] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" HandleID="k8s-pod-network.4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.451 [INFO][5448] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" HandleID="k8s-pod-network.4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002774c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-bfbb2fd0ff", "pod":"coredns-674b8bbfcf-f5g7p", "timestamp":"2026-04-24 23:57:52.441202208 +0000 UTC"}, Hostname:"ci-4081.3.6-n-bfbb2fd0ff", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001f91e0)} Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.451 [INFO][5448] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.451 [INFO][5448] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.451 [INFO][5448] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-bfbb2fd0ff' Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.453 [INFO][5448] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.457 [INFO][5448] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.463 [INFO][5448] ipam/ipam.go 526: Trying affinity for 192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.464 [INFO][5448] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.466 [INFO][5448] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.466 [INFO][5448] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.467 [INFO][5448] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.472 [INFO][5448] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.478 [INFO][5448] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.26.3/26] block=192.168.26.0/26 handle="k8s-pod-network.4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.478 [INFO][5448] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.3/26] handle="k8s-pod-network.4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.478 [INFO][5448] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:52.510520 containerd[1846]: 2026-04-24 23:57:52.478 [INFO][5448] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.3/26] IPv6=[] ContainerID="4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" HandleID="k8s-pod-network.4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" Apr 24 23:57:52.516461 containerd[1846]: 2026-04-24 23:57:52.480 [INFO][5425] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-f5g7p" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7eed60e0-dfe6-44af-9c18-1eee2edda56b", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"", Pod:"coredns-674b8bbfcf-f5g7p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid0adbdf9129", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:52.516461 containerd[1846]: 2026-04-24 23:57:52.481 [INFO][5425] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.3/32] ContainerID="4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-f5g7p" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" Apr 24 23:57:52.516461 containerd[1846]: 2026-04-24 23:57:52.481 [INFO][5425] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0adbdf9129 ContainerID="4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-f5g7p" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" Apr 24 23:57:52.516461 containerd[1846]: 2026-04-24 23:57:52.487 [INFO][5425] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-f5g7p" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" Apr 24 23:57:52.516461 containerd[1846]: 2026-04-24 23:57:52.487 [INFO][5425] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-f5g7p" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7eed60e0-dfe6-44af-9c18-1eee2edda56b", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe", Pod:"coredns-674b8bbfcf-f5g7p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid0adbdf9129", MAC:"b2:f6:7f:5b:00:fb", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:52.516461 containerd[1846]: 2026-04-24 23:57:52.501 [INFO][5425] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe" Namespace="kube-system" Pod="coredns-674b8bbfcf-f5g7p" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" Apr 24 23:57:52.559708 kubelet[3462]: I0424 23:57:52.559004 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jsmlw" podStartSLOduration=63.558960833 podStartE2EDuration="1m3.558960833s" podCreationTimestamp="2026-04-24 23:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:57:52.533664462 +0000 UTC m=+70.494791970" watchObservedRunningTime="2026-04-24 23:57:52.558960833 +0000 UTC m=+70.520088341" Apr 24 23:57:52.566340 containerd[1846]: time="2026-04-24T23:57:52.565237025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:57:52.566340 containerd[1846]: time="2026-04-24T23:57:52.565724232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:57:52.566340 containerd[1846]: time="2026-04-24T23:57:52.565960536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:52.566340 containerd[1846]: time="2026-04-24T23:57:52.566077737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:52.661304 systemd-networkd[1416]: cali36c2de670ac: Link UP Apr 24 23:57:52.663382 systemd-networkd[1416]: cali36c2de670ac: Gained carrier Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.407 [INFO][5433] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0 calico-apiserver-7b4999c544- calico-system 5f198955-c9d4-4104-9f87-079239cf8c8a 1040 0 2026-04-24 23:57:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b4999c544 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-bfbb2fd0ff calico-apiserver-7b4999c544-clwfk eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali36c2de670ac [] [] }} ContainerID="2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" Namespace="calico-system" Pod="calico-apiserver-7b4999c544-clwfk" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-" Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.407 [INFO][5433] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" Namespace="calico-system" Pod="calico-apiserver-7b4999c544-clwfk" 
WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.442 [INFO][5453] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" HandleID="k8s-pod-network.2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.451 [INFO][5453] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" HandleID="k8s-pod-network.2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-bfbb2fd0ff", "pod":"calico-apiserver-7b4999c544-clwfk", "timestamp":"2026-04-24 23:57:52.442596929 +0000 UTC"}, Hostname:"ci-4081.3.6-n-bfbb2fd0ff", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001874a0)} Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.451 [INFO][5453] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.479 [INFO][5453] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.479 [INFO][5453] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-bfbb2fd0ff' Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.560 [INFO][5453] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.585 [INFO][5453] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.607 [INFO][5453] ipam/ipam.go 526: Trying affinity for 192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.616 [INFO][5453] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.625 [INFO][5453] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.625 [INFO][5453] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.629 [INFO][5453] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.637 [INFO][5453] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.651 [INFO][5453] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.26.4/26] block=192.168.26.0/26 handle="k8s-pod-network.2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.651 [INFO][5453] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.4/26] handle="k8s-pod-network.2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.651 [INFO][5453] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:52.718957 containerd[1846]: 2026-04-24 23:57:52.651 [INFO][5453] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.4/26] IPv6=[] ContainerID="2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" HandleID="k8s-pod-network.2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" Apr 24 23:57:52.723642 containerd[1846]: 2026-04-24 23:57:52.653 [INFO][5433] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" Namespace="calico-system" Pod="calico-apiserver-7b4999c544-clwfk" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0", GenerateName:"calico-apiserver-7b4999c544-", Namespace:"calico-system", SelfLink:"", UID:"5f198955-c9d4-4104-9f87-079239cf8c8a", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7b4999c544", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"", Pod:"calico-apiserver-7b4999c544-clwfk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali36c2de670ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:52.723642 containerd[1846]: 2026-04-24 23:57:52.653 [INFO][5433] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.4/32] ContainerID="2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" Namespace="calico-system" Pod="calico-apiserver-7b4999c544-clwfk" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" Apr 24 23:57:52.723642 containerd[1846]: 2026-04-24 23:57:52.653 [INFO][5433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36c2de670ac ContainerID="2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" Namespace="calico-system" Pod="calico-apiserver-7b4999c544-clwfk" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" Apr 24 23:57:52.723642 containerd[1846]: 2026-04-24 23:57:52.679 [INFO][5433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" Namespace="calico-system" Pod="calico-apiserver-7b4999c544-clwfk" 
WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" Apr 24 23:57:52.723642 containerd[1846]: 2026-04-24 23:57:52.687 [INFO][5433] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" Namespace="calico-system" Pod="calico-apiserver-7b4999c544-clwfk" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0", GenerateName:"calico-apiserver-7b4999c544-", Namespace:"calico-system", SelfLink:"", UID:"5f198955-c9d4-4104-9f87-079239cf8c8a", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4999c544", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a", Pod:"calico-apiserver-7b4999c544-clwfk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali36c2de670ac", MAC:"02:4f:1e:06:dd:d3", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:52.723642 containerd[1846]: 2026-04-24 23:57:52.711 [INFO][5433] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a" Namespace="calico-system" Pod="calico-apiserver-7b4999c544-clwfk" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" Apr 24 23:57:52.722142 systemd-networkd[1416]: calia2f76222e51: Gained IPv6LL Apr 24 23:57:52.782362 containerd[1846]: time="2026-04-24T23:57:52.780897684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f5g7p,Uid:7eed60e0-dfe6-44af-9c18-1eee2edda56b,Namespace:kube-system,Attempt:1,} returns sandbox id \"4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe\"" Apr 24 23:57:52.793651 containerd[1846]: time="2026-04-24T23:57:52.793476668Z" level=info msg="CreateContainer within sandbox \"4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:57:52.805973 containerd[1846]: time="2026-04-24T23:57:52.805877550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:57:52.806120 containerd[1846]: time="2026-04-24T23:57:52.805955951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:57:52.806120 containerd[1846]: time="2026-04-24T23:57:52.806087153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:52.806281 containerd[1846]: time="2026-04-24T23:57:52.806194454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:52.831447 containerd[1846]: time="2026-04-24T23:57:52.831405324Z" level=info msg="CreateContainer within sandbox \"4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"68d3ef3051e9bfe68e0e5e2fb4182cd23c9e51a1d7deaf47366abfb992410be6\"" Apr 24 23:57:52.832448 containerd[1846]: time="2026-04-24T23:57:52.832327237Z" level=info msg="StartContainer for \"68d3ef3051e9bfe68e0e5e2fb4182cd23c9e51a1d7deaf47366abfb992410be6\"" Apr 24 23:57:52.896856 containerd[1846]: time="2026-04-24T23:57:52.896737881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4999c544-clwfk,Uid:5f198955-c9d4-4104-9f87-079239cf8c8a,Namespace:calico-system,Attempt:1,} returns sandbox id \"2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a\"" Apr 24 23:57:52.898947 containerd[1846]: time="2026-04-24T23:57:52.898713410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:57:52.908159 containerd[1846]: time="2026-04-24T23:57:52.908123747Z" level=info msg="StartContainer for \"68d3ef3051e9bfe68e0e5e2fb4182cd23c9e51a1d7deaf47366abfb992410be6\" returns successfully" Apr 24 23:57:53.146765 containerd[1846]: time="2026-04-24T23:57:53.146052132Z" level=info msg="StopPodSandbox for \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\"" Apr 24 23:57:53.146765 containerd[1846]: time="2026-04-24T23:57:53.146484539Z" level=info msg="StopPodSandbox for \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\"" Apr 24 23:57:53.269117 containerd[1846]: 2026-04-24 23:57:53.219 [INFO][5646] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Apr 24 23:57:53.269117 containerd[1846]: 2026-04-24 23:57:53.220 [INFO][5646] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" iface="eth0" netns="/var/run/netns/cni-72c42754-6c28-017e-7675-a4dafd29def9" Apr 24 23:57:53.269117 containerd[1846]: 2026-04-24 23:57:53.220 [INFO][5646] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" iface="eth0" netns="/var/run/netns/cni-72c42754-6c28-017e-7675-a4dafd29def9" Apr 24 23:57:53.269117 containerd[1846]: 2026-04-24 23:57:53.220 [INFO][5646] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" iface="eth0" netns="/var/run/netns/cni-72c42754-6c28-017e-7675-a4dafd29def9" Apr 24 23:57:53.269117 containerd[1846]: 2026-04-24 23:57:53.220 [INFO][5646] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Apr 24 23:57:53.269117 containerd[1846]: 2026-04-24 23:57:53.220 [INFO][5646] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Apr 24 23:57:53.269117 containerd[1846]: 2026-04-24 23:57:53.251 [INFO][5662] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" HandleID="k8s-pod-network.2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" Apr 24 23:57:53.269117 containerd[1846]: 2026-04-24 23:57:53.251 [INFO][5662] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:53.269117 containerd[1846]: 2026-04-24 23:57:53.251 [INFO][5662] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:57:53.269117 containerd[1846]: 2026-04-24 23:57:53.261 [WARNING][5662] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" HandleID="k8s-pod-network.2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" Apr 24 23:57:53.269117 containerd[1846]: 2026-04-24 23:57:53.261 [INFO][5662] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" HandleID="k8s-pod-network.2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" Apr 24 23:57:53.269117 containerd[1846]: 2026-04-24 23:57:53.264 [INFO][5662] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:53.269117 containerd[1846]: 2026-04-24 23:57:53.266 [INFO][5646] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Apr 24 23:57:53.270483 containerd[1846]: time="2026-04-24T23:57:53.269968547Z" level=info msg="TearDown network for sandbox \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\" successfully" Apr 24 23:57:53.270686 containerd[1846]: time="2026-04-24T23:57:53.270548756Z" level=info msg="StopPodSandbox for \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\" returns successfully" Apr 24 23:57:53.271996 containerd[1846]: time="2026-04-24T23:57:53.271967177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vghcg,Uid:8c5fd00b-3814-4bd0-8192-1d2f719f9517,Namespace:calico-system,Attempt:1,}" Apr 24 23:57:53.278469 systemd[1]: run-netns-cni\x2d72c42754\x2d6c28\x2d017e\x2d7675\x2da4dafd29def9.mount: Deactivated successfully. 
Apr 24 23:57:53.305168 containerd[1846]: 2026-04-24 23:57:53.217 [INFO][5645] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Apr 24 23:57:53.305168 containerd[1846]: 2026-04-24 23:57:53.218 [INFO][5645] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" iface="eth0" netns="/var/run/netns/cni-e052d8e4-679c-60ee-f2ae-08128da835c1" Apr 24 23:57:53.305168 containerd[1846]: 2026-04-24 23:57:53.219 [INFO][5645] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" iface="eth0" netns="/var/run/netns/cni-e052d8e4-679c-60ee-f2ae-08128da835c1" Apr 24 23:57:53.305168 containerd[1846]: 2026-04-24 23:57:53.220 [INFO][5645] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" iface="eth0" netns="/var/run/netns/cni-e052d8e4-679c-60ee-f2ae-08128da835c1" Apr 24 23:57:53.305168 containerd[1846]: 2026-04-24 23:57:53.220 [INFO][5645] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Apr 24 23:57:53.305168 containerd[1846]: 2026-04-24 23:57:53.220 [INFO][5645] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Apr 24 23:57:53.305168 containerd[1846]: 2026-04-24 23:57:53.286 [INFO][5661] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" HandleID="k8s-pod-network.707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" Apr 24 23:57:53.305168 containerd[1846]: 2026-04-24 
23:57:53.286 [INFO][5661] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:53.305168 containerd[1846]: 2026-04-24 23:57:53.286 [INFO][5661] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:57:53.305168 containerd[1846]: 2026-04-24 23:57:53.292 [WARNING][5661] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" HandleID="k8s-pod-network.707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" Apr 24 23:57:53.305168 containerd[1846]: 2026-04-24 23:57:53.292 [INFO][5661] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" HandleID="k8s-pod-network.707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" Apr 24 23:57:53.305168 containerd[1846]: 2026-04-24 23:57:53.293 [INFO][5661] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:53.305168 containerd[1846]: 2026-04-24 23:57:53.298 [INFO][5645] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Apr 24 23:57:53.311036 containerd[1846]: time="2026-04-24T23:57:53.308905818Z" level=info msg="TearDown network for sandbox \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\" successfully" Apr 24 23:57:53.311036 containerd[1846]: time="2026-04-24T23:57:53.308941118Z" level=info msg="StopPodSandbox for \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\" returns successfully" Apr 24 23:57:53.313606 systemd[1]: run-netns-cni\x2de052d8e4\x2d679c\x2d60ee\x2df2ae\x2d08128da835c1.mount: Deactivated successfully. 
Apr 24 23:57:53.321006 containerd[1846]: time="2026-04-24T23:57:53.320972995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-796d4d88bb-v74px,Uid:bab90d29-b4f6-45e4-a59d-c7270debd2c4,Namespace:calico-system,Attempt:1,}" Apr 24 23:57:53.447557 systemd-networkd[1416]: calic48014c72ee: Link UP Apr 24 23:57:53.447839 systemd-networkd[1416]: calic48014c72ee: Gained carrier Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.351 [INFO][5676] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0 csi-node-driver- calico-system 8c5fd00b-3814-4bd0-8192-1d2f719f9517 1064 0 2026-04-24 23:57:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-bfbb2fd0ff csi-node-driver-vghcg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic48014c72ee [] [] }} ContainerID="853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" Namespace="calico-system" Pod="csi-node-driver-vghcg" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-" Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.352 [INFO][5676] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" Namespace="calico-system" Pod="csi-node-driver-vghcg" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.392 [INFO][5687] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" 
HandleID="k8s-pod-network.853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.404 [INFO][5687] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" HandleID="k8s-pod-network.853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fded0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-bfbb2fd0ff", "pod":"csi-node-driver-vghcg", "timestamp":"2026-04-24 23:57:53.392481242 +0000 UTC"}, Hostname:"ci-4081.3.6-n-bfbb2fd0ff", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001866e0)} Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.404 [INFO][5687] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.404 [INFO][5687] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.404 [INFO][5687] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-bfbb2fd0ff' Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.406 [INFO][5687] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.412 [INFO][5687] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.417 [INFO][5687] ipam/ipam.go 526: Trying affinity for 192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.419 [INFO][5687] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.422 [INFO][5687] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.422 [INFO][5687] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.423 [INFO][5687] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1 Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.430 [INFO][5687] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.439 [INFO][5687] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.26.5/26] block=192.168.26.0/26 handle="k8s-pod-network.853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.439 [INFO][5687] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.5/26] handle="k8s-pod-network.853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.440 [INFO][5687] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:53.471225 containerd[1846]: 2026-04-24 23:57:53.440 [INFO][5687] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.5/26] IPv6=[] ContainerID="853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" HandleID="k8s-pod-network.853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" Apr 24 23:57:53.472934 containerd[1846]: 2026-04-24 23:57:53.442 [INFO][5676] cni-plugin/k8s.go 418: Populated endpoint ContainerID="853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" Namespace="calico-system" Pod="csi-node-driver-vghcg" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8c5fd00b-3814-4bd0-8192-1d2f719f9517", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"", Pod:"csi-node-driver-vghcg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic48014c72ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:53.472934 containerd[1846]: 2026-04-24 23:57:53.442 [INFO][5676] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.5/32] ContainerID="853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" Namespace="calico-system" Pod="csi-node-driver-vghcg" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" Apr 24 23:57:53.472934 containerd[1846]: 2026-04-24 23:57:53.442 [INFO][5676] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic48014c72ee ContainerID="853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" Namespace="calico-system" Pod="csi-node-driver-vghcg" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" Apr 24 23:57:53.472934 containerd[1846]: 2026-04-24 23:57:53.447 [INFO][5676] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" Namespace="calico-system" Pod="csi-node-driver-vghcg" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" Apr 24 23:57:53.472934 containerd[1846]: 2026-04-24 23:57:53.449 
[INFO][5676] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" Namespace="calico-system" Pod="csi-node-driver-vghcg" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8c5fd00b-3814-4bd0-8192-1d2f719f9517", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1", Pod:"csi-node-driver-vghcg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic48014c72ee", MAC:"96:a6:90:e8:fd:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:53.472934 containerd[1846]: 2026-04-24 23:57:53.468 [INFO][5676] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1" Namespace="calico-system" Pod="csi-node-driver-vghcg" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" Apr 24 23:57:53.501689 containerd[1846]: time="2026-04-24T23:57:53.501395537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:57:53.501689 containerd[1846]: time="2026-04-24T23:57:53.501448138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:57:53.501689 containerd[1846]: time="2026-04-24T23:57:53.501465438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:53.501689 containerd[1846]: time="2026-04-24T23:57:53.501554040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:53.543771 kubelet[3462]: I0424 23:57:53.541490 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-f5g7p" podStartSLOduration=64.541469124 podStartE2EDuration="1m4.541469124s" podCreationTimestamp="2026-04-24 23:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:57:53.538096675 +0000 UTC m=+71.499224183" watchObservedRunningTime="2026-04-24 23:57:53.541469124 +0000 UTC m=+71.502596632" Apr 24 23:57:53.607533 systemd-networkd[1416]: cali911c256085b: Link UP Apr 24 23:57:53.610659 systemd-networkd[1416]: cali911c256085b: Gained carrier Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.405 [INFO][5688] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0 calico-kube-controllers-796d4d88bb- calico-system bab90d29-b4f6-45e4-a59d-c7270debd2c4 1063 0 2026-04-24 23:57:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:796d4d88bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-bfbb2fd0ff calico-kube-controllers-796d4d88bb-v74px eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali911c256085b [] [] }} ContainerID="e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" Namespace="calico-system" Pod="calico-kube-controllers-796d4d88bb-v74px" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-" Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.405 [INFO][5688] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" Namespace="calico-system" Pod="calico-kube-controllers-796d4d88bb-v74px" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.458 [INFO][5705] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" HandleID="k8s-pod-network.e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.472 [INFO][5705] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" HandleID="k8s-pod-network.e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-bfbb2fd0ff", "pod":"calico-kube-controllers-796d4d88bb-v74px", "timestamp":"2026-04-24 23:57:53.458668511 +0000 UTC"}, Hostname:"ci-4081.3.6-n-bfbb2fd0ff", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002e9b80)} Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.472 [INFO][5705] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.472 [INFO][5705] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.472 [INFO][5705] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-bfbb2fd0ff' Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.508 [INFO][5705] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.517 [INFO][5705] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.540 [INFO][5705] ipam/ipam.go 526: Trying affinity for 192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.550 [INFO][5705] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.558 [INFO][5705] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.559 [INFO][5705] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.567 [INFO][5705] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.581 [INFO][5705] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.593 [INFO][5705] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.26.6/26] block=192.168.26.0/26 handle="k8s-pod-network.e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.593 [INFO][5705] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.6/26] handle="k8s-pod-network.e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.594 [INFO][5705] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:53.641621 containerd[1846]: 2026-04-24 23:57:53.594 [INFO][5705] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.6/26] IPv6=[] ContainerID="e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" HandleID="k8s-pod-network.e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" Apr 24 23:57:53.642713 containerd[1846]: 2026-04-24 23:57:53.598 [INFO][5688] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" Namespace="calico-system" Pod="calico-kube-controllers-796d4d88bb-v74px" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0", GenerateName:"calico-kube-controllers-796d4d88bb-", Namespace:"calico-system", SelfLink:"", UID:"bab90d29-b4f6-45e4-a59d-c7270debd2c4", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"796d4d88bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"", Pod:"calico-kube-controllers-796d4d88bb-v74px", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali911c256085b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:53.642713 containerd[1846]: 2026-04-24 23:57:53.598 [INFO][5688] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.6/32] ContainerID="e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" Namespace="calico-system" Pod="calico-kube-controllers-796d4d88bb-v74px" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" Apr 24 23:57:53.642713 containerd[1846]: 2026-04-24 23:57:53.598 [INFO][5688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali911c256085b ContainerID="e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" Namespace="calico-system" Pod="calico-kube-controllers-796d4d88bb-v74px" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" Apr 24 23:57:53.642713 containerd[1846]: 2026-04-24 23:57:53.606 [INFO][5688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" Namespace="calico-system" 
Pod="calico-kube-controllers-796d4d88bb-v74px" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" Apr 24 23:57:53.642713 containerd[1846]: 2026-04-24 23:57:53.607 [INFO][5688] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" Namespace="calico-system" Pod="calico-kube-controllers-796d4d88bb-v74px" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0", GenerateName:"calico-kube-controllers-796d4d88bb-", Namespace:"calico-system", SelfLink:"", UID:"bab90d29-b4f6-45e4-a59d-c7270debd2c4", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"796d4d88bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f", Pod:"calico-kube-controllers-796d4d88bb-v74px", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali911c256085b", MAC:"4e:ea:2e:a5:22:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:53.642713 containerd[1846]: 2026-04-24 23:57:53.628 [INFO][5688] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f" Namespace="calico-system" Pod="calico-kube-controllers-796d4d88bb-v74px" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" Apr 24 23:57:53.660703 containerd[1846]: time="2026-04-24T23:57:53.660591969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vghcg,Uid:8c5fd00b-3814-4bd0-8192-1d2f719f9517,Namespace:calico-system,Attempt:1,} returns sandbox id \"853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1\"" Apr 24 23:57:53.717785 containerd[1846]: time="2026-04-24T23:57:53.715165769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:57:53.717785 containerd[1846]: time="2026-04-24T23:57:53.715226769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:57:53.717785 containerd[1846]: time="2026-04-24T23:57:53.715247870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:53.717785 containerd[1846]: time="2026-04-24T23:57:53.715382672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:53.777199 containerd[1846]: time="2026-04-24T23:57:53.777159177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-796d4d88bb-v74px,Uid:bab90d29-b4f6-45e4-a59d-c7270debd2c4,Namespace:calico-system,Attempt:1,} returns sandbox id \"e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f\"" Apr 24 23:57:54.066217 systemd-networkd[1416]: calid0adbdf9129: Gained IPv6LL Apr 24 23:57:54.147911 containerd[1846]: time="2026-04-24T23:57:54.147866706Z" level=info msg="StopPodSandbox for \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\"" Apr 24 23:57:54.258113 systemd-networkd[1416]: cali36c2de670ac: Gained IPv6LL Apr 24 23:57:54.273310 containerd[1846]: 2026-04-24 23:57:54.201 [INFO][5846] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Apr 24 23:57:54.273310 containerd[1846]: 2026-04-24 23:57:54.202 [INFO][5846] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" iface="eth0" netns="/var/run/netns/cni-ee27a658-52de-efb0-7ff4-58e5801bc8f7" Apr 24 23:57:54.273310 containerd[1846]: 2026-04-24 23:57:54.202 [INFO][5846] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" iface="eth0" netns="/var/run/netns/cni-ee27a658-52de-efb0-7ff4-58e5801bc8f7" Apr 24 23:57:54.273310 containerd[1846]: 2026-04-24 23:57:54.202 [INFO][5846] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" iface="eth0" netns="/var/run/netns/cni-ee27a658-52de-efb0-7ff4-58e5801bc8f7" Apr 24 23:57:54.273310 containerd[1846]: 2026-04-24 23:57:54.202 [INFO][5846] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Apr 24 23:57:54.273310 containerd[1846]: 2026-04-24 23:57:54.202 [INFO][5846] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Apr 24 23:57:54.273310 containerd[1846]: 2026-04-24 23:57:54.250 [INFO][5853] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" HandleID="k8s-pod-network.1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" Apr 24 23:57:54.273310 containerd[1846]: 2026-04-24 23:57:54.250 [INFO][5853] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:54.273310 containerd[1846]: 2026-04-24 23:57:54.250 [INFO][5853] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:57:54.273310 containerd[1846]: 2026-04-24 23:57:54.263 [WARNING][5853] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" HandleID="k8s-pod-network.1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" Apr 24 23:57:54.273310 containerd[1846]: 2026-04-24 23:57:54.263 [INFO][5853] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" HandleID="k8s-pod-network.1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" Apr 24 23:57:54.273310 containerd[1846]: 2026-04-24 23:57:54.265 [INFO][5853] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:54.273310 containerd[1846]: 2026-04-24 23:57:54.269 [INFO][5846] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Apr 24 23:57:54.274908 containerd[1846]: time="2026-04-24T23:57:54.273507547Z" level=info msg="TearDown network for sandbox \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\" successfully" Apr 24 23:57:54.274908 containerd[1846]: time="2026-04-24T23:57:54.273834452Z" level=info msg="StopPodSandbox for \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\" returns successfully" Apr 24 23:57:54.280185 containerd[1846]: time="2026-04-24T23:57:54.279964341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-br5gx,Uid:0fab5c65-a5ea-4224-bb5b-3fa0147534b7,Namespace:calico-system,Attempt:1,}" Apr 24 23:57:54.284049 systemd[1]: run-netns-cni\x2dee27a658\x2d52de\x2defb0\x2d7ff4\x2d58e5801bc8f7.mount: Deactivated successfully. 
Apr 24 23:57:54.460017 systemd-networkd[1416]: cali33dedc8c88e: Link UP Apr 24 23:57:54.461321 systemd-networkd[1416]: cali33dedc8c88e: Gained carrier Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.363 [INFO][5863] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0 goldmane-5b85766d88- calico-system 0fab5c65-a5ea-4224-bb5b-3fa0147534b7 1085 0 2026-04-24 23:57:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-bfbb2fd0ff goldmane-5b85766d88-br5gx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali33dedc8c88e [] [] }} ContainerID="126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" Namespace="calico-system" Pod="goldmane-5b85766d88-br5gx" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-" Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.363 [INFO][5863] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" Namespace="calico-system" Pod="goldmane-5b85766d88-br5gx" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.404 [INFO][5875] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" HandleID="k8s-pod-network.126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.411 [INFO][5875] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" HandleID="k8s-pod-network.126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277f50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-bfbb2fd0ff", "pod":"goldmane-5b85766d88-br5gx", "timestamp":"2026-04-24 23:57:54.404326163 +0000 UTC"}, Hostname:"ci-4081.3.6-n-bfbb2fd0ff", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000206580)} Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.411 [INFO][5875] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.411 [INFO][5875] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.411 [INFO][5875] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-bfbb2fd0ff' Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.413 [INFO][5875] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.417 [INFO][5875] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.421 [INFO][5875] ipam/ipam.go 526: Trying affinity for 192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.423 [INFO][5875] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.425 [INFO][5875] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.425 [INFO][5875] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.429 [INFO][5875] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32 Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.434 [INFO][5875] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.443 [INFO][5875] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.26.7/26] block=192.168.26.0/26 handle="k8s-pod-network.126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.444 [INFO][5875] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.7/26] handle="k8s-pod-network.126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.444 [INFO][5875] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:54.488872 containerd[1846]: 2026-04-24 23:57:54.444 [INFO][5875] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.7/26] IPv6=[] ContainerID="126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" HandleID="k8s-pod-network.126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" Apr 24 23:57:54.492296 containerd[1846]: 2026-04-24 23:57:54.449 [INFO][5863] cni-plugin/k8s.go 418: Populated endpoint ContainerID="126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" Namespace="calico-system" Pod="goldmane-5b85766d88-br5gx" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"0fab5c65-a5ea-4224-bb5b-3fa0147534b7", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"", Pod:"goldmane-5b85766d88-br5gx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali33dedc8c88e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:54.492296 containerd[1846]: 2026-04-24 23:57:54.451 [INFO][5863] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.7/32] ContainerID="126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" Namespace="calico-system" Pod="goldmane-5b85766d88-br5gx" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" Apr 24 23:57:54.492296 containerd[1846]: 2026-04-24 23:57:54.451 [INFO][5863] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33dedc8c88e ContainerID="126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" Namespace="calico-system" Pod="goldmane-5b85766d88-br5gx" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" Apr 24 23:57:54.492296 containerd[1846]: 2026-04-24 23:57:54.462 [INFO][5863] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" Namespace="calico-system" Pod="goldmane-5b85766d88-br5gx" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" Apr 24 23:57:54.492296 containerd[1846]: 2026-04-24 23:57:54.463 [INFO][5863] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" Namespace="calico-system" Pod="goldmane-5b85766d88-br5gx" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"0fab5c65-a5ea-4224-bb5b-3fa0147534b7", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32", Pod:"goldmane-5b85766d88-br5gx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali33dedc8c88e", MAC:"56:ea:9e:b3:c8:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:54.492296 containerd[1846]: 2026-04-24 23:57:54.484 [INFO][5863] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32" Namespace="calico-system" Pod="goldmane-5b85766d88-br5gx" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" Apr 24 23:57:54.541155 containerd[1846]: time="2026-04-24T23:57:54.541052966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:57:54.541155 containerd[1846]: time="2026-04-24T23:57:54.541148467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:57:54.541886 containerd[1846]: time="2026-04-24T23:57:54.541187568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:54.541886 containerd[1846]: time="2026-04-24T23:57:54.541348770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:54.660289 containerd[1846]: time="2026-04-24T23:57:54.660241811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-br5gx,Uid:0fab5c65-a5ea-4224-bb5b-3fa0147534b7,Namespace:calico-system,Attempt:1,} returns sandbox id \"126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32\"" Apr 24 23:57:55.091520 systemd-networkd[1416]: calic48014c72ee: Gained IPv6LL Apr 24 23:57:55.147263 containerd[1846]: time="2026-04-24T23:57:55.146497934Z" level=info msg="StopPodSandbox for \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\"" Apr 24 23:57:55.288380 containerd[1846]: 2026-04-24 23:57:55.219 [INFO][5963] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Apr 24 23:57:55.288380 containerd[1846]: 2026-04-24 23:57:55.219 [INFO][5963] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" iface="eth0" netns="/var/run/netns/cni-a387bc62-56ee-fcd4-a4d1-cb49ba251782" Apr 24 23:57:55.288380 containerd[1846]: 2026-04-24 23:57:55.220 [INFO][5963] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" iface="eth0" netns="/var/run/netns/cni-a387bc62-56ee-fcd4-a4d1-cb49ba251782" Apr 24 23:57:55.288380 containerd[1846]: 2026-04-24 23:57:55.221 [INFO][5963] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" iface="eth0" netns="/var/run/netns/cni-a387bc62-56ee-fcd4-a4d1-cb49ba251782" Apr 24 23:57:55.288380 containerd[1846]: 2026-04-24 23:57:55.221 [INFO][5963] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Apr 24 23:57:55.288380 containerd[1846]: 2026-04-24 23:57:55.221 [INFO][5963] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Apr 24 23:57:55.288380 containerd[1846]: 2026-04-24 23:57:55.272 [INFO][5970] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" HandleID="k8s-pod-network.0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" Apr 24 23:57:55.288380 containerd[1846]: 2026-04-24 23:57:55.272 [INFO][5970] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:55.288380 containerd[1846]: 2026-04-24 23:57:55.272 [INFO][5970] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:57:55.288380 containerd[1846]: 2026-04-24 23:57:55.280 [WARNING][5970] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" HandleID="k8s-pod-network.0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" Apr 24 23:57:55.288380 containerd[1846]: 2026-04-24 23:57:55.281 [INFO][5970] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" HandleID="k8s-pod-network.0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" Apr 24 23:57:55.288380 containerd[1846]: 2026-04-24 23:57:55.282 [INFO][5970] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:55.288380 containerd[1846]: 2026-04-24 23:57:55.284 [INFO][5963] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Apr 24 23:57:55.289295 containerd[1846]: time="2026-04-24T23:57:55.288525914Z" level=info msg="TearDown network for sandbox \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\" successfully" Apr 24 23:57:55.289295 containerd[1846]: time="2026-04-24T23:57:55.288561015Z" level=info msg="StopPodSandbox for \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\" returns successfully" Apr 24 23:57:55.292407 containerd[1846]: time="2026-04-24T23:57:55.291555459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4999c544-rcjl8,Uid:73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b,Namespace:calico-system,Attempt:1,}" Apr 24 23:57:55.294855 systemd[1]: run-netns-cni\x2da387bc62\x2d56ee\x2dfcd4\x2da4d1\x2dcb49ba251782.mount: Deactivated successfully. 
Apr 24 23:57:55.475122 systemd-networkd[1416]: cali33dedc8c88e: Gained IPv6LL Apr 24 23:57:55.506848 systemd-networkd[1416]: cali564106e0590: Link UP Apr 24 23:57:55.513061 systemd-networkd[1416]: cali564106e0590: Gained carrier Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.393 [INFO][5976] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0 calico-apiserver-7b4999c544- calico-system 73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b 1092 0 2026-04-24 23:57:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b4999c544 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-bfbb2fd0ff calico-apiserver-7b4999c544-rcjl8 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali564106e0590 [] [] }} ContainerID="b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" Namespace="calico-system" Pod="calico-apiserver-7b4999c544-rcjl8" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-" Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.393 [INFO][5976] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" Namespace="calico-system" Pod="calico-apiserver-7b4999c544-rcjl8" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.443 [INFO][5989] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" HandleID="k8s-pod-network.b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" 
Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.455 [INFO][5989] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" HandleID="k8s-pod-network.b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efdb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-bfbb2fd0ff", "pod":"calico-apiserver-7b4999c544-rcjl8", "timestamp":"2026-04-24 23:57:55.443060978 +0000 UTC"}, Hostname:"ci-4081.3.6-n-bfbb2fd0ff", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00025fb80)} Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.455 [INFO][5989] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.455 [INFO][5989] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.455 [INFO][5989] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-bfbb2fd0ff' Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.458 [INFO][5989] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.463 [INFO][5989] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.469 [INFO][5989] ipam/ipam.go 526: Trying affinity for 192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.471 [INFO][5989] ipam/ipam.go 160: Attempting to load block cidr=192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.474 [INFO][5989] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.26.0/26 host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.474 [INFO][5989] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.26.0/26 handle="k8s-pod-network.b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.476 [INFO][5989] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870 Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.481 [INFO][5989] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.26.0/26 handle="k8s-pod-network.b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.497 [INFO][5989] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.26.8/26] block=192.168.26.0/26 handle="k8s-pod-network.b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.498 [INFO][5989] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.26.8/26] handle="k8s-pod-network.b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" host="ci-4081.3.6-n-bfbb2fd0ff" Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.498 [INFO][5989] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:57:55.550859 containerd[1846]: 2026-04-24 23:57:55.498 [INFO][5989] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.26.8/26] IPv6=[] ContainerID="b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" HandleID="k8s-pod-network.b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" Apr 24 23:57:55.553877 containerd[1846]: 2026-04-24 23:57:55.502 [INFO][5976] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" Namespace="calico-system" Pod="calico-apiserver-7b4999c544-rcjl8" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0", GenerateName:"calico-apiserver-7b4999c544-", Namespace:"calico-system", SelfLink:"", UID:"73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7b4999c544", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"", Pod:"calico-apiserver-7b4999c544-rcjl8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali564106e0590", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:55.553877 containerd[1846]: 2026-04-24 23:57:55.503 [INFO][5976] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.8/32] ContainerID="b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" Namespace="calico-system" Pod="calico-apiserver-7b4999c544-rcjl8" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" Apr 24 23:57:55.553877 containerd[1846]: 2026-04-24 23:57:55.503 [INFO][5976] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali564106e0590 ContainerID="b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" Namespace="calico-system" Pod="calico-apiserver-7b4999c544-rcjl8" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" Apr 24 23:57:55.553877 containerd[1846]: 2026-04-24 23:57:55.512 [INFO][5976] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" Namespace="calico-system" Pod="calico-apiserver-7b4999c544-rcjl8" 
WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" Apr 24 23:57:55.553877 containerd[1846]: 2026-04-24 23:57:55.514 [INFO][5976] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" Namespace="calico-system" Pod="calico-apiserver-7b4999c544-rcjl8" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0", GenerateName:"calico-apiserver-7b4999c544-", Namespace:"calico-system", SelfLink:"", UID:"73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4999c544", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870", Pod:"calico-apiserver-7b4999c544-rcjl8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali564106e0590", MAC:"46:bb:ed:d7:eb:2b", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:57:55.553877 containerd[1846]: 2026-04-24 23:57:55.537 [INFO][5976] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870" Namespace="calico-system" Pod="calico-apiserver-7b4999c544-rcjl8" WorkloadEndpoint="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" Apr 24 23:57:55.601983 systemd-networkd[1416]: cali911c256085b: Gained IPv6LL Apr 24 23:57:55.617119 containerd[1846]: time="2026-04-24T23:57:55.615941404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:57:55.617119 containerd[1846]: time="2026-04-24T23:57:55.616047708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:57:55.617119 containerd[1846]: time="2026-04-24T23:57:55.616137812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:55.617119 containerd[1846]: time="2026-04-24T23:57:55.616305218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:57:55.759344 containerd[1846]: time="2026-04-24T23:57:55.759228454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4999c544-rcjl8,Uid:73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b,Namespace:calico-system,Attempt:1,} returns sandbox id \"b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870\"" Apr 24 23:57:56.373182 containerd[1846]: time="2026-04-24T23:57:56.373129933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:56.376049 containerd[1846]: time="2026-04-24T23:57:56.375909340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 24 23:57:56.379418 containerd[1846]: time="2026-04-24T23:57:56.379283871Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:56.384149 containerd[1846]: time="2026-04-24T23:57:56.384099357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:56.384922 containerd[1846]: time="2026-04-24T23:57:56.384883288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.486093877s" Apr 24 23:57:56.384992 containerd[1846]: time="2026-04-24T23:57:56.384924089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 24 23:57:56.386694 containerd[1846]: time="2026-04-24T23:57:56.386469549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 24 23:57:56.394157 containerd[1846]: time="2026-04-24T23:57:56.394127546Z" level=info msg="CreateContainer within sandbox \"2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:57:56.430793 containerd[1846]: time="2026-04-24T23:57:56.430733764Z" level=info msg="CreateContainer within sandbox \"2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7fde0514fc0571d04b863dc52662a94003a4fdb7b5ac9cc2cd89d2577916e452\"" Apr 24 23:57:56.432084 containerd[1846]: time="2026-04-24T23:57:56.432048415Z" level=info msg="StartContainer for \"7fde0514fc0571d04b863dc52662a94003a4fdb7b5ac9cc2cd89d2577916e452\"" Apr 24 23:57:56.510777 containerd[1846]: time="2026-04-24T23:57:56.510721462Z" level=info msg="StartContainer for \"7fde0514fc0571d04b863dc52662a94003a4fdb7b5ac9cc2cd89d2577916e452\" returns successfully" Apr 24 23:57:57.009880 systemd-networkd[1416]: cali564106e0590: Gained IPv6LL Apr 24 23:57:57.245179 systemd[1]: run-containerd-runc-k8s.io-7fde0514fc0571d04b863dc52662a94003a4fdb7b5ac9cc2cd89d2577916e452-runc.5FzXNv.mount: Deactivated successfully. 
Apr 24 23:57:57.574139 kubelet[3462]: I0424 23:57:57.573874 3462 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:57:57.916995 containerd[1846]: time="2026-04-24T23:57:57.916943730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:57.919897 containerd[1846]: time="2026-04-24T23:57:57.919560931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 24 23:57:57.922855 containerd[1846]: time="2026-04-24T23:57:57.922819957Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:57.927391 containerd[1846]: time="2026-04-24T23:57:57.927306731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:57:57.928594 containerd[1846]: time="2026-04-24T23:57:57.928451975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.541948625s" Apr 24 23:57:57.928594 containerd[1846]: time="2026-04-24T23:57:57.928488177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 24 23:57:57.930444 containerd[1846]: time="2026-04-24T23:57:57.930418151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 24 23:57:57.937497 containerd[1846]: time="2026-04-24T23:57:57.937469725Z" 
level=info msg="CreateContainer within sandbox \"853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 24 23:57:57.986395 containerd[1846]: time="2026-04-24T23:57:57.986349118Z" level=info msg="CreateContainer within sandbox \"853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"25b9bbc001b5cc42ff5538f726dd8725448ae8454ded32f602fe7ea64c4ab8df\"" Apr 24 23:57:57.987469 containerd[1846]: time="2026-04-24T23:57:57.987326556Z" level=info msg="StartContainer for \"25b9bbc001b5cc42ff5538f726dd8725448ae8454ded32f602fe7ea64c4ab8df\"" Apr 24 23:57:58.056963 containerd[1846]: time="2026-04-24T23:57:58.056924351Z" level=info msg="StartContainer for \"25b9bbc001b5cc42ff5538f726dd8725448ae8454ded32f602fe7ea64c4ab8df\" returns successfully" Apr 24 23:58:00.471143 containerd[1846]: time="2026-04-24T23:58:00.471094660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:00.474029 containerd[1846]: time="2026-04-24T23:58:00.473838967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 24 23:58:00.476787 containerd[1846]: time="2026-04-24T23:58:00.476735879Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:00.481139 containerd[1846]: time="2026-04-24T23:58:00.480871539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:00.481675 containerd[1846]: time="2026-04-24T23:58:00.481629268Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.551176516s" Apr 24 23:58:00.481675 containerd[1846]: time="2026-04-24T23:58:00.481672670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 24 23:58:00.483765 containerd[1846]: time="2026-04-24T23:58:00.483623045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 24 23:58:00.509863 containerd[1846]: time="2026-04-24T23:58:00.509821160Z" level=info msg="CreateContainer within sandbox \"e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 24 23:58:00.541901 containerd[1846]: time="2026-04-24T23:58:00.541851701Z" level=info msg="CreateContainer within sandbox \"e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1d2aaf9cd5fc5487b1ba63545d90f3ff5bb9c26c2cf4b96f40b80f5364692a39\"" Apr 24 23:58:00.542573 containerd[1846]: time="2026-04-24T23:58:00.542441124Z" level=info msg="StartContainer for \"1d2aaf9cd5fc5487b1ba63545d90f3ff5bb9c26c2cf4b96f40b80f5364692a39\"" Apr 24 23:58:00.626911 containerd[1846]: time="2026-04-24T23:58:00.626815992Z" level=info msg="StartContainer for \"1d2aaf9cd5fc5487b1ba63545d90f3ff5bb9c26c2cf4b96f40b80f5364692a39\" returns successfully" Apr 24 23:58:01.614781 kubelet[3462]: I0424 23:58:01.614208 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-796d4d88bb-v74px" 
podStartSLOduration=52.909853801 podStartE2EDuration="59.61418482s" podCreationTimestamp="2026-04-24 23:57:02 +0000 UTC" firstStartedPulling="2026-04-24 23:57:53.778527997 +0000 UTC m=+71.739655605" lastFinishedPulling="2026-04-24 23:58:00.482859116 +0000 UTC m=+78.443986624" observedRunningTime="2026-04-24 23:58:01.613820514 +0000 UTC m=+79.574948022" watchObservedRunningTime="2026-04-24 23:58:01.61418482 +0000 UTC m=+79.575312328" Apr 24 23:58:01.614781 kubelet[3462]: I0424 23:58:01.614563 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7b4999c544-clwfk" podStartSLOduration=57.126368583 podStartE2EDuration="1m0.614548926s" podCreationTimestamp="2026-04-24 23:57:01 +0000 UTC" firstStartedPulling="2026-04-24 23:57:52.898157801 +0000 UTC m=+70.859285309" lastFinishedPulling="2026-04-24 23:57:56.386338044 +0000 UTC m=+74.347465652" observedRunningTime="2026-04-24 23:57:56.594510607 +0000 UTC m=+74.555638215" watchObservedRunningTime="2026-04-24 23:58:01.614548926 +0000 UTC m=+79.575676434" Apr 24 23:58:01.634643 systemd[1]: run-containerd-runc-k8s.io-1d2aaf9cd5fc5487b1ba63545d90f3ff5bb9c26c2cf4b96f40b80f5364692a39-runc.ghTLhs.mount: Deactivated successfully. Apr 24 23:58:03.563890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360471749.mount: Deactivated successfully. 
Apr 24 23:58:04.116938 containerd[1846]: time="2026-04-24T23:58:04.116886932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:04.119700 containerd[1846]: time="2026-04-24T23:58:04.119539874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 24 23:58:04.123415 containerd[1846]: time="2026-04-24T23:58:04.123233734Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:04.128329 containerd[1846]: time="2026-04-24T23:58:04.128204113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:04.129717 containerd[1846]: time="2026-04-24T23:58:04.129158329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.645494781s" Apr 24 23:58:04.129717 containerd[1846]: time="2026-04-24T23:58:04.129197229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 24 23:58:04.130456 containerd[1846]: time="2026-04-24T23:58:04.130437649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:58:04.138902 containerd[1846]: time="2026-04-24T23:58:04.138874285Z" level=info msg="CreateContainer within sandbox 
\"126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 24 23:58:04.174733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2048710006.mount: Deactivated successfully. Apr 24 23:58:04.185217 containerd[1846]: time="2026-04-24T23:58:04.185183328Z" level=info msg="CreateContainer within sandbox \"126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1ff08a9b238d442fd0a12323452ce97d938962a887689353f34caa751cbebe1d\"" Apr 24 23:58:04.186032 containerd[1846]: time="2026-04-24T23:58:04.185974540Z" level=info msg="StartContainer for \"1ff08a9b238d442fd0a12323452ce97d938962a887689353f34caa751cbebe1d\"" Apr 24 23:58:04.267983 containerd[1846]: time="2026-04-24T23:58:04.267934355Z" level=info msg="StartContainer for \"1ff08a9b238d442fd0a12323452ce97d938962a887689353f34caa751cbebe1d\" returns successfully" Apr 24 23:58:04.467359 containerd[1846]: time="2026-04-24T23:58:04.467164752Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:04.469901 containerd[1846]: time="2026-04-24T23:58:04.469847895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 24 23:58:04.471855 containerd[1846]: time="2026-04-24T23:58:04.471811827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 341.263576ms" Apr 24 23:58:04.471855 containerd[1846]: time="2026-04-24T23:58:04.471851527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" 
returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 24 23:58:04.473295 containerd[1846]: time="2026-04-24T23:58:04.472837043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 24 23:58:04.479612 containerd[1846]: time="2026-04-24T23:58:04.479579351Z" level=info msg="CreateContainer within sandbox \"b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:58:04.517212 containerd[1846]: time="2026-04-24T23:58:04.517165755Z" level=info msg="CreateContainer within sandbox \"b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bb9e0b93d3c3549e84109210a8d2d33e256f93df92050401bb6679798c36a53f\"" Apr 24 23:58:04.517939 containerd[1846]: time="2026-04-24T23:58:04.517827165Z" level=info msg="StartContainer for \"bb9e0b93d3c3549e84109210a8d2d33e256f93df92050401bb6679798c36a53f\"" Apr 24 23:58:04.592421 containerd[1846]: time="2026-04-24T23:58:04.592364761Z" level=info msg="StartContainer for \"bb9e0b93d3c3549e84109210a8d2d33e256f93df92050401bb6679798c36a53f\" returns successfully" Apr 24 23:58:04.652115 kubelet[3462]: I0424 23:58:04.652052 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7b4999c544-rcjl8" podStartSLOduration=54.941575448 podStartE2EDuration="1m3.652030019s" podCreationTimestamp="2026-04-24 23:57:01 +0000 UTC" firstStartedPulling="2026-04-24 23:57:55.762184369 +0000 UTC m=+73.723311877" lastFinishedPulling="2026-04-24 23:58:04.47263884 +0000 UTC m=+82.433766448" observedRunningTime="2026-04-24 23:58:04.624120171 +0000 UTC m=+82.585247679" watchObservedRunningTime="2026-04-24 23:58:04.652030019 +0000 UTC m=+82.613157527" Apr 24 23:58:05.834888 kubelet[3462]: I0424 23:58:05.832733 3462 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="calico-system/goldmane-5b85766d88-br5gx" podStartSLOduration=55.364053949 podStartE2EDuration="1m4.832711564s" podCreationTimestamp="2026-04-24 23:57:01 +0000 UTC" firstStartedPulling="2026-04-24 23:57:54.661597431 +0000 UTC m=+72.622724939" lastFinishedPulling="2026-04-24 23:58:04.130255046 +0000 UTC m=+82.091382554" observedRunningTime="2026-04-24 23:58:04.652455325 +0000 UTC m=+82.613582833" watchObservedRunningTime="2026-04-24 23:58:05.832711564 +0000 UTC m=+83.793839072" Apr 24 23:58:06.295798 containerd[1846]: time="2026-04-24T23:58:06.295600791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:06.298259 containerd[1846]: time="2026-04-24T23:58:06.298088831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 24 23:58:06.302105 containerd[1846]: time="2026-04-24T23:58:06.300772074Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:06.306105 containerd[1846]: time="2026-04-24T23:58:06.305291647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:06.306105 containerd[1846]: time="2026-04-24T23:58:06.305967657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.833094613s" Apr 24 
23:58:06.306105 containerd[1846]: time="2026-04-24T23:58:06.306002458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 24 23:58:06.314615 containerd[1846]: time="2026-04-24T23:58:06.314586696Z" level=info msg="CreateContainer within sandbox \"853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 24 23:58:06.342735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3143154490.mount: Deactivated successfully. Apr 24 23:58:06.348111 containerd[1846]: time="2026-04-24T23:58:06.348073033Z" level=info msg="CreateContainer within sandbox \"853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c2f33600bb728c76b7860a1c8ef0d9c72d630be84404b533605718ac935380e1\"" Apr 24 23:58:06.349307 containerd[1846]: time="2026-04-24T23:58:06.348611242Z" level=info msg="StartContainer for \"c2f33600bb728c76b7860a1c8ef0d9c72d630be84404b533605718ac935380e1\"" Apr 24 23:58:06.418302 containerd[1846]: time="2026-04-24T23:58:06.418154457Z" level=info msg="StartContainer for \"c2f33600bb728c76b7860a1c8ef0d9c72d630be84404b533605718ac935380e1\" returns successfully" Apr 24 23:58:07.278069 kubelet[3462]: I0424 23:58:07.278028 3462 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 24 23:58:07.278069 kubelet[3462]: I0424 23:58:07.278077 3462 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 24 23:58:11.560917 kubelet[3462]: I0424 23:58:11.560151 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/csi-node-driver-vghcg" podStartSLOduration=56.919780558 podStartE2EDuration="1m9.560131074s" podCreationTimestamp="2026-04-24 23:57:02 +0000 UTC" firstStartedPulling="2026-04-24 23:57:53.666642158 +0000 UTC m=+71.627769666" lastFinishedPulling="2026-04-24 23:58:06.306992674 +0000 UTC m=+84.268120182" observedRunningTime="2026-04-24 23:58:06.63831099 +0000 UTC m=+84.599438598" watchObservedRunningTime="2026-04-24 23:58:11.560131074 +0000 UTC m=+89.521258682" Apr 24 23:58:24.672361 kubelet[3462]: I0424 23:58:24.671882 3462 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:58:42.663415 containerd[1846]: time="2026-04-24T23:58:42.663371013Z" level=info msg="StopPodSandbox for \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\"" Apr 24 23:58:42.734345 containerd[1846]: 2026-04-24 23:58:42.696 [WARNING][6595] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0", GenerateName:"calico-apiserver-7b4999c544-", Namespace:"calico-system", SelfLink:"", UID:"5f198955-c9d4-4104-9f87-079239cf8c8a", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4999c544", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a", Pod:"calico-apiserver-7b4999c544-clwfk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali36c2de670ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:42.734345 containerd[1846]: 2026-04-24 23:58:42.697 [INFO][6595] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Apr 24 23:58:42.734345 containerd[1846]: 2026-04-24 23:58:42.697 [INFO][6595] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" iface="eth0" netns="" Apr 24 23:58:42.734345 containerd[1846]: 2026-04-24 23:58:42.697 [INFO][6595] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Apr 24 23:58:42.734345 containerd[1846]: 2026-04-24 23:58:42.697 [INFO][6595] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Apr 24 23:58:42.734345 containerd[1846]: 2026-04-24 23:58:42.720 [INFO][6602] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" HandleID="k8s-pod-network.e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" Apr 24 23:58:42.734345 containerd[1846]: 2026-04-24 23:58:42.720 [INFO][6602] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:42.734345 containerd[1846]: 2026-04-24 23:58:42.720 [INFO][6602] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:42.734345 containerd[1846]: 2026-04-24 23:58:42.729 [WARNING][6602] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" HandleID="k8s-pod-network.e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" Apr 24 23:58:42.734345 containerd[1846]: 2026-04-24 23:58:42.729 [INFO][6602] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" HandleID="k8s-pod-network.e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" Apr 24 23:58:42.734345 containerd[1846]: 2026-04-24 23:58:42.731 [INFO][6602] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:42.734345 containerd[1846]: 2026-04-24 23:58:42.733 [INFO][6595] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Apr 24 23:58:42.735111 containerd[1846]: time="2026-04-24T23:58:42.734386066Z" level=info msg="TearDown network for sandbox \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\" successfully" Apr 24 23:58:42.735111 containerd[1846]: time="2026-04-24T23:58:42.734419567Z" level=info msg="StopPodSandbox for \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\" returns successfully" Apr 24 23:58:42.735111 containerd[1846]: time="2026-04-24T23:58:42.734910774Z" level=info msg="RemovePodSandbox for \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\"" Apr 24 23:58:42.735111 containerd[1846]: time="2026-04-24T23:58:42.734955675Z" level=info msg="Forcibly stopping sandbox \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\"" Apr 24 23:58:42.801770 containerd[1846]: 2026-04-24 23:58:42.767 [WARNING][6616] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0", GenerateName:"calico-apiserver-7b4999c544-", Namespace:"calico-system", SelfLink:"", UID:"5f198955-c9d4-4104-9f87-079239cf8c8a", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4999c544", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"2f57011eed3f837f3f6eaf87dbe46409093f793b2a43159eadba1c654fe9558a", Pod:"calico-apiserver-7b4999c544-clwfk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali36c2de670ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:42.801770 containerd[1846]: 2026-04-24 23:58:42.768 [INFO][6616] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Apr 24 23:58:42.801770 containerd[1846]: 2026-04-24 23:58:42.768 [INFO][6616] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" iface="eth0" netns="" Apr 24 23:58:42.801770 containerd[1846]: 2026-04-24 23:58:42.768 [INFO][6616] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Apr 24 23:58:42.801770 containerd[1846]: 2026-04-24 23:58:42.768 [INFO][6616] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Apr 24 23:58:42.801770 containerd[1846]: 2026-04-24 23:58:42.790 [INFO][6624] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" HandleID="k8s-pod-network.e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" Apr 24 23:58:42.801770 containerd[1846]: 2026-04-24 23:58:42.790 [INFO][6624] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:42.801770 containerd[1846]: 2026-04-24 23:58:42.790 [INFO][6624] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:42.801770 containerd[1846]: 2026-04-24 23:58:42.797 [WARNING][6624] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" HandleID="k8s-pod-network.e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" Apr 24 23:58:42.801770 containerd[1846]: 2026-04-24 23:58:42.797 [INFO][6624] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" HandleID="k8s-pod-network.e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--clwfk-eth0" Apr 24 23:58:42.801770 containerd[1846]: 2026-04-24 23:58:42.798 [INFO][6624] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:42.801770 containerd[1846]: 2026-04-24 23:58:42.799 [INFO][6616] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833" Apr 24 23:58:42.801770 containerd[1846]: time="2026-04-24T23:58:42.800808752Z" level=info msg="TearDown network for sandbox \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\" successfully" Apr 24 23:58:42.815706 containerd[1846]: time="2026-04-24T23:58:42.815660172Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:58:42.815836 containerd[1846]: time="2026-04-24T23:58:42.815754973Z" level=info msg="RemovePodSandbox \"e35e2bca3c2795c963176f8c5a965a6efac8a0991d47daef94fdae194495d833\" returns successfully" Apr 24 23:58:42.816367 containerd[1846]: time="2026-04-24T23:58:42.816334282Z" level=info msg="StopPodSandbox for \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\"" Apr 24 23:58:42.880970 containerd[1846]: 2026-04-24 23:58:42.848 [WARNING][6638] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8c5fd00b-3814-4bd0-8192-1d2f719f9517", ResourceVersion:"1169", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1", Pod:"csi-node-driver-vghcg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic48014c72ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:42.880970 containerd[1846]: 2026-04-24 23:58:42.848 [INFO][6638] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Apr 24 23:58:42.880970 containerd[1846]: 2026-04-24 23:58:42.848 [INFO][6638] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" iface="eth0" netns="" Apr 24 23:58:42.880970 containerd[1846]: 2026-04-24 23:58:42.848 [INFO][6638] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Apr 24 23:58:42.880970 containerd[1846]: 2026-04-24 23:58:42.848 [INFO][6638] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Apr 24 23:58:42.880970 containerd[1846]: 2026-04-24 23:58:42.871 [INFO][6645] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" HandleID="k8s-pod-network.2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" Apr 24 23:58:42.880970 containerd[1846]: 2026-04-24 23:58:42.871 [INFO][6645] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:42.880970 containerd[1846]: 2026-04-24 23:58:42.871 [INFO][6645] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:42.880970 containerd[1846]: 2026-04-24 23:58:42.877 [WARNING][6645] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" HandleID="k8s-pod-network.2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" Apr 24 23:58:42.880970 containerd[1846]: 2026-04-24 23:58:42.877 [INFO][6645] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" HandleID="k8s-pod-network.2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" Apr 24 23:58:42.880970 containerd[1846]: 2026-04-24 23:58:42.878 [INFO][6645] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:42.880970 containerd[1846]: 2026-04-24 23:58:42.879 [INFO][6638] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Apr 24 23:58:42.881678 containerd[1846]: time="2026-04-24T23:58:42.881012742Z" level=info msg="TearDown network for sandbox \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\" successfully" Apr 24 23:58:42.881678 containerd[1846]: time="2026-04-24T23:58:42.881046842Z" level=info msg="StopPodSandbox for \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\" returns successfully" Apr 24 23:58:42.881678 containerd[1846]: time="2026-04-24T23:58:42.881626651Z" level=info msg="RemovePodSandbox for \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\"" Apr 24 23:58:42.881678 containerd[1846]: time="2026-04-24T23:58:42.881661251Z" level=info msg="Forcibly stopping sandbox \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\"" Apr 24 23:58:42.947626 containerd[1846]: 2026-04-24 23:58:42.915 [WARNING][6659] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8c5fd00b-3814-4bd0-8192-1d2f719f9517", ResourceVersion:"1169", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"853fb55e7d5013b21d9ec32afb80a87ed6e2f23c552e44f1277b6c0f2926e0f1", Pod:"csi-node-driver-vghcg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic48014c72ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:42.947626 containerd[1846]: 2026-04-24 23:58:42.915 [INFO][6659] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Apr 24 23:58:42.947626 containerd[1846]: 2026-04-24 23:58:42.915 [INFO][6659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" iface="eth0" netns="" Apr 24 23:58:42.947626 containerd[1846]: 2026-04-24 23:58:42.915 [INFO][6659] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Apr 24 23:58:42.947626 containerd[1846]: 2026-04-24 23:58:42.915 [INFO][6659] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Apr 24 23:58:42.947626 containerd[1846]: 2026-04-24 23:58:42.937 [INFO][6666] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" HandleID="k8s-pod-network.2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" Apr 24 23:58:42.947626 containerd[1846]: 2026-04-24 23:58:42.937 [INFO][6666] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:42.947626 containerd[1846]: 2026-04-24 23:58:42.937 [INFO][6666] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:42.947626 containerd[1846]: 2026-04-24 23:58:42.943 [WARNING][6666] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" HandleID="k8s-pod-network.2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" Apr 24 23:58:42.947626 containerd[1846]: 2026-04-24 23:58:42.943 [INFO][6666] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" HandleID="k8s-pod-network.2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-csi--node--driver--vghcg-eth0" Apr 24 23:58:42.947626 containerd[1846]: 2026-04-24 23:58:42.945 [INFO][6666] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:42.947626 containerd[1846]: 2026-04-24 23:58:42.946 [INFO][6659] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382" Apr 24 23:58:42.948302 containerd[1846]: time="2026-04-24T23:58:42.947612830Z" level=info msg="TearDown network for sandbox \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\" successfully" Apr 24 23:58:42.961613 containerd[1846]: time="2026-04-24T23:58:42.961371634Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:58:42.961613 containerd[1846]: time="2026-04-24T23:58:42.961469035Z" level=info msg="RemovePodSandbox \"2035ebefda74f31a71dcdaccc857e58f3a4637f8388b42e3f90d5b4ef2a91382\" returns successfully" Apr 24 23:58:42.963174 containerd[1846]: time="2026-04-24T23:58:42.963142160Z" level=info msg="StopPodSandbox for \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\"" Apr 24 23:58:43.049193 containerd[1846]: 2026-04-24 23:58:43.016 [WARNING][6680] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0", GenerateName:"calico-apiserver-7b4999c544-", Namespace:"calico-system", SelfLink:"", UID:"73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4999c544", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870", Pod:"calico-apiserver-7b4999c544-rcjl8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali564106e0590", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:43.049193 containerd[1846]: 2026-04-24 23:58:43.018 [INFO][6680] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Apr 24 23:58:43.049193 containerd[1846]: 2026-04-24 23:58:43.018 [INFO][6680] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" iface="eth0" netns="" Apr 24 23:58:43.049193 containerd[1846]: 2026-04-24 23:58:43.018 [INFO][6680] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Apr 24 23:58:43.049193 containerd[1846]: 2026-04-24 23:58:43.018 [INFO][6680] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Apr 24 23:58:43.049193 containerd[1846]: 2026-04-24 23:58:43.038 [INFO][6687] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" HandleID="k8s-pod-network.0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" Apr 24 23:58:43.049193 containerd[1846]: 2026-04-24 23:58:43.038 [INFO][6687] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:43.049193 containerd[1846]: 2026-04-24 23:58:43.038 [INFO][6687] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:43.049193 containerd[1846]: 2026-04-24 23:58:43.044 [WARNING][6687] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" HandleID="k8s-pod-network.0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" Apr 24 23:58:43.049193 containerd[1846]: 2026-04-24 23:58:43.044 [INFO][6687] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" HandleID="k8s-pod-network.0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" Apr 24 23:58:43.049193 containerd[1846]: 2026-04-24 23:58:43.046 [INFO][6687] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:43.049193 containerd[1846]: 2026-04-24 23:58:43.047 [INFO][6680] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Apr 24 23:58:43.049193 containerd[1846]: time="2026-04-24T23:58:43.049073235Z" level=info msg="TearDown network for sandbox \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\" successfully" Apr 24 23:58:43.049193 containerd[1846]: time="2026-04-24T23:58:43.049093735Z" level=info msg="StopPodSandbox for \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\" returns successfully" Apr 24 23:58:43.050263 containerd[1846]: time="2026-04-24T23:58:43.050217052Z" level=info msg="RemovePodSandbox for \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\"" Apr 24 23:58:43.050390 containerd[1846]: time="2026-04-24T23:58:43.050269553Z" level=info msg="Forcibly stopping sandbox \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\"" Apr 24 23:58:43.119238 containerd[1846]: 2026-04-24 23:58:43.087 [WARNING][6701] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0", GenerateName:"calico-apiserver-7b4999c544-", Namespace:"calico-system", SelfLink:"", UID:"73a8d348-e9a9-4b1d-aa88-08a6fbd7ac5b", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4999c544", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"b0e7576ab0f8843662f18f53336a002e7325555e2ca3084375c649a068072870", Pod:"calico-apiserver-7b4999c544-rcjl8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.26.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali564106e0590", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:43.119238 containerd[1846]: 2026-04-24 23:58:43.087 [INFO][6701] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Apr 24 23:58:43.119238 containerd[1846]: 2026-04-24 23:58:43.087 [INFO][6701] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" iface="eth0" netns="" Apr 24 23:58:43.119238 containerd[1846]: 2026-04-24 23:58:43.087 [INFO][6701] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Apr 24 23:58:43.119238 containerd[1846]: 2026-04-24 23:58:43.087 [INFO][6701] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Apr 24 23:58:43.119238 containerd[1846]: 2026-04-24 23:58:43.109 [INFO][6708] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" HandleID="k8s-pod-network.0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" Apr 24 23:58:43.119238 containerd[1846]: 2026-04-24 23:58:43.109 [INFO][6708] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:43.119238 containerd[1846]: 2026-04-24 23:58:43.109 [INFO][6708] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:43.119238 containerd[1846]: 2026-04-24 23:58:43.115 [WARNING][6708] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" HandleID="k8s-pod-network.0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" Apr 24 23:58:43.119238 containerd[1846]: 2026-04-24 23:58:43.115 [INFO][6708] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" HandleID="k8s-pod-network.0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--apiserver--7b4999c544--rcjl8-eth0" Apr 24 23:58:43.119238 containerd[1846]: 2026-04-24 23:58:43.116 [INFO][6708] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:43.119238 containerd[1846]: 2026-04-24 23:58:43.118 [INFO][6701] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4" Apr 24 23:58:43.120083 containerd[1846]: time="2026-04-24T23:58:43.119277976Z" level=info msg="TearDown network for sandbox \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\" successfully" Apr 24 23:58:43.130201 containerd[1846]: time="2026-04-24T23:58:43.130035436Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:58:43.130490 containerd[1846]: time="2026-04-24T23:58:43.130328040Z" level=info msg="RemovePodSandbox \"0ae658d804091c5e2dfa6b83437f4c3134327c39e464c8ed668fe4c7f480fba4\" returns successfully" Apr 24 23:58:43.131277 containerd[1846]: time="2026-04-24T23:58:43.131244654Z" level=info msg="StopPodSandbox for \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\"" Apr 24 23:58:43.196384 containerd[1846]: 2026-04-24 23:58:43.164 [WARNING][6722] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"222e4640-e0ff-4078-9fa0-975f8f1c4ffa", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e", Pod:"coredns-674b8bbfcf-jsmlw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2f76222e51", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:43.196384 containerd[1846]: 2026-04-24 23:58:43.164 [INFO][6722] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Apr 24 23:58:43.196384 containerd[1846]: 2026-04-24 23:58:43.164 [INFO][6722] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" iface="eth0" netns="" Apr 24 23:58:43.196384 containerd[1846]: 2026-04-24 23:58:43.164 [INFO][6722] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Apr 24 23:58:43.196384 containerd[1846]: 2026-04-24 23:58:43.164 [INFO][6722] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Apr 24 23:58:43.196384 containerd[1846]: 2026-04-24 23:58:43.186 [INFO][6729] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" HandleID="k8s-pod-network.5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0" Apr 24 23:58:43.196384 containerd[1846]: 2026-04-24 23:58:43.186 [INFO][6729] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:58:43.196384 containerd[1846]: 2026-04-24 23:58:43.186 [INFO][6729] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:43.196384 containerd[1846]: 2026-04-24 23:58:43.192 [WARNING][6729] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" HandleID="k8s-pod-network.5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0" Apr 24 23:58:43.196384 containerd[1846]: 2026-04-24 23:58:43.192 [INFO][6729] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" HandleID="k8s-pod-network.5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0" Apr 24 23:58:43.196384 containerd[1846]: 2026-04-24 23:58:43.193 [INFO][6729] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:43.196384 containerd[1846]: 2026-04-24 23:58:43.195 [INFO][6722] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Apr 24 23:58:43.197119 containerd[1846]: time="2026-04-24T23:58:43.196405221Z" level=info msg="TearDown network for sandbox \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\" successfully" Apr 24 23:58:43.197119 containerd[1846]: time="2026-04-24T23:58:43.196434521Z" level=info msg="StopPodSandbox for \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\" returns successfully" Apr 24 23:58:43.197235 containerd[1846]: time="2026-04-24T23:58:43.197209833Z" level=info msg="RemovePodSandbox for \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\"" Apr 24 23:58:43.197280 containerd[1846]: time="2026-04-24T23:58:43.197243633Z" level=info msg="Forcibly stopping sandbox \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\"" Apr 24 23:58:43.263007 containerd[1846]: 2026-04-24 23:58:43.229 [WARNING][6744] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"222e4640-e0ff-4078-9fa0-975f8f1c4ffa", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"614043c4ab7005afd6e9e4046f8b0e2d148fad754e59c7a5b27edc873346446e", Pod:"coredns-674b8bbfcf-jsmlw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2f76222e51", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:43.263007 containerd[1846]: 2026-04-24 
23:58:43.229 [INFO][6744] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Apr 24 23:58:43.263007 containerd[1846]: 2026-04-24 23:58:43.230 [INFO][6744] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" iface="eth0" netns="" Apr 24 23:58:43.263007 containerd[1846]: 2026-04-24 23:58:43.230 [INFO][6744] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Apr 24 23:58:43.263007 containerd[1846]: 2026-04-24 23:58:43.230 [INFO][6744] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Apr 24 23:58:43.263007 containerd[1846]: 2026-04-24 23:58:43.251 [INFO][6751] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" HandleID="k8s-pod-network.5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0" Apr 24 23:58:43.263007 containerd[1846]: 2026-04-24 23:58:43.251 [INFO][6751] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:43.263007 containerd[1846]: 2026-04-24 23:58:43.252 [INFO][6751] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:43.263007 containerd[1846]: 2026-04-24 23:58:43.258 [WARNING][6751] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" HandleID="k8s-pod-network.5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0" Apr 24 23:58:43.263007 containerd[1846]: 2026-04-24 23:58:43.258 [INFO][6751] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" HandleID="k8s-pod-network.5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--jsmlw-eth0" Apr 24 23:58:43.263007 containerd[1846]: 2026-04-24 23:58:43.260 [INFO][6751] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:43.263007 containerd[1846]: 2026-04-24 23:58:43.261 [INFO][6744] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40" Apr 24 23:58:43.264766 containerd[1846]: time="2026-04-24T23:58:43.263760120Z" level=info msg="TearDown network for sandbox \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\" successfully" Apr 24 23:58:43.272461 containerd[1846]: time="2026-04-24T23:58:43.272289747Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:58:43.272565 containerd[1846]: time="2026-04-24T23:58:43.272480649Z" level=info msg="RemovePodSandbox \"5957e0c678f4b82a7c30d788a9b9241a640b284b41fd90fcf0ff7d2225e88a40\" returns successfully" Apr 24 23:58:43.273182 containerd[1846]: time="2026-04-24T23:58:43.273152759Z" level=info msg="StopPodSandbox for \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\"" Apr 24 23:58:43.337771 containerd[1846]: 2026-04-24 23:58:43.307 [WARNING][6766] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0", GenerateName:"calico-kube-controllers-796d4d88bb-", Namespace:"calico-system", SelfLink:"", UID:"bab90d29-b4f6-45e4-a59d-c7270debd2c4", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"796d4d88bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f", Pod:"calico-kube-controllers-796d4d88bb-v74px", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.6/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali911c256085b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:43.337771 containerd[1846]: 2026-04-24 23:58:43.307 [INFO][6766] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Apr 24 23:58:43.337771 containerd[1846]: 2026-04-24 23:58:43.307 [INFO][6766] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" iface="eth0" netns="" Apr 24 23:58:43.337771 containerd[1846]: 2026-04-24 23:58:43.307 [INFO][6766] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Apr 24 23:58:43.337771 containerd[1846]: 2026-04-24 23:58:43.307 [INFO][6766] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Apr 24 23:58:43.337771 containerd[1846]: 2026-04-24 23:58:43.328 [INFO][6774] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" HandleID="k8s-pod-network.707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" Apr 24 23:58:43.337771 containerd[1846]: 2026-04-24 23:58:43.328 [INFO][6774] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:43.337771 containerd[1846]: 2026-04-24 23:58:43.328 [INFO][6774] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:43.337771 containerd[1846]: 2026-04-24 23:58:43.334 [WARNING][6774] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" HandleID="k8s-pod-network.707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" Apr 24 23:58:43.337771 containerd[1846]: 2026-04-24 23:58:43.334 [INFO][6774] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" HandleID="k8s-pod-network.707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" Apr 24 23:58:43.337771 containerd[1846]: 2026-04-24 23:58:43.335 [INFO][6774] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:43.337771 containerd[1846]: 2026-04-24 23:58:43.336 [INFO][6766] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Apr 24 23:58:43.338699 containerd[1846]: time="2026-04-24T23:58:43.337794118Z" level=info msg="TearDown network for sandbox \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\" successfully" Apr 24 23:58:43.338699 containerd[1846]: time="2026-04-24T23:58:43.337825019Z" level=info msg="StopPodSandbox for \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\" returns successfully" Apr 24 23:58:43.338699 containerd[1846]: time="2026-04-24T23:58:43.338496429Z" level=info msg="RemovePodSandbox for \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\"" Apr 24 23:58:43.338699 containerd[1846]: time="2026-04-24T23:58:43.338530529Z" level=info msg="Forcibly stopping sandbox \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\"" Apr 24 23:58:43.403012 containerd[1846]: 2026-04-24 23:58:43.371 [WARNING][6789] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0", GenerateName:"calico-kube-controllers-796d4d88bb-", Namespace:"calico-system", SelfLink:"", UID:"bab90d29-b4f6-45e4-a59d-c7270debd2c4", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"796d4d88bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"e43a93714a568ba6bdf05290b110d55ebd237ff21350f53bd3273df0b6a9d48f", Pod:"calico-kube-controllers-796d4d88bb-v74px", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali911c256085b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:43.403012 containerd[1846]: 2026-04-24 23:58:43.373 [INFO][6789] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Apr 24 23:58:43.403012 containerd[1846]: 2026-04-24 23:58:43.373 [INFO][6789] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" iface="eth0" netns="" Apr 24 23:58:43.403012 containerd[1846]: 2026-04-24 23:58:43.373 [INFO][6789] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Apr 24 23:58:43.403012 containerd[1846]: 2026-04-24 23:58:43.373 [INFO][6789] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Apr 24 23:58:43.403012 containerd[1846]: 2026-04-24 23:58:43.393 [INFO][6796] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" HandleID="k8s-pod-network.707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" Apr 24 23:58:43.403012 containerd[1846]: 2026-04-24 23:58:43.393 [INFO][6796] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:43.403012 containerd[1846]: 2026-04-24 23:58:43.393 [INFO][6796] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:43.403012 containerd[1846]: 2026-04-24 23:58:43.399 [WARNING][6796] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" HandleID="k8s-pod-network.707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" Apr 24 23:58:43.403012 containerd[1846]: 2026-04-24 23:58:43.399 [INFO][6796] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" HandleID="k8s-pod-network.707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-calico--kube--controllers--796d4d88bb--v74px-eth0" Apr 24 23:58:43.403012 containerd[1846]: 2026-04-24 23:58:43.400 [INFO][6796] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:43.403012 containerd[1846]: 2026-04-24 23:58:43.401 [INFO][6789] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a" Apr 24 23:58:43.403657 containerd[1846]: time="2026-04-24T23:58:43.403039986Z" level=info msg="TearDown network for sandbox \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\" successfully" Apr 24 23:58:43.412726 containerd[1846]: time="2026-04-24T23:58:43.412166922Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:58:43.412726 containerd[1846]: time="2026-04-24T23:58:43.412287824Z" level=info msg="RemovePodSandbox \"707bda3d6d1c787ba5898bbd6e7063e023d28bbf24eb14aaf107ddc93e19dc8a\" returns successfully" Apr 24 23:58:43.413094 containerd[1846]: time="2026-04-24T23:58:43.413061835Z" level=info msg="StopPodSandbox for \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\"" Apr 24 23:58:43.506360 containerd[1846]: 2026-04-24 23:58:43.463 [WARNING][6810] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"0fab5c65-a5ea-4224-bb5b-3fa0147534b7", ResourceVersion:"1253", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32", Pod:"goldmane-5b85766d88-br5gx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali33dedc8c88e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:43.506360 containerd[1846]: 2026-04-24 23:58:43.464 [INFO][6810] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Apr 24 23:58:43.506360 containerd[1846]: 2026-04-24 23:58:43.464 [INFO][6810] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" iface="eth0" netns="" Apr 24 23:58:43.506360 containerd[1846]: 2026-04-24 23:58:43.464 [INFO][6810] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Apr 24 23:58:43.506360 containerd[1846]: 2026-04-24 23:58:43.464 [INFO][6810] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Apr 24 23:58:43.506360 containerd[1846]: 2026-04-24 23:58:43.493 [INFO][6818] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" HandleID="k8s-pod-network.1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" Apr 24 23:58:43.506360 containerd[1846]: 2026-04-24 23:58:43.493 [INFO][6818] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:43.506360 containerd[1846]: 2026-04-24 23:58:43.493 [INFO][6818] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:43.506360 containerd[1846]: 2026-04-24 23:58:43.501 [WARNING][6818] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" HandleID="k8s-pod-network.1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" Apr 24 23:58:43.506360 containerd[1846]: 2026-04-24 23:58:43.501 [INFO][6818] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" HandleID="k8s-pod-network.1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" Apr 24 23:58:43.506360 containerd[1846]: 2026-04-24 23:58:43.503 [INFO][6818] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:43.506360 containerd[1846]: 2026-04-24 23:58:43.504 [INFO][6810] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Apr 24 23:58:43.506360 containerd[1846]: time="2026-04-24T23:58:43.506200517Z" level=info msg="TearDown network for sandbox \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\" successfully" Apr 24 23:58:43.506360 containerd[1846]: time="2026-04-24T23:58:43.506229617Z" level=info msg="StopPodSandbox for \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\" returns successfully" Apr 24 23:58:43.508298 containerd[1846]: time="2026-04-24T23:58:43.506771925Z" level=info msg="RemovePodSandbox for \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\"" Apr 24 23:58:43.508298 containerd[1846]: time="2026-04-24T23:58:43.506811226Z" level=info msg="Forcibly stopping sandbox \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\"" Apr 24 23:58:43.640082 containerd[1846]: 2026-04-24 23:58:43.568 [WARNING][6832] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"0fab5c65-a5ea-4224-bb5b-3fa0147534b7", ResourceVersion:"1253", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"126af3e8423bca6fa5d719f20157431862dd6596300a6f44e4d0e481cf530e32", Pod:"goldmane-5b85766d88-br5gx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.26.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali33dedc8c88e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:43.640082 containerd[1846]: 2026-04-24 23:58:43.570 [INFO][6832] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Apr 24 23:58:43.640082 containerd[1846]: 2026-04-24 23:58:43.570 [INFO][6832] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" iface="eth0" netns="" Apr 24 23:58:43.640082 containerd[1846]: 2026-04-24 23:58:43.570 [INFO][6832] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Apr 24 23:58:43.640082 containerd[1846]: 2026-04-24 23:58:43.571 [INFO][6832] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Apr 24 23:58:43.640082 containerd[1846]: 2026-04-24 23:58:43.618 [INFO][6839] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" HandleID="k8s-pod-network.1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" Apr 24 23:58:43.640082 containerd[1846]: 2026-04-24 23:58:43.618 [INFO][6839] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:43.640082 containerd[1846]: 2026-04-24 23:58:43.618 [INFO][6839] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:43.640082 containerd[1846]: 2026-04-24 23:58:43.631 [WARNING][6839] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" HandleID="k8s-pod-network.1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" Apr 24 23:58:43.640082 containerd[1846]: 2026-04-24 23:58:43.632 [INFO][6839] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" HandleID="k8s-pod-network.1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-goldmane--5b85766d88--br5gx-eth0" Apr 24 23:58:43.640082 containerd[1846]: 2026-04-24 23:58:43.636 [INFO][6839] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:43.640082 containerd[1846]: 2026-04-24 23:58:43.637 [INFO][6832] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43" Apr 24 23:58:43.640082 containerd[1846]: time="2026-04-24T23:58:43.639903798Z" level=info msg="TearDown network for sandbox \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\" successfully" Apr 24 23:58:43.651314 containerd[1846]: time="2026-04-24T23:58:43.651256465Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:58:43.651530 containerd[1846]: time="2026-04-24T23:58:43.651339667Z" level=info msg="RemovePodSandbox \"1bde15324d94ccd5f5315b7cfd790d97ed08f2b3bce31435dd4d5109794cfd43\" returns successfully" Apr 24 23:58:43.653934 containerd[1846]: time="2026-04-24T23:58:43.653908705Z" level=info msg="StopPodSandbox for \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\"" Apr 24 23:58:43.768951 containerd[1846]: 2026-04-24 23:58:43.705 [WARNING][6853] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7eed60e0-dfe6-44af-9c18-1eee2edda56b", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe", Pod:"coredns-674b8bbfcf-f5g7p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid0adbdf9129", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:43.768951 containerd[1846]: 2026-04-24 23:58:43.706 [INFO][6853] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Apr 24 23:58:43.768951 containerd[1846]: 2026-04-24 23:58:43.706 [INFO][6853] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" iface="eth0" netns="" Apr 24 23:58:43.768951 containerd[1846]: 2026-04-24 23:58:43.707 [INFO][6853] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Apr 24 23:58:43.768951 containerd[1846]: 2026-04-24 23:58:43.707 [INFO][6853] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Apr 24 23:58:43.768951 containerd[1846]: 2026-04-24 23:58:43.751 [INFO][6860] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" HandleID="k8s-pod-network.f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" Apr 24 23:58:43.768951 containerd[1846]: 2026-04-24 23:58:43.752 [INFO][6860] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:58:43.768951 containerd[1846]: 2026-04-24 23:58:43.752 [INFO][6860] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:43.768951 containerd[1846]: 2026-04-24 23:58:43.760 [WARNING][6860] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" HandleID="k8s-pod-network.f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" Apr 24 23:58:43.768951 containerd[1846]: 2026-04-24 23:58:43.760 [INFO][6860] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" HandleID="k8s-pod-network.f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" Apr 24 23:58:43.768951 containerd[1846]: 2026-04-24 23:58:43.765 [INFO][6860] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:43.768951 containerd[1846]: 2026-04-24 23:58:43.766 [INFO][6853] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Apr 24 23:58:43.768951 containerd[1846]: time="2026-04-24T23:58:43.768809403Z" level=info msg="TearDown network for sandbox \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\" successfully" Apr 24 23:58:43.768951 containerd[1846]: time="2026-04-24T23:58:43.768840704Z" level=info msg="StopPodSandbox for \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\" returns successfully" Apr 24 23:58:43.774030 containerd[1846]: time="2026-04-24T23:58:43.771499143Z" level=info msg="RemovePodSandbox for \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\"" Apr 24 23:58:43.774030 containerd[1846]: time="2026-04-24T23:58:43.771537043Z" level=info msg="Forcibly stopping sandbox \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\"" Apr 24 23:58:43.891072 containerd[1846]: 2026-04-24 23:58:43.833 [WARNING][6874] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7eed60e0-dfe6-44af-9c18-1eee2edda56b", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-bfbb2fd0ff", ContainerID:"4896fee174f8bf4c02d875df4d4c18bc578ef967f3102e8a5b3cf61dfebcd8fe", Pod:"coredns-674b8bbfcf-f5g7p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid0adbdf9129", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:58:43.891072 containerd[1846]: 2026-04-24 
23:58:43.833 [INFO][6874] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Apr 24 23:58:43.891072 containerd[1846]: 2026-04-24 23:58:43.833 [INFO][6874] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" iface="eth0" netns="" Apr 24 23:58:43.891072 containerd[1846]: 2026-04-24 23:58:43.833 [INFO][6874] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Apr 24 23:58:43.891072 containerd[1846]: 2026-04-24 23:58:43.833 [INFO][6874] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Apr 24 23:58:43.891072 containerd[1846]: 2026-04-24 23:58:43.869 [INFO][6882] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" HandleID="k8s-pod-network.f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" Apr 24 23:58:43.891072 containerd[1846]: 2026-04-24 23:58:43.870 [INFO][6882] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:58:43.891072 containerd[1846]: 2026-04-24 23:58:43.872 [INFO][6882] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:58:43.891072 containerd[1846]: 2026-04-24 23:58:43.881 [WARNING][6882] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" HandleID="k8s-pod-network.f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" Apr 24 23:58:43.891072 containerd[1846]: 2026-04-24 23:58:43.882 [INFO][6882] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" HandleID="k8s-pod-network.f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Workload="ci--4081.3.6--n--bfbb2fd0ff-k8s-coredns--674b8bbfcf--f5g7p-eth0" Apr 24 23:58:43.891072 containerd[1846]: 2026-04-24 23:58:43.884 [INFO][6882] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:58:43.891072 containerd[1846]: 2026-04-24 23:58:43.886 [INFO][6874] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6" Apr 24 23:58:43.891719 containerd[1846]: time="2026-04-24T23:58:43.891060610Z" level=info msg="TearDown network for sandbox \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\" successfully" Apr 24 23:58:43.906515 containerd[1846]: time="2026-04-24T23:58:43.906457938Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 24 23:58:43.906662 containerd[1846]: time="2026-04-24T23:58:43.906551139Z" level=info msg="RemovePodSandbox \"f7defe867564fa10fff3b2de967df85c1bf36b40cc721e5f20cd9b28a69283d6\" returns successfully" Apr 24 23:59:22.533474 systemd[1]: Started sshd@8-10.0.0.31:22-4.175.71.9:49636.service - OpenSSH per-connection server daemon (4.175.71.9:49636). 
Apr 24 23:59:22.640465 sshd[7018]: Accepted publickey for core from 4.175.71.9 port 49636 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:59:22.642024 sshd[7018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:59:22.647221 systemd-logind[1810]: New session 10 of user core. Apr 24 23:59:22.653040 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 24 23:59:22.817906 sshd[7018]: pam_unix(sshd:session): session closed for user core Apr 24 23:59:22.822893 systemd[1]: sshd@8-10.0.0.31:22-4.175.71.9:49636.service: Deactivated successfully. Apr 24 23:59:22.828338 systemd[1]: session-10.scope: Deactivated successfully. Apr 24 23:59:22.829613 systemd-logind[1810]: Session 10 logged out. Waiting for processes to exit. Apr 24 23:59:22.830856 systemd-logind[1810]: Removed session 10. Apr 24 23:59:27.838379 systemd[1]: Started sshd@9-10.0.0.31:22-4.175.71.9:48716.service - OpenSSH per-connection server daemon (4.175.71.9:48716). Apr 24 23:59:27.946717 sshd[7033]: Accepted publickey for core from 4.175.71.9 port 48716 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:59:27.948223 sshd[7033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:59:27.953178 systemd-logind[1810]: New session 11 of user core. Apr 24 23:59:27.960988 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 24 23:59:28.117295 sshd[7033]: pam_unix(sshd:session): session closed for user core Apr 24 23:59:28.121867 systemd[1]: sshd@9-10.0.0.31:22-4.175.71.9:48716.service: Deactivated successfully. Apr 24 23:59:28.125957 systemd-logind[1810]: Session 11 logged out. Waiting for processes to exit. Apr 24 23:59:28.127316 systemd[1]: session-11.scope: Deactivated successfully. Apr 24 23:59:28.128340 systemd-logind[1810]: Removed session 11. 
Apr 24 23:59:33.140227 systemd[1]: Started sshd@10-10.0.0.31:22-4.175.71.9:48726.service - OpenSSH per-connection server daemon (4.175.71.9:48726). Apr 24 23:59:33.255059 sshd[7087]: Accepted publickey for core from 4.175.71.9 port 48726 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:59:33.256512 sshd[7087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:59:33.260563 systemd-logind[1810]: New session 12 of user core. Apr 24 23:59:33.265357 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 24 23:59:33.417624 sshd[7087]: pam_unix(sshd:session): session closed for user core Apr 24 23:59:33.420909 systemd[1]: sshd@10-10.0.0.31:22-4.175.71.9:48726.service: Deactivated successfully. Apr 24 23:59:33.426506 systemd-logind[1810]: Session 12 logged out. Waiting for processes to exit. Apr 24 23:59:33.427019 systemd[1]: session-12.scope: Deactivated successfully. Apr 24 23:59:33.428501 systemd-logind[1810]: Removed session 12. Apr 24 23:59:38.440061 systemd[1]: Started sshd@11-10.0.0.31:22-4.175.71.9:36902.service - OpenSSH per-connection server daemon (4.175.71.9:36902). Apr 24 23:59:38.549058 sshd[7122]: Accepted publickey for core from 4.175.71.9 port 36902 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:59:38.550578 sshd[7122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:59:38.554801 systemd-logind[1810]: New session 13 of user core. Apr 24 23:59:38.559502 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 24 23:59:38.723594 sshd[7122]: pam_unix(sshd:session): session closed for user core Apr 24 23:59:38.727063 systemd[1]: sshd@11-10.0.0.31:22-4.175.71.9:36902.service: Deactivated successfully. Apr 24 23:59:38.731666 systemd[1]: session-13.scope: Deactivated successfully. Apr 24 23:59:38.733082 systemd-logind[1810]: Session 13 logged out. Waiting for processes to exit. 
Apr 24 23:59:38.735041 systemd-logind[1810]: Removed session 13. Apr 24 23:59:43.746379 systemd[1]: Started sshd@12-10.0.0.31:22-4.175.71.9:36906.service - OpenSSH per-connection server daemon (4.175.71.9:36906). Apr 24 23:59:43.852244 sshd[7181]: Accepted publickey for core from 4.175.71.9 port 36906 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:59:43.853656 sshd[7181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:59:43.858982 systemd-logind[1810]: New session 14 of user core. Apr 24 23:59:43.862056 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 24 23:59:44.013901 sshd[7181]: pam_unix(sshd:session): session closed for user core Apr 24 23:59:44.020365 systemd[1]: sshd@12-10.0.0.31:22-4.175.71.9:36906.service: Deactivated successfully. Apr 24 23:59:44.020533 systemd-logind[1810]: Session 14 logged out. Waiting for processes to exit. Apr 24 23:59:44.025090 systemd[1]: session-14.scope: Deactivated successfully. Apr 24 23:59:44.026256 systemd-logind[1810]: Removed session 14. Apr 24 23:59:44.037015 systemd[1]: Started sshd@13-10.0.0.31:22-4.175.71.9:36918.service - OpenSSH per-connection server daemon (4.175.71.9:36918). Apr 24 23:59:44.145150 sshd[7195]: Accepted publickey for core from 4.175.71.9 port 36918 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:59:44.147116 sshd[7195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:59:44.153498 systemd-logind[1810]: New session 15 of user core. Apr 24 23:59:44.159471 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 24 23:59:44.355008 sshd[7195]: pam_unix(sshd:session): session closed for user core Apr 24 23:59:44.362082 systemd[1]: sshd@13-10.0.0.31:22-4.175.71.9:36918.service: Deactivated successfully. Apr 24 23:59:44.373838 systemd-logind[1810]: Session 15 logged out. Waiting for processes to exit. 
Apr 24 23:59:44.374027 systemd[1]: session-15.scope: Deactivated successfully. Apr 24 23:59:44.390377 systemd[1]: Started sshd@14-10.0.0.31:22-4.175.71.9:36924.service - OpenSSH per-connection server daemon (4.175.71.9:36924). Apr 24 23:59:44.391718 systemd-logind[1810]: Removed session 15. Apr 24 23:59:44.501769 sshd[7207]: Accepted publickey for core from 4.175.71.9 port 36924 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:59:44.502566 sshd[7207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:59:44.506796 systemd-logind[1810]: New session 16 of user core. Apr 24 23:59:44.514809 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 24 23:59:44.674312 sshd[7207]: pam_unix(sshd:session): session closed for user core Apr 24 23:59:44.677631 systemd[1]: sshd@14-10.0.0.31:22-4.175.71.9:36924.service: Deactivated successfully. Apr 24 23:59:44.683960 systemd[1]: session-16.scope: Deactivated successfully. Apr 24 23:59:44.685166 systemd-logind[1810]: Session 16 logged out. Waiting for processes to exit. Apr 24 23:59:44.686157 systemd-logind[1810]: Removed session 16. Apr 24 23:59:49.696055 systemd[1]: Started sshd@15-10.0.0.31:22-4.175.71.9:35670.service - OpenSSH per-connection server daemon (4.175.71.9:35670). Apr 24 23:59:49.804085 sshd[7240]: Accepted publickey for core from 4.175.71.9 port 35670 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:59:49.805833 sshd[7240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:59:49.810463 systemd-logind[1810]: New session 17 of user core. Apr 24 23:59:49.817772 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 24 23:59:49.974069 sshd[7240]: pam_unix(sshd:session): session closed for user core Apr 24 23:59:49.978162 systemd-logind[1810]: Session 17 logged out. Waiting for processes to exit. 
Apr 24 23:59:49.979364 systemd[1]: sshd@15-10.0.0.31:22-4.175.71.9:35670.service: Deactivated successfully. Apr 24 23:59:49.984201 systemd[1]: session-17.scope: Deactivated successfully. Apr 24 23:59:49.985424 systemd-logind[1810]: Removed session 17. Apr 24 23:59:54.996199 systemd[1]: Started sshd@16-10.0.0.31:22-4.175.71.9:35674.service - OpenSSH per-connection server daemon (4.175.71.9:35674). Apr 24 23:59:55.108957 sshd[7254]: Accepted publickey for core from 4.175.71.9 port 35674 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:59:55.110424 sshd[7254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:59:55.115037 systemd-logind[1810]: New session 18 of user core. Apr 24 23:59:55.119256 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 24 23:59:55.269874 sshd[7254]: pam_unix(sshd:session): session closed for user core Apr 24 23:59:55.274147 systemd[1]: sshd@16-10.0.0.31:22-4.175.71.9:35674.service: Deactivated successfully. Apr 24 23:59:55.278404 systemd[1]: session-18.scope: Deactivated successfully. Apr 24 23:59:55.279255 systemd-logind[1810]: Session 18 logged out. Waiting for processes to exit. Apr 24 23:59:55.280301 systemd-logind[1810]: Removed session 18. Apr 25 00:00:00.295046 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Apr 25 00:00:00.299293 systemd[1]: Started sshd@17-10.0.0.31:22-4.175.71.9:50546.service - OpenSSH per-connection server daemon (4.175.71.9:50546). Apr 25 00:00:00.309177 systemd[1]: logrotate.service: Deactivated successfully. Apr 25 00:00:00.418661 sshd[7269]: Accepted publickey for core from 4.175.71.9 port 50546 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 25 00:00:00.420223 sshd[7269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:00.430586 systemd-logind[1810]: New session 19 of user core. 
Apr 25 00:00:00.436069 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 25 00:00:00.604666 sshd[7269]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:00.610181 systemd[1]: sshd@17-10.0.0.31:22-4.175.71.9:50546.service: Deactivated successfully. Apr 25 00:00:00.616845 systemd[1]: session-19.scope: Deactivated successfully. Apr 25 00:00:00.617915 systemd-logind[1810]: Session 19 logged out. Waiting for processes to exit. Apr 25 00:00:00.620608 systemd-logind[1810]: Removed session 19. Apr 25 00:00:05.628053 systemd[1]: Started sshd@18-10.0.0.31:22-4.175.71.9:51588.service - OpenSSH per-connection server daemon (4.175.71.9:51588). Apr 25 00:00:05.744645 sshd[7303]: Accepted publickey for core from 4.175.71.9 port 51588 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 25 00:00:05.746119 sshd[7303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:05.751076 systemd-logind[1810]: New session 20 of user core. Apr 25 00:00:05.755618 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 25 00:00:05.909975 sshd[7303]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:05.916044 systemd-logind[1810]: Session 20 logged out. Waiting for processes to exit. Apr 25 00:00:05.917115 systemd[1]: sshd@18-10.0.0.31:22-4.175.71.9:51588.service: Deactivated successfully. Apr 25 00:00:05.920759 systemd[1]: session-20.scope: Deactivated successfully. Apr 25 00:00:05.922147 systemd-logind[1810]: Removed session 20. Apr 25 00:00:05.931022 systemd[1]: Started sshd@19-10.0.0.31:22-4.175.71.9:51594.service - OpenSSH per-connection server daemon (4.175.71.9:51594). 
Apr 25 00:00:06.038063 sshd[7317]: Accepted publickey for core from 4.175.71.9 port 51594 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 25 00:00:06.039559 sshd[7317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:06.044320 systemd-logind[1810]: New session 21 of user core. Apr 25 00:00:06.051237 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 25 00:00:06.262229 sshd[7317]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:06.265607 systemd[1]: sshd@19-10.0.0.31:22-4.175.71.9:51594.service: Deactivated successfully. Apr 25 00:00:06.270638 systemd[1]: session-21.scope: Deactivated successfully. Apr 25 00:00:06.271964 systemd-logind[1810]: Session 21 logged out. Waiting for processes to exit. Apr 25 00:00:06.273254 systemd-logind[1810]: Removed session 21. Apr 25 00:00:06.281994 systemd[1]: Started sshd@20-10.0.0.31:22-4.175.71.9:51600.service - OpenSSH per-connection server daemon (4.175.71.9:51600). Apr 25 00:00:06.395884 sshd[7329]: Accepted publickey for core from 4.175.71.9 port 51600 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 25 00:00:06.396453 sshd[7329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:06.400807 systemd-logind[1810]: New session 22 of user core. Apr 25 00:00:06.406107 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 25 00:00:06.991488 sshd[7329]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:06.999857 systemd-logind[1810]: Session 22 logged out. Waiting for processes to exit. Apr 25 00:00:07.003599 systemd[1]: sshd@20-10.0.0.31:22-4.175.71.9:51600.service: Deactivated successfully. Apr 25 00:00:07.014619 systemd[1]: session-22.scope: Deactivated successfully. Apr 25 00:00:07.036022 systemd[1]: Started sshd@21-10.0.0.31:22-4.175.71.9:51616.service - OpenSSH per-connection server daemon (4.175.71.9:51616). 
Apr 25 00:00:07.037042 systemd-logind[1810]: Removed session 22. Apr 25 00:00:07.146317 sshd[7374]: Accepted publickey for core from 4.175.71.9 port 51616 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 25 00:00:07.146958 sshd[7374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:07.151799 systemd-logind[1810]: New session 23 of user core. Apr 25 00:00:07.157081 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 25 00:00:07.438894 sshd[7374]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:07.447440 systemd[1]: sshd@21-10.0.0.31:22-4.175.71.9:51616.service: Deactivated successfully. Apr 25 00:00:07.455158 systemd[1]: session-23.scope: Deactivated successfully. Apr 25 00:00:07.455241 systemd-logind[1810]: Session 23 logged out. Waiting for processes to exit. Apr 25 00:00:07.466028 systemd[1]: Started sshd@22-10.0.0.31:22-4.175.71.9:51624.service - OpenSSH per-connection server daemon (4.175.71.9:51624). Apr 25 00:00:07.466925 systemd-logind[1810]: Removed session 23. Apr 25 00:00:07.575278 sshd[7386]: Accepted publickey for core from 4.175.71.9 port 51624 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 25 00:00:07.576735 sshd[7386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:07.581662 systemd-logind[1810]: New session 24 of user core. Apr 25 00:00:07.586389 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 25 00:00:07.740614 sshd[7386]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:07.745342 systemd[1]: sshd@22-10.0.0.31:22-4.175.71.9:51624.service: Deactivated successfully. Apr 25 00:00:07.750264 systemd[1]: session-24.scope: Deactivated successfully. Apr 25 00:00:07.751278 systemd-logind[1810]: Session 24 logged out. Waiting for processes to exit. Apr 25 00:00:07.752270 systemd-logind[1810]: Removed session 24. 
Apr 25 00:00:12.763044 systemd[1]: Started sshd@23-10.0.0.31:22-4.175.71.9:51634.service - OpenSSH per-connection server daemon (4.175.71.9:51634). Apr 25 00:00:12.870406 sshd[7422]: Accepted publickey for core from 4.175.71.9 port 51634 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 25 00:00:12.871874 sshd[7422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:12.880824 systemd-logind[1810]: New session 25 of user core. Apr 25 00:00:12.883071 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 25 00:00:13.041218 sshd[7422]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:13.047378 systemd[1]: sshd@23-10.0.0.31:22-4.175.71.9:51634.service: Deactivated successfully. Apr 25 00:00:13.047880 systemd-logind[1810]: Session 25 logged out. Waiting for processes to exit. Apr 25 00:00:13.051945 systemd[1]: session-25.scope: Deactivated successfully. Apr 25 00:00:13.053215 systemd-logind[1810]: Removed session 25. Apr 25 00:00:18.067229 systemd[1]: Started sshd@24-10.0.0.31:22-4.175.71.9:55078.service - OpenSSH per-connection server daemon (4.175.71.9:55078). Apr 25 00:00:18.176625 sshd[7439]: Accepted publickey for core from 4.175.71.9 port 55078 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 25 00:00:18.178545 sshd[7439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:18.183605 systemd-logind[1810]: New session 26 of user core. Apr 25 00:00:18.187025 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 25 00:00:18.348052 sshd[7439]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:18.352077 systemd[1]: sshd@24-10.0.0.31:22-4.175.71.9:55078.service: Deactivated successfully. Apr 25 00:00:18.357205 systemd[1]: session-26.scope: Deactivated successfully. Apr 25 00:00:18.358413 systemd-logind[1810]: Session 26 logged out. Waiting for processes to exit. 
Apr 25 00:00:18.359344 systemd-logind[1810]: Removed session 26.
Apr 25 00:00:23.376379 systemd[1]: Started sshd@25-10.0.0.31:22-4.175.71.9:55088.service - OpenSSH per-connection server daemon (4.175.71.9:55088).
Apr 25 00:00:23.491769 sshd[7466]: Accepted publickey for core from 4.175.71.9 port 55088 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:00:23.493103 sshd[7466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:00:23.498091 systemd-logind[1810]: New session 27 of user core.
Apr 25 00:00:23.503082 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 25 00:00:23.657248 sshd[7466]: pam_unix(sshd:session): session closed for user core
Apr 25 00:00:23.661592 systemd[1]: sshd@25-10.0.0.31:22-4.175.71.9:55088.service: Deactivated successfully.
Apr 25 00:00:23.666533 systemd[1]: session-27.scope: Deactivated successfully.
Apr 25 00:00:23.667416 systemd-logind[1810]: Session 27 logged out. Waiting for processes to exit.
Apr 25 00:00:23.668377 systemd-logind[1810]: Removed session 27.
Apr 25 00:00:28.540712 systemd[1]: run-containerd-runc-k8s.io-1ff08a9b238d442fd0a12323452ce97d938962a887689353f34caa751cbebe1d-runc.ZUNm3R.mount: Deactivated successfully.
Apr 25 00:00:28.680088 systemd[1]: Started sshd@26-10.0.0.31:22-4.175.71.9:59996.service - OpenSSH per-connection server daemon (4.175.71.9:59996).
Apr 25 00:00:28.786357 sshd[7502]: Accepted publickey for core from 4.175.71.9 port 59996 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:00:28.787965 sshd[7502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:00:28.793398 systemd-logind[1810]: New session 28 of user core.
Apr 25 00:00:28.798067 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 25 00:00:28.947105 sshd[7502]: pam_unix(sshd:session): session closed for user core
Apr 25 00:00:28.950031 systemd[1]: sshd@26-10.0.0.31:22-4.175.71.9:59996.service: Deactivated successfully.
Apr 25 00:00:28.956225 systemd-logind[1810]: Session 28 logged out. Waiting for processes to exit.
Apr 25 00:00:28.956974 systemd[1]: session-28.scope: Deactivated successfully.
Apr 25 00:00:28.959148 systemd-logind[1810]: Removed session 28.
Apr 25 00:00:31.625944 systemd[1]: run-containerd-runc-k8s.io-1d2aaf9cd5fc5487b1ba63545d90f3ff5bb9c26c2cf4b96f40b80f5364692a39-runc.F9F9RP.mount: Deactivated successfully.
Apr 25 00:00:33.968414 systemd[1]: Started sshd@27-10.0.0.31:22-4.175.71.9:60000.service - OpenSSH per-connection server daemon (4.175.71.9:60000).
Apr 25 00:00:34.083870 sshd[7535]: Accepted publickey for core from 4.175.71.9 port 60000 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:00:34.085470 sshd[7535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:00:34.091628 systemd-logind[1810]: New session 29 of user core.
Apr 25 00:00:34.097344 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 25 00:00:34.255639 sshd[7535]: pam_unix(sshd:session): session closed for user core
Apr 25 00:00:34.258938 systemd[1]: sshd@27-10.0.0.31:22-4.175.71.9:60000.service: Deactivated successfully.
Apr 25 00:00:34.264654 systemd[1]: session-29.scope: Deactivated successfully.
Apr 25 00:00:34.265808 systemd-logind[1810]: Session 29 logged out. Waiting for processes to exit.
Apr 25 00:00:34.266698 systemd-logind[1810]: Removed session 29.
Apr 25 00:00:39.282049 systemd[1]: Started sshd@28-10.0.0.31:22-4.175.71.9:42764.service - OpenSSH per-connection server daemon (4.175.71.9:42764).
Apr 25 00:00:39.390707 sshd[7575]: Accepted publickey for core from 4.175.71.9 port 42764 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:00:39.392162 sshd[7575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:00:39.396847 systemd-logind[1810]: New session 30 of user core.
Apr 25 00:00:39.402337 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 25 00:00:39.562915 sshd[7575]: pam_unix(sshd:session): session closed for user core
Apr 25 00:00:39.566212 systemd[1]: sshd@28-10.0.0.31:22-4.175.71.9:42764.service: Deactivated successfully.
Apr 25 00:00:39.572099 systemd-logind[1810]: Session 30 logged out. Waiting for processes to exit.
Apr 25 00:00:39.572787 systemd[1]: session-30.scope: Deactivated successfully.
Apr 25 00:00:39.573899 systemd-logind[1810]: Removed session 30.
Apr 25 00:00:41.330297 systemd[1]: run-containerd-runc-k8s.io-1d2aaf9cd5fc5487b1ba63545d90f3ff5bb9c26c2cf4b96f40b80f5364692a39-runc.Dpg8IW.mount: Deactivated successfully.
Apr 25 00:00:41.496434 systemd[1]: run-containerd-runc-k8s.io-a80e0b7b8dc0f6928bd998a5a211ec21a17032ca38247613119881b3fd0ff665-runc.EP6blH.mount: Deactivated successfully.
Apr 25 00:00:44.586032 systemd[1]: Started sshd@29-10.0.0.31:22-4.175.71.9:42766.service - OpenSSH per-connection server daemon (4.175.71.9:42766).
Apr 25 00:00:44.692873 sshd[7631]: Accepted publickey for core from 4.175.71.9 port 42766 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:00:44.694414 sshd[7631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:00:44.699484 systemd-logind[1810]: New session 31 of user core.
Apr 25 00:00:44.705024 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 25 00:00:44.855021 sshd[7631]: pam_unix(sshd:session): session closed for user core
Apr 25 00:00:44.858815 systemd[1]: sshd@29-10.0.0.31:22-4.175.71.9:42766.service: Deactivated successfully.
Apr 25 00:00:44.864813 systemd-logind[1810]: Session 31 logged out. Waiting for processes to exit.
Apr 25 00:00:44.865666 systemd[1]: session-31.scope: Deactivated successfully.
Apr 25 00:00:44.866840 systemd-logind[1810]: Removed session 31.
Apr 25 00:00:49.878347 systemd[1]: Started sshd@30-10.0.0.31:22-4.175.71.9:58976.service - OpenSSH per-connection server daemon (4.175.71.9:58976).
Apr 25 00:00:49.986339 sshd[7665]: Accepted publickey for core from 4.175.71.9 port 58976 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:00:49.987895 sshd[7665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:00:49.992621 systemd-logind[1810]: New session 32 of user core.
Apr 25 00:00:50.000121 systemd[1]: Started session-32.scope - Session 32 of User core.
Apr 25 00:00:50.151918 sshd[7665]: pam_unix(sshd:session): session closed for user core
Apr 25 00:00:50.156652 systemd[1]: sshd@30-10.0.0.31:22-4.175.71.9:58976.service: Deactivated successfully.
Apr 25 00:00:50.161252 systemd[1]: session-32.scope: Deactivated successfully.
Apr 25 00:00:50.162163 systemd-logind[1810]: Session 32 logged out. Waiting for processes to exit.
Apr 25 00:00:50.163206 systemd-logind[1810]: Removed session 32.