Apr 21 12:00:28.117460 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 12:00:28.117485 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 12:00:28.117497 kernel: BIOS-provided physical RAM map:
Apr 21 12:00:28.117504 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 12:00:28.117514 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Apr 21 12:00:28.117520 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable
Apr 21 12:00:28.117527 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved
Apr 21 12:00:28.117533 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable
Apr 21 12:00:28.117546 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20
Apr 21 12:00:28.117553 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved
Apr 21 12:00:28.117559 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Apr 21 12:00:28.117569 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Apr 21 12:00:28.117576 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Apr 21 12:00:28.117582 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Apr 21 12:00:28.117595 kernel: printk: bootconsole [earlyser0] enabled
Apr 21 12:00:28.117604 kernel: NX (Execute Disable) protection: active
Apr 21 12:00:28.117610 kernel: APIC: Static calls initialized
Apr 21 12:00:28.117621 kernel: efi: EFI v2.7 by Microsoft
Apr 21 12:00:28.117629 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f41e798
Apr 21 12:00:28.117636 kernel: SMBIOS 3.1.0 present.
Apr 21 12:00:28.117647 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/08/2026
Apr 21 12:00:28.117654 kernel: Hypervisor detected: Microsoft Hyper-V
Apr 21 12:00:28.117663 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Apr 21 12:00:28.117672 kernel: Hyper-V: Host Build 10.0.26102.1277-1-0
Apr 21 12:00:28.117679 kernel: Hyper-V: Nested features: 0x1e0101
Apr 21 12:00:28.117692 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Apr 21 12:00:28.117701 kernel: Hyper-V: Using hypercall for remote TLB flush
Apr 21 12:00:28.117710 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 21 12:00:28.117721 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 21 12:00:28.117742 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Apr 21 12:00:28.117750 kernel: tsc: Detected 2593.906 MHz processor
Apr 21 12:00:28.117758 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 12:00:28.117770 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 12:00:28.117777 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Apr 21 12:00:28.117790 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 21 12:00:28.117798 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 12:00:28.117807 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Apr 21 12:00:28.117816 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Apr 21 12:00:28.117823 kernel: Using GB pages for direct mapping
Apr 21 12:00:28.117834 kernel: Secure boot disabled
Apr 21 12:00:28.117845 kernel: ACPI: Early table checksum verification disabled
Apr 21 12:00:28.117860 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Apr 21 12:00:28.117867 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:00:28.117879 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:00:28.117888 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 21 12:00:28.117897 kernel: ACPI: FACS 0x000000003FFFE000 000040
Apr 21 12:00:28.117907 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:00:28.117914 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:00:28.117928 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:00:28.117936 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:00:28.117947 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:00:28.117955 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:00:28.117965 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Apr 21 12:00:28.117974 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Apr 21 12:00:28.117982 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Apr 21 12:00:28.117994 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Apr 21 12:00:28.118001 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Apr 21 12:00:28.118016 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Apr 21 12:00:28.118024 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Apr 21 12:00:28.118033 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df]
Apr 21 12:00:28.118043 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Apr 21 12:00:28.118050 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 21 12:00:28.118062 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 21 12:00:28.118069 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Apr 21 12:00:28.118079 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Apr 21 12:00:28.118089 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Apr 21 12:00:28.118101 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Apr 21 12:00:28.118111 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Apr 21 12:00:28.118123 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Apr 21 12:00:28.118131 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Apr 21 12:00:28.118142 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Apr 21 12:00:28.118154 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Apr 21 12:00:28.118162 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Apr 21 12:00:28.118169 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Apr 21 12:00:28.118183 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Apr 21 12:00:28.118191 kernel: Zone ranges:
Apr 21 12:00:28.118201 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 12:00:28.118210 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 21 12:00:28.118218 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Apr 21 12:00:28.118229 kernel: Movable zone start for each node
Apr 21 12:00:28.118237 kernel: Early memory node ranges
Apr 21 12:00:28.118247 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 12:00:28.118256 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff]
Apr 21 12:00:28.118268 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff]
Apr 21 12:00:28.118278 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Apr 21 12:00:28.118285 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Apr 21 12:00:28.118297 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Apr 21 12:00:28.118305 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 12:00:28.118315 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 12:00:28.118324 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Apr 21 12:00:28.118336 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Apr 21 12:00:28.118344 kernel: ACPI: PM-Timer IO Port: 0x408
Apr 21 12:00:28.118357 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Apr 21 12:00:28.118369 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Apr 21 12:00:28.118377 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 12:00:28.118389 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 12:00:28.118405 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Apr 21 12:00:28.118415 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 21 12:00:28.118427 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Apr 21 12:00:28.118434 kernel: Booting paravirtualized kernel on Hyper-V
Apr 21 12:00:28.118442 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 12:00:28.118452 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 21 12:00:28.118460 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 21 12:00:28.118467 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 21 12:00:28.118474 kernel: pcpu-alloc: [0] 0 1
Apr 21 12:00:28.118481 kernel: Hyper-V: PV spinlocks enabled
Apr 21 12:00:28.118491 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 12:00:28.118502 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 12:00:28.118510 kernel: random: crng init done
Apr 21 12:00:28.118523 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 21 12:00:28.118532 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 12:00:28.118540 kernel: Fallback order for Node 0: 0
Apr 21 12:00:28.118551 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321
Apr 21 12:00:28.118562 kernel: Policy zone: Normal
Apr 21 12:00:28.118571 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 12:00:28.118583 kernel: software IO TLB: area num 2.
Apr 21 12:00:28.118594 kernel: Memory: 8065972K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 316996K reserved, 0K cma-reserved)
Apr 21 12:00:28.118604 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 12:00:28.118624 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 12:00:28.118634 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 12:00:28.118641 kernel: Dynamic Preempt: voluntary
Apr 21 12:00:28.118655 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 12:00:28.118664 kernel: rcu: RCU event tracing is enabled.
Apr 21 12:00:28.118677 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 12:00:28.118685 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 12:00:28.118697 kernel: Rude variant of Tasks RCU enabled.
Apr 21 12:00:28.118706 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 12:00:28.118719 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 12:00:28.122780 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 12:00:28.122806 kernel: Using NULL legacy PIC
Apr 21 12:00:28.122826 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Apr 21 12:00:28.122846 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 12:00:28.122863 kernel: Console: colour dummy device 80x25
Apr 21 12:00:28.122878 kernel: printk: console [tty1] enabled
Apr 21 12:00:28.122894 kernel: printk: console [ttyS0] enabled
Apr 21 12:00:28.122917 kernel: printk: bootconsole [earlyser0] disabled
Apr 21 12:00:28.122933 kernel: ACPI: Core revision 20230628
Apr 21 12:00:28.122950 kernel: Failed to register legacy timer interrupt
Apr 21 12:00:28.122969 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 12:00:28.122986 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 21 12:00:28.123004 kernel: Hyper-V: Using IPI hypercalls
Apr 21 12:00:28.123020 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Apr 21 12:00:28.123039 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Apr 21 12:00:28.123056 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Apr 21 12:00:28.123076 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Apr 21 12:00:28.123092 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Apr 21 12:00:28.123111 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Apr 21 12:00:28.123127 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Apr 21 12:00:28.123142 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 21 12:00:28.123158 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 21 12:00:28.123175 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 12:00:28.123191 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 12:00:28.123207 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 12:00:28.123226 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 12:00:28.123250 kernel: RETBleed: Vulnerable
Apr 21 12:00:28.123266 kernel: Speculative Store Bypass: Vulnerable
Apr 21 12:00:28.123283 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 12:00:28.123298 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 12:00:28.123314 kernel: active return thunk: its_return_thunk
Apr 21 12:00:28.123331 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 12:00:28.123346 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 12:00:28.123363 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 12:00:28.123382 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 12:00:28.123400 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 12:00:28.123420 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 12:00:28.123437 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 12:00:28.123452 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 12:00:28.123467 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 12:00:28.123479 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 12:00:28.123492 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 12:00:28.123504 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 21 12:00:28.123516 kernel: Freeing SMP alternatives memory: 32K
Apr 21 12:00:28.123528 kernel: pid_max: default: 32768 minimum: 301
Apr 21 12:00:28.123540 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 12:00:28.123550 kernel: landlock: Up and running.
Apr 21 12:00:28.123558 kernel: SELinux: Initializing.
Apr 21 12:00:28.123573 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 21 12:00:28.123584 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 21 12:00:28.123595 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 21 12:00:28.123605 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 12:00:28.123617 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 12:00:28.123628 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 12:00:28.123639 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 21 12:00:28.123647 kernel: signal: max sigframe size: 3632
Apr 21 12:00:28.123655 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 12:00:28.123671 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 12:00:28.123679 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 12:00:28.123687 kernel: smp: Bringing up secondary CPUs ...
Apr 21 12:00:28.123695 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 12:00:28.123705 kernel: .... node #0, CPUs: #1
Apr 21 12:00:28.123716 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Apr 21 12:00:28.123740 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 21 12:00:28.123753 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 12:00:28.123761 kernel: smpboot: Max logical packages: 1
Apr 21 12:00:28.123777 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Apr 21 12:00:28.123785 kernel: devtmpfs: initialized
Apr 21 12:00:28.123796 kernel: x86/mm: Memory block size: 128MB
Apr 21 12:00:28.123806 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Apr 21 12:00:28.123816 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 12:00:28.123827 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 12:00:28.123835 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 12:00:28.123843 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 12:00:28.123851 kernel: audit: initializing netlink subsys (disabled)
Apr 21 12:00:28.123861 kernel: audit: type=2000 audit(1776772827.030:1): state=initialized audit_enabled=0 res=1
Apr 21 12:00:28.123874 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 12:00:28.123882 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 12:00:28.123892 kernel: cpuidle: using governor menu
Apr 21 12:00:28.123902 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 12:00:28.123910 kernel: dca service started, version 1.12.1
Apr 21 12:00:28.123922 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff]
Apr 21 12:00:28.123932 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Apr 21 12:00:28.123940 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 12:00:28.123955 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 12:00:28.123964 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 12:00:28.123974 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 12:00:28.123984 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 12:00:28.123992 kernel: ACPI: Added _OSI(Module Device)
Apr 21 12:00:28.124006 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 12:00:28.124014 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 12:00:28.124025 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 12:00:28.124037 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 12:00:28.124047 kernel: ACPI: Interpreter enabled
Apr 21 12:00:28.124058 kernel: ACPI: PM: (supports S0 S5)
Apr 21 12:00:28.124066 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 12:00:28.124079 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 12:00:28.124087 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 21 12:00:28.124099 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Apr 21 12:00:28.124108 kernel: iommu: Default domain type: Translated
Apr 21 12:00:28.124121 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 12:00:28.124132 kernel: efivars: Registered efivars operations
Apr 21 12:00:28.124144 kernel: PCI: Using ACPI for IRQ routing
Apr 21 12:00:28.124157 kernel: PCI: System does not support PCI
Apr 21 12:00:28.124164 kernel: vgaarb: loaded
Apr 21 12:00:28.124175 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Apr 21 12:00:28.124186 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 12:00:28.124194 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 12:00:28.124207 kernel: pnp: PnP ACPI init
Apr 21 12:00:28.124215 kernel: pnp: PnP ACPI: found 3 devices
Apr 21 12:00:28.124227 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 12:00:28.124238 kernel: NET: Registered PF_INET protocol family
Apr 21 12:00:28.124249 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 21 12:00:28.124259 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 21 12:00:28.124268 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 12:00:28.124280 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 12:00:28.124288 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 21 12:00:28.124300 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 21 12:00:28.124309 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 21 12:00:28.124320 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 21 12:00:28.124332 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 12:00:28.124340 kernel: NET: Registered PF_XDP protocol family
Apr 21 12:00:28.124348 kernel: PCI: CLS 0 bytes, default 64
Apr 21 12:00:28.124356 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 21 12:00:28.124369 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB)
Apr 21 12:00:28.124378 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 12:00:28.124388 kernel: Initialise system trusted keyrings
Apr 21 12:00:28.124398 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 21 12:00:28.124410 kernel: Key type asymmetric registered
Apr 21 12:00:28.124421 kernel: Asymmetric key parser 'x509' registered
Apr 21 12:00:28.124428 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 12:00:28.124441 kernel: io scheduler mq-deadline registered
Apr 21 12:00:28.124449 kernel: io scheduler kyber registered
Apr 21 12:00:28.124462 kernel: io scheduler bfq registered
Apr 21 12:00:28.124470 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 12:00:28.124481 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 12:00:28.124491 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 12:00:28.124501 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 21 12:00:28.124514 kernel: i8042: PNP: No PS/2 controller found.
Apr 21 12:00:28.124683 kernel: rtc_cmos 00:02: registered as rtc0
Apr 21 12:00:28.126854 kernel: rtc_cmos 00:02: setting system clock to 2026-04-21T12:00:27 UTC (1776772827)
Apr 21 12:00:28.126997 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Apr 21 12:00:28.127015 kernel: intel_pstate: CPU model not supported
Apr 21 12:00:28.127030 kernel: efifb: probing for efifb
Apr 21 12:00:28.127045 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 21 12:00:28.127064 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 21 12:00:28.127078 kernel: efifb: scrolling: redraw
Apr 21 12:00:28.127093 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 21 12:00:28.127108 kernel: Console: switching to colour frame buffer device 128x48
Apr 21 12:00:28.127123 kernel: fb0: EFI VGA frame buffer device
Apr 21 12:00:28.127139 kernel: pstore: Using crash dump compression: deflate
Apr 21 12:00:28.127154 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 21 12:00:28.127169 kernel: NET: Registered PF_INET6 protocol family
Apr 21 12:00:28.127185 kernel: Segment Routing with IPv6
Apr 21 12:00:28.127203 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 12:00:28.127218 kernel: NET: Registered PF_PACKET protocol family
Apr 21 12:00:28.127231 kernel: Key type dns_resolver registered
Apr 21 12:00:28.127243 kernel: IPI shorthand broadcast: enabled
Apr 21 12:00:28.127256 kernel: sched_clock: Marking stable (902002800, 54717000)->(1202549200, -245829400)
Apr 21 12:00:28.127270 kernel: registered taskstats version 1
Apr 21 12:00:28.127285 kernel: Loading compiled-in X.509 certificates
Apr 21 12:00:28.127300 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 12:00:28.127315 kernel: Key type .fscrypt registered
Apr 21 12:00:28.127332 kernel: Key type fscrypt-provisioning registered
Apr 21 12:00:28.127347 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 12:00:28.127360 kernel: ima: Allocated hash algorithm: sha1
Apr 21 12:00:28.127372 kernel: ima: No architecture policies found
Apr 21 12:00:28.127386 kernel: clk: Disabling unused clocks
Apr 21 12:00:28.127400 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 21 12:00:28.127413 kernel: Write protecting the kernel read-only data: 36864k
Apr 21 12:00:28.127425 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 21 12:00:28.127440 kernel: Run /init as init process
Apr 21 12:00:28.127457 kernel: with arguments:
Apr 21 12:00:28.127471 kernel: /init
Apr 21 12:00:28.127485 kernel: with environment:
Apr 21 12:00:28.127499 kernel: HOME=/
Apr 21 12:00:28.127513 kernel: TERM=linux
Apr 21 12:00:28.127531 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 12:00:28.127547 systemd[1]: Detected virtualization microsoft.
Apr 21 12:00:28.127562 systemd[1]: Detected architecture x86-64.
Apr 21 12:00:28.127581 systemd[1]: Running in initrd.
Apr 21 12:00:28.127596 systemd[1]: No hostname configured, using default hostname.
Apr 21 12:00:28.127611 systemd[1]: Hostname set to .
Apr 21 12:00:28.127628 systemd[1]: Initializing machine ID from random generator.
Apr 21 12:00:28.127643 systemd[1]: Queued start job for default target initrd.target.
Apr 21 12:00:28.127659 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 12:00:28.127676 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 12:00:28.127693 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 12:00:28.127712 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 12:00:28.127745 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 12:00:28.127758 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 12:00:28.127776 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 12:00:28.127791 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 12:00:28.127807 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 12:00:28.127824 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 12:00:28.127844 systemd[1]: Reached target paths.target - Path Units.
Apr 21 12:00:28.127859 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 12:00:28.127875 systemd[1]: Reached target swap.target - Swaps.
Apr 21 12:00:28.127891 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 12:00:28.127907 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 12:00:28.127922 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 12:00:28.127938 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 12:00:28.127955 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 12:00:28.127971 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 12:00:28.127990 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 12:00:28.128006 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 12:00:28.128021 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 12:00:28.128038 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 12:00:28.128054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 12:00:28.128070 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 12:00:28.128086 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 12:00:28.128101 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 12:00:28.128121 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 12:00:28.128137 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 12:00:28.128181 systemd-journald[177]: Collecting audit messages is disabled.
Apr 21 12:00:28.128217 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 12:00:28.128237 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 12:00:28.128254 systemd-journald[177]: Journal started
Apr 21 12:00:28.128286 systemd-journald[177]: Runtime Journal (/run/log/journal/d891cf2bc5d9488f9eac4c8571461f55) is 8.0M, max 158.7M, 150.7M free.
Apr 21 12:00:28.130919 systemd-modules-load[178]: Inserted module 'overlay'
Apr 21 12:00:28.139899 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 12:00:28.144548 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 12:00:28.162271 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 12:00:28.171914 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 12:00:28.178851 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 12:00:28.191746 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 12:00:28.192013 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 12:00:28.202924 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 12:00:28.206675 kernel: Bridge firewalling registered
Apr 21 12:00:28.212798 systemd-modules-load[178]: Inserted module 'br_netfilter'
Apr 21 12:00:28.215976 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 12:00:28.222976 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 12:00:28.237946 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 12:00:28.253919 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 12:00:28.260115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 12:00:28.269942 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 12:00:28.276041 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 12:00:28.288272 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 12:00:28.295919 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 12:00:28.307247 dracut-cmdline[212]: dracut-dracut-053 Apr 21 12:00:28.312756 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 12:00:28.356283 systemd-resolved[218]: Positive Trust Anchors: Apr 21 12:00:28.357766 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 12:00:28.357826 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 12:00:28.384211 systemd-resolved[218]: Defaulting to hostname 'linux'. Apr 21 12:00:28.387978 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 12:00:28.394071 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 12:00:28.404746 kernel: SCSI subsystem initialized Apr 21 12:00:28.414746 kernel: Loading iSCSI transport class v2.0-870. 
Apr 21 12:00:28.426746 kernel: iscsi: registered transport (tcp) Apr 21 12:00:28.447594 kernel: iscsi: registered transport (qla4xxx) Apr 21 12:00:28.447664 kernel: QLogic iSCSI HBA Driver Apr 21 12:00:28.483459 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 21 12:00:28.496911 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 12:00:28.526873 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 21 12:00:28.526978 kernel: device-mapper: uevent: version 1.0.3 Apr 21 12:00:28.530413 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 12:00:28.570748 kernel: raid6: avx512x4 gen() 18372 MB/s Apr 21 12:00:28.590742 kernel: raid6: avx512x2 gen() 18249 MB/s Apr 21 12:00:28.609735 kernel: raid6: avx512x1 gen() 18527 MB/s Apr 21 12:00:28.628737 kernel: raid6: avx2x4 gen() 18524 MB/s Apr 21 12:00:28.648740 kernel: raid6: avx2x2 gen() 18377 MB/s Apr 21 12:00:28.668790 kernel: raid6: avx2x1 gen() 14040 MB/s Apr 21 12:00:28.668826 kernel: raid6: using algorithm avx512x1 gen() 18527 MB/s Apr 21 12:00:28.691165 kernel: raid6: .... xor() 26681 MB/s, rmw enabled Apr 21 12:00:28.691229 kernel: raid6: using avx512x2 recovery algorithm Apr 21 12:00:28.714760 kernel: xor: automatically using best checksumming function avx Apr 21 12:00:28.861754 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 12:00:28.871611 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 12:00:28.882923 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 12:00:28.897893 systemd-udevd[397]: Using default interface naming scheme 'v255'. Apr 21 12:00:28.902516 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 12:00:28.918900 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 21 12:00:28.933810 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Apr 21 12:00:28.962001 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 12:00:28.971869 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 12:00:29.016433 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 12:00:29.028962 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 21 12:00:29.051231 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 12:00:29.056159 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 12:00:29.060092 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 12:00:29.074119 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 12:00:29.087898 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 12:00:29.109403 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 12:00:29.129763 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 12:00:29.143358 kernel: hv_vmbus: Vmbus version:5.2 Apr 21 12:00:29.147432 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 12:00:29.147604 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 12:00:29.159972 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 12:00:29.163602 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 12:00:29.163835 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 12:00:29.177767 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 12:00:29.186773 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 21 12:00:29.197805 kernel: pps_core: LinuxPPS API ver. 
1 registered Apr 21 12:00:29.197849 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Apr 21 12:00:29.197869 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 21 12:00:29.202009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 12:00:29.214754 kernel: PTP clock support registered Apr 21 12:00:29.225814 kernel: hv_vmbus: registering driver hv_storvsc Apr 21 12:00:29.225855 kernel: hv_utils: Registering HyperV Utility Driver Apr 21 12:00:29.229673 kernel: hv_vmbus: registering driver hv_utils Apr 21 12:00:29.233744 kernel: hv_utils: Heartbeat IC version 3.0 Apr 21 12:00:29.236747 kernel: hv_utils: Shutdown IC version 3.2 Apr 21 12:00:30.329849 kernel: hv_utils: TimeSync IC version 4.0 Apr 21 12:00:30.329880 kernel: hv_vmbus: registering driver hv_netvsc Apr 21 12:00:30.329800 systemd-resolved[218]: Clock change detected. Flushing caches. Apr 21 12:00:30.341125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 12:00:30.349005 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 12:00:30.359866 kernel: scsi host1: storvsc_host_t Apr 21 12:00:30.362859 kernel: scsi host0: storvsc_host_t Apr 21 12:00:30.366854 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 21 12:00:30.366903 kernel: AVX2 version of gcm_enc/dec engaged. 
Apr 21 12:00:30.373438 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Apr 21 12:00:30.373500 kernel: AES CTR mode by8 optimization enabled Apr 21 12:00:30.384847 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 21 12:00:30.396852 kernel: hv_vmbus: registering driver hid_hyperv Apr 21 12:00:30.396898 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Apr 21 12:00:30.407869 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 21 12:00:30.408535 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 12:00:30.432247 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 21 12:00:30.432526 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 12:00:30.433850 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 21 12:00:30.453106 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#268 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 21 12:00:30.459694 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 21 12:00:30.460032 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 21 12:00:30.460252 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 21 12:00:30.462886 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 21 12:00:30.466936 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 21 12:00:30.474853 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 12:00:30.477856 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 21 12:00:30.484212 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#116 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 21 12:00:30.543169 kernel: hv_netvsc 000d3ade-2385-000d-3ade-2385000d3ade eth0: VF slot 1 added Apr 21 12:00:30.551845 kernel: hv_vmbus: registering driver hv_pci Apr 21 12:00:30.555841 kernel: hv_pci a689cf78-51b2-406f-ba79-8b2f40bee106: PCI VMBus probing: Using 
version 0x10004 Apr 21 12:00:30.563127 kernel: hv_pci a689cf78-51b2-406f-ba79-8b2f40bee106: PCI host bridge to bus 51b2:00 Apr 21 12:00:30.563413 kernel: pci_bus 51b2:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Apr 21 12:00:30.566609 kernel: pci_bus 51b2:00: No busn resource found for root bus, will use [bus 00-ff] Apr 21 12:00:30.572212 kernel: pci 51b2:00:02.0: [15b3:1016] type 00 class 0x020000 Apr 21 12:00:30.575901 kernel: pci 51b2:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 21 12:00:30.579860 kernel: pci 51b2:00:02.0: enabling Extended Tags Apr 21 12:00:30.591936 kernel: pci 51b2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 51b2:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Apr 21 12:00:30.592182 kernel: pci_bus 51b2:00: busn_res: [bus 00-ff] end is updated to 00 Apr 21 12:00:30.597489 kernel: pci 51b2:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 21 12:00:30.767246 kernel: mlx5_core 51b2:00:02.0: enabling device (0000 -> 0002) Apr 21 12:00:30.771857 kernel: mlx5_core 51b2:00:02.0: firmware version: 14.30.5026 Apr 21 12:00:30.957372 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Apr 21 12:00:30.995402 kernel: hv_netvsc 000d3ade-2385-000d-3ade-2385000d3ade eth0: VF registering: eth1 Apr 21 12:00:30.995722 kernel: mlx5_core 51b2:00:02.0 eth1: joined to eth0 Apr 21 12:00:31.001922 kernel: mlx5_core 51b2:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 21 12:00:31.011850 kernel: mlx5_core 51b2:00:02.0 enP20914s1: renamed from eth1 Apr 21 12:00:31.017852 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (444) Apr 21 12:00:31.036141 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 21 12:00:31.094886 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Apr 21 12:00:31.150857 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (443) Apr 21 12:00:31.165012 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 21 12:00:31.169084 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 21 12:00:31.191037 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 12:00:31.209842 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 12:00:31.217954 kernel: GPT:disk_guids don't match. Apr 21 12:00:31.218018 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 12:00:31.220492 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 12:00:31.232847 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 12:00:32.233688 disk-uuid[606]: The operation has completed successfully. Apr 21 12:00:32.243659 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 12:00:32.329796 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 12:00:32.329960 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 12:00:32.367021 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 12:00:32.378402 sh[719]: Success Apr 21 12:00:32.412851 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 21 12:00:32.722879 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 12:00:32.739948 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 12:00:32.746225 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 21 12:00:32.768848 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539 Apr 21 12:00:32.768902 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 12:00:32.774875 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 12:00:32.778034 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 12:00:32.780691 kernel: BTRFS info (device dm-0): using free space tree Apr 21 12:00:33.211768 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 12:00:33.218066 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 12:00:33.227997 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 12:00:33.235069 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 12:00:33.254762 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 12:00:33.254839 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 12:00:33.257697 kernel: BTRFS info (device sda6): using free space tree Apr 21 12:00:33.295853 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 12:00:33.308211 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 12:00:33.315082 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 12:00:33.325174 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 12:00:33.339173 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 21 12:00:33.349581 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 12:00:33.359290 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 21 12:00:33.383749 systemd-networkd[903]: lo: Link UP Apr 21 12:00:33.383762 systemd-networkd[903]: lo: Gained carrier Apr 21 12:00:33.386029 systemd-networkd[903]: Enumeration completed Apr 21 12:00:33.386303 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 12:00:33.388437 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 12:00:33.388442 systemd-networkd[903]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 12:00:33.391087 systemd[1]: Reached target network.target - Network. Apr 21 12:00:33.455852 kernel: mlx5_core 51b2:00:02.0 enP20914s1: Link up Apr 21 12:00:33.499035 kernel: hv_netvsc 000d3ade-2385-000d-3ade-2385000d3ade eth0: Data path switched to VF: enP20914s1 Apr 21 12:00:33.499385 systemd-networkd[903]: enP20914s1: Link UP Apr 21 12:00:33.499504 systemd-networkd[903]: eth0: Link UP Apr 21 12:00:33.499685 systemd-networkd[903]: eth0: Gained carrier Apr 21 12:00:33.499698 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 21 12:00:33.510998 systemd-networkd[903]: enP20914s1: Gained carrier Apr 21 12:00:33.545875 systemd-networkd[903]: eth0: DHCPv4 address 10.0.0.17/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 21 12:00:34.325801 ignition[890]: Ignition 2.19.0 Apr 21 12:00:34.325817 ignition[890]: Stage: fetch-offline Apr 21 12:00:34.325873 ignition[890]: no configs at "/usr/lib/ignition/base.d" Apr 21 12:00:34.325883 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 12:00:34.325988 ignition[890]: parsed url from cmdline: "" Apr 21 12:00:34.325993 ignition[890]: no config URL provided Apr 21 12:00:34.325999 ignition[890]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 12:00:34.326009 ignition[890]: no config at "/usr/lib/ignition/user.ign" Apr 21 12:00:34.326017 ignition[890]: failed to fetch config: resource requires networking Apr 21 12:00:34.326230 ignition[890]: Ignition finished successfully Apr 21 12:00:34.350557 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 12:00:34.364987 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 21 12:00:34.387413 ignition[911]: Ignition 2.19.0 Apr 21 12:00:34.387426 ignition[911]: Stage: fetch Apr 21 12:00:34.387651 ignition[911]: no configs at "/usr/lib/ignition/base.d" Apr 21 12:00:34.387665 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 12:00:34.388117 ignition[911]: parsed url from cmdline: "" Apr 21 12:00:34.388121 ignition[911]: no config URL provided Apr 21 12:00:34.388126 ignition[911]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 12:00:34.388134 ignition[911]: no config at "/usr/lib/ignition/user.ign" Apr 21 12:00:34.388494 ignition[911]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 21 12:00:34.509271 ignition[911]: GET result: OK Apr 21 12:00:34.509385 ignition[911]: config has been read from IMDS userdata Apr 21 12:00:34.509416 ignition[911]: parsing config with SHA512: e8da097681918847ea9d3de87fdbba1f3ac2d449b47c8aff4a8abb8c07a2f7368f24c4e88ba2b0addb9a5615f35070d70f72e57b7ed95acb4f42e9229af73b70 Apr 21 12:00:34.513362 unknown[911]: fetched base config from "system" Apr 21 12:00:34.513758 ignition[911]: fetch: fetch complete Apr 21 12:00:34.513369 unknown[911]: fetched base config from "system" Apr 21 12:00:34.513762 ignition[911]: fetch: fetch passed Apr 21 12:00:34.513374 unknown[911]: fetched user config from "azure" Apr 21 12:00:34.513813 ignition[911]: Ignition finished successfully Apr 21 12:00:34.530511 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 21 12:00:34.540020 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 21 12:00:34.555951 systemd-networkd[903]: eth0: Gained IPv6LL Apr 21 12:00:34.557866 ignition[917]: Ignition 2.19.0 Apr 21 12:00:34.557872 ignition[917]: Stage: kargs Apr 21 12:00:34.558043 ignition[917]: no configs at "/usr/lib/ignition/base.d" Apr 21 12:00:34.558052 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 12:00:34.566295 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 21 12:00:34.559360 ignition[917]: kargs: kargs passed Apr 21 12:00:34.559409 ignition[917]: Ignition finished successfully Apr 21 12:00:34.589039 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 21 12:00:34.605013 ignition[923]: Ignition 2.19.0 Apr 21 12:00:34.605025 ignition[923]: Stage: disks Apr 21 12:00:34.607051 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 21 12:00:34.605256 ignition[923]: no configs at "/usr/lib/ignition/base.d" Apr 21 12:00:34.611012 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 21 12:00:34.605271 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 12:00:34.616409 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 12:00:34.606137 ignition[923]: disks: disks passed Apr 21 12:00:34.626413 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 12:00:34.606192 ignition[923]: Ignition finished successfully Apr 21 12:00:34.636553 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 12:00:34.653595 systemd[1]: Reached target basic.target - Basic System. Apr 21 12:00:34.664975 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 21 12:00:34.739209 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 21 12:00:34.747408 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Apr 21 12:00:34.762999 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 21 12:00:34.858844 kernel: EXT4-fs (sda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none. Apr 21 12:00:34.859387 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 21 12:00:34.862810 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 21 12:00:34.901993 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 12:00:34.920849 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942) Apr 21 12:00:34.930882 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 12:00:34.930958 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 12:00:34.930986 kernel: BTRFS info (device sda6): using free space tree Apr 21 12:00:34.932989 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 21 12:00:34.945031 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 12:00:34.943027 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 21 12:00:34.951924 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 21 12:00:34.951962 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 12:00:34.957124 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 21 12:00:34.964176 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 21 12:00:34.976035 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 21 12:00:35.682717 coreos-metadata[959]: Apr 21 12:00:35.682 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 21 12:00:35.688774 coreos-metadata[959]: Apr 21 12:00:35.688 INFO Fetch successful Apr 21 12:00:35.692179 coreos-metadata[959]: Apr 21 12:00:35.691 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 21 12:00:35.709298 coreos-metadata[959]: Apr 21 12:00:35.709 INFO Fetch successful Apr 21 12:00:35.730180 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory Apr 21 12:00:35.734309 coreos-metadata[959]: Apr 21 12:00:35.729 INFO wrote hostname ci-4081.3.7-a-a89817d5a7 to /sysroot/etc/hostname Apr 21 12:00:35.739602 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 21 12:00:35.749077 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory Apr 21 12:00:35.756791 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory Apr 21 12:00:35.762731 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory Apr 21 12:00:36.784867 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 21 12:00:36.795983 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 21 12:00:36.804985 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 21 12:00:36.811934 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 12:00:36.815043 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 21 12:00:36.841819 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 21 12:00:36.851269 ignition[1064]: INFO : Ignition 2.19.0 Apr 21 12:00:36.851269 ignition[1064]: INFO : Stage: mount Apr 21 12:00:36.856104 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 12:00:36.856104 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 12:00:36.856104 ignition[1064]: INFO : mount: mount passed Apr 21 12:00:36.856104 ignition[1064]: INFO : Ignition finished successfully Apr 21 12:00:36.859158 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 21 12:00:36.877038 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 21 12:00:36.886083 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 12:00:36.908942 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1077) Apr 21 12:00:36.908998 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 12:00:36.913842 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 12:00:36.918519 kernel: BTRFS info (device sda6): using free space tree Apr 21 12:00:36.925848 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 12:00:36.928049 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 21 12:00:36.958980 ignition[1094]: INFO : Ignition 2.19.0 Apr 21 12:00:36.958980 ignition[1094]: INFO : Stage: files Apr 21 12:00:36.964277 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 12:00:36.964277 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 12:00:36.964277 ignition[1094]: DEBUG : files: compiled without relabeling support, skipping Apr 21 12:00:36.964277 ignition[1094]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 21 12:00:36.964277 ignition[1094]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 21 12:00:37.057203 ignition[1094]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 21 12:00:37.061580 ignition[1094]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 21 12:00:37.061580 ignition[1094]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 21 12:00:37.057860 unknown[1094]: wrote ssh authorized keys file for user: core Apr 21 12:00:37.095320 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 21 12:00:37.101334 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 21 12:00:37.200770 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 21 12:00:37.397571 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 21 12:00:37.397571 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Apr 21 12:00:37.999987 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 21 12:00:39.000627 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 21 12:00:39.000627 ignition[1094]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 21 12:00:39.022149 ignition[1094]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 12:00:39.031042 ignition[1094]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 12:00:39.031042 ignition[1094]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 21 12:00:39.031042 ignition[1094]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 21 12:00:39.031042 ignition[1094]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 21 12:00:39.031042 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 21 12:00:39.031042 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 21 12:00:39.031042 ignition[1094]: INFO : files: files passed Apr 21 12:00:39.031042 ignition[1094]: INFO : Ignition finished successfully Apr 21 12:00:39.027100 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 21 12:00:39.056214 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 21 12:00:39.074243 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Apr 21 12:00:39.086208 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 21 12:00:39.086337 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 21 12:00:39.100151 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 12:00:39.100151 initrd-setup-root-after-ignition[1122]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 21 12:00:39.113935 initrd-setup-root-after-ignition[1126]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 12:00:39.103878 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 12:00:39.109324 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 21 12:00:39.134009 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 21 12:00:39.159145 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 21 12:00:39.159280 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 21 12:00:39.170049 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 21 12:00:39.173210 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 21 12:00:39.179390 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 21 12:00:39.192062 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 21 12:00:39.206814 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 12:00:39.217999 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 21 12:00:39.234805 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 21 12:00:39.235048 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 21 12:00:39.235586 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 12:00:39.236657 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 12:00:39.236802 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 12:00:39.237713 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 12:00:39.238256 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 12:00:39.238748 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 12:00:39.239341 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 12:00:39.239864 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 12:00:39.240380 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 12:00:39.240894 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 12:00:39.241441 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 12:00:39.241952 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 12:00:39.242417 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 12:00:39.242912 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 12:00:39.243046 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 12:00:39.244249 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 12:00:39.244741 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 12:00:39.245695 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 12:00:39.290723 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 12:00:39.294378 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 12:00:39.294556 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 12:00:39.300717 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 12:00:39.300917 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 12:00:39.366744 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 12:00:39.369418 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 12:00:39.374635 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 21 12:00:39.377449 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 12:00:39.394057 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 12:00:39.399017 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 12:00:39.399214 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 12:00:39.411640 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 12:00:39.414578 ignition[1146]: INFO : Ignition 2.19.0
Apr 21 12:00:39.414578 ignition[1146]: INFO : Stage: umount
Apr 21 12:00:39.414578 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 12:00:39.414578 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 12:00:39.428957 ignition[1146]: INFO : umount: umount passed
Apr 21 12:00:39.428957 ignition[1146]: INFO : Ignition finished successfully
Apr 21 12:00:39.419295 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 12:00:39.419516 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 12:00:39.431499 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 12:00:39.431663 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 12:00:39.459227 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 12:00:39.459359 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 12:00:39.468130 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 12:00:39.471069 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 12:00:39.476187 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 12:00:39.476258 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 12:00:39.485571 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 12:00:39.485644 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 12:00:39.491216 systemd[1]: Stopped target network.target - Network.
Apr 21 12:00:39.493518 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 12:00:39.493589 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 12:00:39.506970 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 12:00:39.511655 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 12:00:39.511742 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 12:00:39.518515 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 12:00:39.521097 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 12:00:39.530366 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 12:00:39.530425 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 12:00:39.531031 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 12:00:39.531073 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 12:00:39.531456 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 12:00:39.531505 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 12:00:39.531934 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 12:00:39.531975 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 12:00:39.532578 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 12:00:39.532931 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 12:00:39.533596 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 12:00:39.533808 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 12:00:39.544172 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 12:00:39.559217 systemd-networkd[903]: eth0: DHCPv6 lease lost
Apr 21 12:00:39.559220 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 12:00:39.559389 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 12:00:39.565102 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 12:00:39.565245 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 12:00:39.571713 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 12:00:39.571772 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 12:00:39.589000 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 12:00:39.591322 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 12:00:39.591391 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 12:00:39.665917 kernel: hv_netvsc 000d3ade-2385-000d-3ade-2385000d3ade eth0: Data path switched from VF: enP20914s1
Apr 21 12:00:39.591519 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 12:00:39.591560 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 12:00:39.592393 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 12:00:39.592428 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 12:00:39.592842 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 12:00:39.592877 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 12:00:39.593760 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 12:00:39.626578 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 12:00:39.626748 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 12:00:39.635616 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 12:00:39.635672 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 12:00:39.641066 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 12:00:39.641105 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 12:00:39.644325 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 12:00:39.644383 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 12:00:39.651803 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 12:00:39.651873 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 12:00:39.661883 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 12:00:39.661937 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 12:00:39.684015 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 12:00:39.693140 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 12:00:39.693212 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 12:00:39.699337 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 12:00:39.699397 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 12:00:39.749663 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 12:00:39.749792 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 12:00:39.758911 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 12:00:39.761993 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 12:00:42.058302 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 12:00:42.058444 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 12:00:42.062391 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 12:00:42.067495 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 12:00:42.067579 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 12:00:42.085151 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 12:00:44.954733 systemd[1]: Switching root.
Apr 21 12:00:45.035021 systemd-journald[177]: Journal stopped
Apr 21 12:00:28.117834 kernel: Secure boot disabled
Apr 21 12:00:28.117845 kernel: ACPI: Early table checksum verification disabled
Apr 21 12:00:28.117860 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Apr 21 12:00:28.117867 kernel: ACPI: XSDT
0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:00:28.117879 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:00:28.117888 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 21 12:00:28.117897 kernel: ACPI: FACS 0x000000003FFFE000 000040
Apr 21 12:00:28.117907 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:00:28.117914 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:00:28.117928 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:00:28.117936 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:00:28.117947 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:00:28.117955 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 21 12:00:28.117965 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Apr 21 12:00:28.117974 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Apr 21 12:00:28.117982 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Apr 21 12:00:28.117994 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Apr 21 12:00:28.118001 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Apr 21 12:00:28.118016 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Apr 21 12:00:28.118024 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Apr 21 12:00:28.118033 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df]
Apr 21 12:00:28.118043 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Apr 21 12:00:28.118050 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 21 12:00:28.118062
kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 21 12:00:28.118069 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Apr 21 12:00:28.118079 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Apr 21 12:00:28.118089 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Apr 21 12:00:28.118101 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Apr 21 12:00:28.118111 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Apr 21 12:00:28.118123 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Apr 21 12:00:28.118131 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Apr 21 12:00:28.118142 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Apr 21 12:00:28.118154 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Apr 21 12:00:28.118162 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Apr 21 12:00:28.118169 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Apr 21 12:00:28.118183 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Apr 21 12:00:28.118191 kernel: Zone ranges:
Apr 21 12:00:28.118201 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 12:00:28.118210 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 21 12:00:28.118218 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Apr 21 12:00:28.118229 kernel: Movable zone start for each node
Apr 21 12:00:28.118237 kernel: Early memory node ranges
Apr 21 12:00:28.118247 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 12:00:28.118256 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff]
Apr 21 12:00:28.118268 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff]
Apr 21 12:00:28.118278 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Apr 21 12:00:28.118285 kernel:
node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Apr 21 12:00:28.118297 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Apr 21 12:00:28.118305 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 12:00:28.118315 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 12:00:28.118324 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Apr 21 12:00:28.118336 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Apr 21 12:00:28.118344 kernel: ACPI: PM-Timer IO Port: 0x408
Apr 21 12:00:28.118357 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Apr 21 12:00:28.118369 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Apr 21 12:00:28.118377 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 12:00:28.118389 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 12:00:28.118405 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Apr 21 12:00:28.118415 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 21 12:00:28.118427 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Apr 21 12:00:28.118434 kernel: Booting paravirtualized kernel on Hyper-V
Apr 21 12:00:28.118442 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 12:00:28.118452 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 21 12:00:28.118460 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 21 12:00:28.118467 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 21 12:00:28.118474 kernel: pcpu-alloc: [0] 0 1
Apr 21 12:00:28.118481 kernel: Hyper-V: PV spinlocks enabled
Apr 21 12:00:28.118491 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 12:00:28.118502 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 12:00:28.118510 kernel: random: crng init done
Apr 21 12:00:28.118523 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 21 12:00:28.118532 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 12:00:28.118540 kernel: Fallback order for Node 0: 0
Apr 21 12:00:28.118551 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321
Apr 21 12:00:28.118562 kernel: Policy zone: Normal
Apr 21 12:00:28.118571 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 12:00:28.118583 kernel: software IO TLB: area num 2.
Apr 21 12:00:28.118594 kernel: Memory: 8065972K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 316996K reserved, 0K cma-reserved)
Apr 21 12:00:28.118604 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 12:00:28.118624 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 12:00:28.118634 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 12:00:28.118641 kernel: Dynamic Preempt: voluntary
Apr 21 12:00:28.118655 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 12:00:28.118664 kernel: rcu: RCU event tracing is enabled.
Apr 21 12:00:28.118677 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 12:00:28.118685 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 12:00:28.118697 kernel: Rude variant of Tasks RCU enabled.
Apr 21 12:00:28.118706 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 12:00:28.118719 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 12:00:28.122780 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 12:00:28.122806 kernel: Using NULL legacy PIC
Apr 21 12:00:28.122826 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Apr 21 12:00:28.122846 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 12:00:28.122863 kernel: Console: colour dummy device 80x25
Apr 21 12:00:28.122878 kernel: printk: console [tty1] enabled
Apr 21 12:00:28.122894 kernel: printk: console [ttyS0] enabled
Apr 21 12:00:28.122917 kernel: printk: bootconsole [earlyser0] disabled
Apr 21 12:00:28.122933 kernel: ACPI: Core revision 20230628
Apr 21 12:00:28.122950 kernel: Failed to register legacy timer interrupt
Apr 21 12:00:28.122969 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 12:00:28.122986 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 21 12:00:28.123004 kernel: Hyper-V: Using IPI hypercalls
Apr 21 12:00:28.123020 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Apr 21 12:00:28.123039 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Apr 21 12:00:28.123056 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Apr 21 12:00:28.123076 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Apr 21 12:00:28.123092 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Apr 21 12:00:28.123111 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Apr 21 12:00:28.123127 kernel: Calibrating delay loop (skipped), value calculated using timer frequency..
5187.81 BogoMIPS (lpj=2593906)
Apr 21 12:00:28.123142 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 21 12:00:28.123158 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 21 12:00:28.123175 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 12:00:28.123191 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 12:00:28.123207 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 12:00:28.123226 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 12:00:28.123250 kernel: RETBleed: Vulnerable
Apr 21 12:00:28.123266 kernel: Speculative Store Bypass: Vulnerable
Apr 21 12:00:28.123283 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 12:00:28.123298 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 12:00:28.123314 kernel: active return thunk: its_return_thunk
Apr 21 12:00:28.123331 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 12:00:28.123346 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 12:00:28.123363 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 12:00:28.123382 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 12:00:28.123400 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 12:00:28.123420 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 12:00:28.123437 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 12:00:28.123452 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 12:00:28.123467 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 12:00:28.123479 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 12:00:28.123492 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 12:00:28.123504 kernel:
x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 21 12:00:28.123516 kernel: Freeing SMP alternatives memory: 32K
Apr 21 12:00:28.123528 kernel: pid_max: default: 32768 minimum: 301
Apr 21 12:00:28.123540 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 12:00:28.123550 kernel: landlock: Up and running.
Apr 21 12:00:28.123558 kernel: SELinux: Initializing.
Apr 21 12:00:28.123573 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 21 12:00:28.123584 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 21 12:00:28.123595 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 21 12:00:28.123605 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 12:00:28.123617 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 12:00:28.123628 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 12:00:28.123639 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 21 12:00:28.123647 kernel: signal: max sigframe size: 3632
Apr 21 12:00:28.123655 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 12:00:28.123671 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 12:00:28.123679 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 12:00:28.123687 kernel: smp: Bringing up secondary CPUs ...
Apr 21 12:00:28.123695 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 12:00:28.123705 kernel: .... node #0, CPUs: #1
Apr 21 12:00:28.123716 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Apr 21 12:00:28.123740 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 21 12:00:28.123753 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 12:00:28.123761 kernel: smpboot: Max logical packages: 1
Apr 21 12:00:28.123777 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Apr 21 12:00:28.123785 kernel: devtmpfs: initialized
Apr 21 12:00:28.123796 kernel: x86/mm: Memory block size: 128MB
Apr 21 12:00:28.123806 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Apr 21 12:00:28.123816 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 12:00:28.123827 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 12:00:28.123835 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 12:00:28.123843 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 12:00:28.123851 kernel: audit: initializing netlink subsys (disabled)
Apr 21 12:00:28.123861 kernel: audit: type=2000 audit(1776772827.030:1): state=initialized audit_enabled=0 res=1
Apr 21 12:00:28.123874 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 12:00:28.123882 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 12:00:28.123892 kernel: cpuidle: using governor menu
Apr 21 12:00:28.123902 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 12:00:28.123910 kernel: dca service started, version 1.12.1
Apr 21 12:00:28.123922 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff]
Apr 21 12:00:28.123932 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Apr 21 12:00:28.123940 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 12:00:28.123955 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 12:00:28.123964 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 12:00:28.123974 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 12:00:28.123984 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 12:00:28.123992 kernel: ACPI: Added _OSI(Module Device)
Apr 21 12:00:28.124006 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 12:00:28.124014 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 12:00:28.124025 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 12:00:28.124037 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 12:00:28.124047 kernel: ACPI: Interpreter enabled
Apr 21 12:00:28.124058 kernel: ACPI: PM: (supports S0 S5)
Apr 21 12:00:28.124066 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 12:00:28.124079 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 12:00:28.124087 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 21 12:00:28.124099 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Apr 21 12:00:28.124108 kernel: iommu: Default domain type: Translated
Apr 21 12:00:28.124121 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 12:00:28.124132 kernel: efivars: Registered efivars operations
Apr 21 12:00:28.124144 kernel: PCI: Using ACPI for IRQ routing
Apr 21 12:00:28.124157 kernel: PCI: System does not support PCI
Apr 21 12:00:28.124164 kernel: vgaarb: loaded
Apr 21 12:00:28.124175 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Apr 21 12:00:28.124186 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 12:00:28.124194 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 12:00:28.124207 kernel: pnp: PnP ACPI init
Apr 21 12:00:28.124215 kernel: pnp: PnP ACPI: found 3 devices
Apr 21 12:00:28.124227 kernel:
clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 21 12:00:28.124238 kernel: NET: Registered PF_INET protocol family Apr 21 12:00:28.124249 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 21 12:00:28.124259 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Apr 21 12:00:28.124268 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 21 12:00:28.124280 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 21 12:00:28.124288 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 21 12:00:28.124300 kernel: TCP: Hash tables configured (established 65536 bind 65536) Apr 21 12:00:28.124309 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 21 12:00:28.124320 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 21 12:00:28.124332 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 21 12:00:28.124340 kernel: NET: Registered PF_XDP protocol family Apr 21 12:00:28.124348 kernel: PCI: CLS 0 bytes, default 64 Apr 21 12:00:28.124356 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 21 12:00:28.124369 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB) Apr 21 12:00:28.124378 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 21 12:00:28.124388 kernel: Initialise system trusted keyrings Apr 21 12:00:28.124398 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Apr 21 12:00:28.124410 kernel: Key type asymmetric registered Apr 21 12:00:28.124421 kernel: Asymmetric key parser 'x509' registered Apr 21 12:00:28.124428 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 21 12:00:28.124441 kernel: io scheduler mq-deadline registered Apr 21 12:00:28.124449 kernel: io scheduler kyber registered Apr 
21 12:00:28.124462 kernel: io scheduler bfq registered Apr 21 12:00:28.124470 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 21 12:00:28.124481 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 21 12:00:28.124491 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 21 12:00:28.124501 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 21 12:00:28.124514 kernel: i8042: PNP: No PS/2 controller found. Apr 21 12:00:28.124683 kernel: rtc_cmos 00:02: registered as rtc0 Apr 21 12:00:28.126854 kernel: rtc_cmos 00:02: setting system clock to 2026-04-21T12:00:27 UTC (1776772827) Apr 21 12:00:28.126997 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Apr 21 12:00:28.127015 kernel: intel_pstate: CPU model not supported Apr 21 12:00:28.127030 kernel: efifb: probing for efifb Apr 21 12:00:28.127045 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Apr 21 12:00:28.127064 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Apr 21 12:00:28.127078 kernel: efifb: scrolling: redraw Apr 21 12:00:28.127093 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 21 12:00:28.127108 kernel: Console: switching to colour frame buffer device 128x48 Apr 21 12:00:28.127123 kernel: fb0: EFI VGA frame buffer device Apr 21 12:00:28.127139 kernel: pstore: Using crash dump compression: deflate Apr 21 12:00:28.127154 kernel: pstore: Registered efi_pstore as persistent store backend Apr 21 12:00:28.127169 kernel: NET: Registered PF_INET6 protocol family Apr 21 12:00:28.127185 kernel: Segment Routing with IPv6 Apr 21 12:00:28.127203 kernel: In-situ OAM (IOAM) with IPv6 Apr 21 12:00:28.127218 kernel: NET: Registered PF_PACKET protocol family Apr 21 12:00:28.127231 kernel: Key type dns_resolver registered Apr 21 12:00:28.127243 kernel: IPI shorthand broadcast: enabled Apr 21 12:00:28.127256 kernel: sched_clock: Marking stable (902002800, 54717000)->(1202549200, -245829400) 
Apr 21 12:00:28.127270 kernel: registered taskstats version 1 Apr 21 12:00:28.127285 kernel: Loading compiled-in X.509 certificates Apr 21 12:00:28.127300 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b' Apr 21 12:00:28.127315 kernel: Key type .fscrypt registered Apr 21 12:00:28.127332 kernel: Key type fscrypt-provisioning registered Apr 21 12:00:28.127347 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 21 12:00:28.127360 kernel: ima: Allocated hash algorithm: sha1 Apr 21 12:00:28.127372 kernel: ima: No architecture policies found Apr 21 12:00:28.127386 kernel: clk: Disabling unused clocks Apr 21 12:00:28.127400 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 21 12:00:28.127413 kernel: Write protecting the kernel read-only data: 36864k Apr 21 12:00:28.127425 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 21 12:00:28.127440 kernel: Run /init as init process Apr 21 12:00:28.127457 kernel: with arguments: Apr 21 12:00:28.127471 kernel: /init Apr 21 12:00:28.127485 kernel: with environment: Apr 21 12:00:28.127499 kernel: HOME=/ Apr 21 12:00:28.127513 kernel: TERM=linux Apr 21 12:00:28.127531 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 12:00:28.127547 systemd[1]: Detected virtualization microsoft. Apr 21 12:00:28.127562 systemd[1]: Detected architecture x86-64. Apr 21 12:00:28.127581 systemd[1]: Running in initrd. Apr 21 12:00:28.127596 systemd[1]: No hostname configured, using default hostname. Apr 21 12:00:28.127611 systemd[1]: Hostname set to . Apr 21 12:00:28.127628 systemd[1]: Initializing machine ID from random generator. 
Apr 21 12:00:28.127643 systemd[1]: Queued start job for default target initrd.target. Apr 21 12:00:28.127659 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 12:00:28.127676 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 12:00:28.127693 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 21 12:00:28.127712 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 12:00:28.127745 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 21 12:00:28.127758 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 21 12:00:28.127776 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 21 12:00:28.127791 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 21 12:00:28.127807 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 12:00:28.127824 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 12:00:28.127844 systemd[1]: Reached target paths.target - Path Units. Apr 21 12:00:28.127859 systemd[1]: Reached target slices.target - Slice Units. Apr 21 12:00:28.127875 systemd[1]: Reached target swap.target - Swaps. Apr 21 12:00:28.127891 systemd[1]: Reached target timers.target - Timer Units. Apr 21 12:00:28.127907 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 12:00:28.127922 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 12:00:28.127938 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 21 12:00:28.127955 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 21 12:00:28.127971 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 12:00:28.127990 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 12:00:28.128006 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 12:00:28.128021 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 12:00:28.128038 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 12:00:28.128054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 12:00:28.128070 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 12:00:28.128086 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 12:00:28.128101 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 12:00:28.128121 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 12:00:28.128137 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 12:00:28.128181 systemd-journald[177]: Collecting audit messages is disabled. Apr 21 12:00:28.128217 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 12:00:28.128237 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 12:00:28.128254 systemd-journald[177]: Journal started Apr 21 12:00:28.128286 systemd-journald[177]: Runtime Journal (/run/log/journal/d891cf2bc5d9488f9eac4c8571461f55) is 8.0M, max 158.7M, 150.7M free. Apr 21 12:00:28.130919 systemd-modules-load[178]: Inserted module 'overlay' Apr 21 12:00:28.139899 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 12:00:28.144548 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 12:00:28.162271 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Apr 21 12:00:28.171914 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 12:00:28.178851 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 12:00:28.191746 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 21 12:00:28.192013 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 12:00:28.202924 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 12:00:28.206675 kernel: Bridge firewalling registered Apr 21 12:00:28.212798 systemd-modules-load[178]: Inserted module 'br_netfilter' Apr 21 12:00:28.215976 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 12:00:28.222976 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 12:00:28.237946 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 12:00:28.253919 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 12:00:28.260115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 12:00:28.269942 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 12:00:28.276041 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 12:00:28.288272 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 12:00:28.295919 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 21 12:00:28.307247 dracut-cmdline[212]: dracut-dracut-053 Apr 21 12:00:28.312756 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 12:00:28.356283 systemd-resolved[218]: Positive Trust Anchors: Apr 21 12:00:28.357766 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 12:00:28.357826 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 12:00:28.384211 systemd-resolved[218]: Defaulting to hostname 'linux'. Apr 21 12:00:28.387978 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 12:00:28.394071 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 12:00:28.404746 kernel: SCSI subsystem initialized Apr 21 12:00:28.414746 kernel: Loading iSCSI transport class v2.0-870. 
Apr 21 12:00:28.426746 kernel: iscsi: registered transport (tcp) Apr 21 12:00:28.447594 kernel: iscsi: registered transport (qla4xxx) Apr 21 12:00:28.447664 kernel: QLogic iSCSI HBA Driver Apr 21 12:00:28.483459 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 21 12:00:28.496911 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 12:00:28.526873 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 21 12:00:28.526978 kernel: device-mapper: uevent: version 1.0.3 Apr 21 12:00:28.530413 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 12:00:28.570748 kernel: raid6: avx512x4 gen() 18372 MB/s Apr 21 12:00:28.590742 kernel: raid6: avx512x2 gen() 18249 MB/s Apr 21 12:00:28.609735 kernel: raid6: avx512x1 gen() 18527 MB/s Apr 21 12:00:28.628737 kernel: raid6: avx2x4 gen() 18524 MB/s Apr 21 12:00:28.648740 kernel: raid6: avx2x2 gen() 18377 MB/s Apr 21 12:00:28.668790 kernel: raid6: avx2x1 gen() 14040 MB/s Apr 21 12:00:28.668826 kernel: raid6: using algorithm avx512x1 gen() 18527 MB/s Apr 21 12:00:28.691165 kernel: raid6: .... xor() 26681 MB/s, rmw enabled Apr 21 12:00:28.691229 kernel: raid6: using avx512x2 recovery algorithm Apr 21 12:00:28.714760 kernel: xor: automatically using best checksumming function avx Apr 21 12:00:28.861754 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 12:00:28.871611 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 12:00:28.882923 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 12:00:28.897893 systemd-udevd[397]: Using default interface naming scheme 'v255'. Apr 21 12:00:28.902516 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 12:00:28.918900 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 21 12:00:28.933810 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Apr 21 12:00:28.962001 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 12:00:28.971869 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 12:00:29.016433 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 12:00:29.028962 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 21 12:00:29.051231 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 12:00:29.056159 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 12:00:29.060092 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 12:00:29.074119 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 12:00:29.087898 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 12:00:29.109403 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 12:00:29.129763 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 12:00:29.143358 kernel: hv_vmbus: Vmbus version:5.2 Apr 21 12:00:29.147432 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 12:00:29.147604 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 12:00:29.159972 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 12:00:29.163602 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 12:00:29.163835 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 12:00:29.177767 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 12:00:29.186773 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 21 12:00:29.197805 kernel: pps_core: LinuxPPS API ver. 
1 registered Apr 21 12:00:29.197849 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Apr 21 12:00:29.197869 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 21 12:00:29.202009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 12:00:29.214754 kernel: PTP clock support registered Apr 21 12:00:29.225814 kernel: hv_vmbus: registering driver hv_storvsc Apr 21 12:00:29.225855 kernel: hv_utils: Registering HyperV Utility Driver Apr 21 12:00:29.229673 kernel: hv_vmbus: registering driver hv_utils Apr 21 12:00:29.233744 kernel: hv_utils: Heartbeat IC version 3.0 Apr 21 12:00:29.236747 kernel: hv_utils: Shutdown IC version 3.2 Apr 21 12:00:30.329849 kernel: hv_utils: TimeSync IC version 4.0 Apr 21 12:00:30.329880 kernel: hv_vmbus: registering driver hv_netvsc Apr 21 12:00:30.329800 systemd-resolved[218]: Clock change detected. Flushing caches. Apr 21 12:00:30.341125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 12:00:30.349005 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 12:00:30.359866 kernel: scsi host1: storvsc_host_t Apr 21 12:00:30.362859 kernel: scsi host0: storvsc_host_t Apr 21 12:00:30.366854 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 21 12:00:30.366903 kernel: AVX2 version of gcm_enc/dec engaged. 
Apr 21 12:00:30.373438 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Apr 21 12:00:30.373500 kernel: AES CTR mode by8 optimization enabled Apr 21 12:00:30.384847 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 21 12:00:30.396852 kernel: hv_vmbus: registering driver hid_hyperv Apr 21 12:00:30.396898 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Apr 21 12:00:30.407869 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 21 12:00:30.408535 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 12:00:30.432247 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 21 12:00:30.432526 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 12:00:30.433850 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 21 12:00:30.453106 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#268 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 21 12:00:30.459694 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 21 12:00:30.460032 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 21 12:00:30.460252 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 21 12:00:30.462886 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 21 12:00:30.466936 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 21 12:00:30.474853 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 12:00:30.477856 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 21 12:00:30.484212 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#116 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 21 12:00:30.543169 kernel: hv_netvsc 000d3ade-2385-000d-3ade-2385000d3ade eth0: VF slot 1 added Apr 21 12:00:30.551845 kernel: hv_vmbus: registering driver hv_pci Apr 21 12:00:30.555841 kernel: hv_pci a689cf78-51b2-406f-ba79-8b2f40bee106: PCI VMBus probing: Using 
version 0x10004 Apr 21 12:00:30.563127 kernel: hv_pci a689cf78-51b2-406f-ba79-8b2f40bee106: PCI host bridge to bus 51b2:00 Apr 21 12:00:30.563413 kernel: pci_bus 51b2:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Apr 21 12:00:30.566609 kernel: pci_bus 51b2:00: No busn resource found for root bus, will use [bus 00-ff] Apr 21 12:00:30.572212 kernel: pci 51b2:00:02.0: [15b3:1016] type 00 class 0x020000 Apr 21 12:00:30.575901 kernel: pci 51b2:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 21 12:00:30.579860 kernel: pci 51b2:00:02.0: enabling Extended Tags Apr 21 12:00:30.591936 kernel: pci 51b2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 51b2:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Apr 21 12:00:30.592182 kernel: pci_bus 51b2:00: busn_res: [bus 00-ff] end is updated to 00 Apr 21 12:00:30.597489 kernel: pci 51b2:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 21 12:00:30.767246 kernel: mlx5_core 51b2:00:02.0: enabling device (0000 -> 0002) Apr 21 12:00:30.771857 kernel: mlx5_core 51b2:00:02.0: firmware version: 14.30.5026 Apr 21 12:00:30.957372 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Apr 21 12:00:30.995402 kernel: hv_netvsc 000d3ade-2385-000d-3ade-2385000d3ade eth0: VF registering: eth1 Apr 21 12:00:30.995722 kernel: mlx5_core 51b2:00:02.0 eth1: joined to eth0 Apr 21 12:00:31.001922 kernel: mlx5_core 51b2:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 21 12:00:31.011850 kernel: mlx5_core 51b2:00:02.0 enP20914s1: renamed from eth1 Apr 21 12:00:31.017852 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (444) Apr 21 12:00:31.036141 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 21 12:00:31.094886 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Apr 21 12:00:31.150857 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (443) Apr 21 12:00:31.165012 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 21 12:00:31.169084 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 21 12:00:31.191037 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 12:00:31.209842 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 12:00:31.217954 kernel: GPT:disk_guids don't match. Apr 21 12:00:31.218018 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 12:00:31.220492 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 12:00:31.232847 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 12:00:32.233688 disk-uuid[606]: The operation has completed successfully. Apr 21 12:00:32.243659 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 12:00:32.329796 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 12:00:32.329960 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 12:00:32.367021 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 12:00:32.378402 sh[719]: Success Apr 21 12:00:32.412851 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 21 12:00:32.722879 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 12:00:32.739948 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 12:00:32.746225 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 21 12:00:32.768848 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539 Apr 21 12:00:32.768902 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 12:00:32.774875 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 12:00:32.778034 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 12:00:32.780691 kernel: BTRFS info (device dm-0): using free space tree Apr 21 12:00:33.211768 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 12:00:33.218066 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 12:00:33.227997 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 12:00:33.235069 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 12:00:33.254762 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 12:00:33.254839 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 12:00:33.257697 kernel: BTRFS info (device sda6): using free space tree Apr 21 12:00:33.295853 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 12:00:33.308211 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 12:00:33.315082 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 12:00:33.325174 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 12:00:33.339173 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 21 12:00:33.349581 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 12:00:33.359290 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 21 12:00:33.383749 systemd-networkd[903]: lo: Link UP Apr 21 12:00:33.383762 systemd-networkd[903]: lo: Gained carrier Apr 21 12:00:33.386029 systemd-networkd[903]: Enumeration completed Apr 21 12:00:33.386303 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 12:00:33.388437 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 12:00:33.388442 systemd-networkd[903]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 12:00:33.391087 systemd[1]: Reached target network.target - Network. Apr 21 12:00:33.455852 kernel: mlx5_core 51b2:00:02.0 enP20914s1: Link up Apr 21 12:00:33.499035 kernel: hv_netvsc 000d3ade-2385-000d-3ade-2385000d3ade eth0: Data path switched to VF: enP20914s1 Apr 21 12:00:33.499385 systemd-networkd[903]: enP20914s1: Link UP Apr 21 12:00:33.499504 systemd-networkd[903]: eth0: Link UP Apr 21 12:00:33.499685 systemd-networkd[903]: eth0: Gained carrier Apr 21 12:00:33.499698 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 21 12:00:33.510998 systemd-networkd[903]: enP20914s1: Gained carrier Apr 21 12:00:33.545875 systemd-networkd[903]: eth0: DHCPv4 address 10.0.0.17/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 21 12:00:34.325801 ignition[890]: Ignition 2.19.0 Apr 21 12:00:34.325817 ignition[890]: Stage: fetch-offline Apr 21 12:00:34.325873 ignition[890]: no configs at "/usr/lib/ignition/base.d" Apr 21 12:00:34.325883 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 12:00:34.325988 ignition[890]: parsed url from cmdline: "" Apr 21 12:00:34.325993 ignition[890]: no config URL provided Apr 21 12:00:34.325999 ignition[890]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 12:00:34.326009 ignition[890]: no config at "/usr/lib/ignition/user.ign" Apr 21 12:00:34.326017 ignition[890]: failed to fetch config: resource requires networking Apr 21 12:00:34.326230 ignition[890]: Ignition finished successfully Apr 21 12:00:34.350557 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 12:00:34.364987 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 21 12:00:34.387413 ignition[911]: Ignition 2.19.0 Apr 21 12:00:34.387426 ignition[911]: Stage: fetch Apr 21 12:00:34.387651 ignition[911]: no configs at "/usr/lib/ignition/base.d" Apr 21 12:00:34.387665 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 12:00:34.388117 ignition[911]: parsed url from cmdline: "" Apr 21 12:00:34.388121 ignition[911]: no config URL provided Apr 21 12:00:34.388126 ignition[911]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 12:00:34.388134 ignition[911]: no config at "/usr/lib/ignition/user.ign" Apr 21 12:00:34.388494 ignition[911]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 21 12:00:34.509271 ignition[911]: GET result: OK Apr 21 12:00:34.509385 ignition[911]: config has been read from IMDS userdata Apr 21 12:00:34.509416 ignition[911]: parsing config with SHA512: e8da097681918847ea9d3de87fdbba1f3ac2d449b47c8aff4a8abb8c07a2f7368f24c4e88ba2b0addb9a5615f35070d70f72e57b7ed95acb4f42e9229af73b70 Apr 21 12:00:34.513362 unknown[911]: fetched base config from "system" Apr 21 12:00:34.513758 ignition[911]: fetch: fetch complete Apr 21 12:00:34.513369 unknown[911]: fetched base config from "system" Apr 21 12:00:34.513762 ignition[911]: fetch: fetch passed Apr 21 12:00:34.513374 unknown[911]: fetched user config from "azure" Apr 21 12:00:34.513813 ignition[911]: Ignition finished successfully Apr 21 12:00:34.530511 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 21 12:00:34.540020 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 21 12:00:34.555951 systemd-networkd[903]: eth0: Gained IPv6LL Apr 21 12:00:34.557866 ignition[917]: Ignition 2.19.0 Apr 21 12:00:34.557872 ignition[917]: Stage: kargs Apr 21 12:00:34.558043 ignition[917]: no configs at "/usr/lib/ignition/base.d" Apr 21 12:00:34.558052 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 12:00:34.566295 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 21 12:00:34.559360 ignition[917]: kargs: kargs passed Apr 21 12:00:34.559409 ignition[917]: Ignition finished successfully Apr 21 12:00:34.589039 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 21 12:00:34.605013 ignition[923]: Ignition 2.19.0 Apr 21 12:00:34.605025 ignition[923]: Stage: disks Apr 21 12:00:34.607051 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 21 12:00:34.605256 ignition[923]: no configs at "/usr/lib/ignition/base.d" Apr 21 12:00:34.611012 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 21 12:00:34.605271 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 21 12:00:34.616409 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 12:00:34.606137 ignition[923]: disks: disks passed Apr 21 12:00:34.626413 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 12:00:34.606192 ignition[923]: Ignition finished successfully Apr 21 12:00:34.636553 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 12:00:34.653595 systemd[1]: Reached target basic.target - Basic System. Apr 21 12:00:34.664975 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 21 12:00:34.739209 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 21 12:00:34.747408 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Apr 21 12:00:34.762999 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 12:00:34.858844 kernel: EXT4-fs (sda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none.
Apr 21 12:00:34.859387 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 12:00:34.862810 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 12:00:34.901993 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 12:00:34.920849 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942)
Apr 21 12:00:34.930882 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 12:00:34.930958 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 12:00:34.930986 kernel: BTRFS info (device sda6): using free space tree
Apr 21 12:00:34.932989 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 12:00:34.945031 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 12:00:34.943027 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 21 12:00:34.951924 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 12:00:34.951962 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 12:00:34.957124 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 12:00:34.964176 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 12:00:34.976035 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 12:00:35.682717 coreos-metadata[959]: Apr 21 12:00:35.682 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 21 12:00:35.688774 coreos-metadata[959]: Apr 21 12:00:35.688 INFO Fetch successful
Apr 21 12:00:35.692179 coreos-metadata[959]: Apr 21 12:00:35.691 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 21 12:00:35.709298 coreos-metadata[959]: Apr 21 12:00:35.709 INFO Fetch successful
Apr 21 12:00:35.730180 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 12:00:35.734309 coreos-metadata[959]: Apr 21 12:00:35.729 INFO wrote hostname ci-4081.3.7-a-a89817d5a7 to /sysroot/etc/hostname
Apr 21 12:00:35.739602 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 12:00:35.749077 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory
Apr 21 12:00:35.756791 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 12:00:35.762731 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 12:00:36.784867 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 12:00:36.795983 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 12:00:36.804985 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 12:00:36.811934 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 12:00:36.815043 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 12:00:36.841819 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 12:00:36.851269 ignition[1064]: INFO : Ignition 2.19.0
Apr 21 12:00:36.851269 ignition[1064]: INFO : Stage: mount
Apr 21 12:00:36.856104 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 12:00:36.856104 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 12:00:36.856104 ignition[1064]: INFO : mount: mount passed
Apr 21 12:00:36.856104 ignition[1064]: INFO : Ignition finished successfully
Apr 21 12:00:36.859158 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 12:00:36.877038 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 12:00:36.886083 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 12:00:36.908942 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1077)
Apr 21 12:00:36.908998 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 12:00:36.913842 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 12:00:36.918519 kernel: BTRFS info (device sda6): using free space tree
Apr 21 12:00:36.925848 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 12:00:36.928049 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 12:00:36.958980 ignition[1094]: INFO : Ignition 2.19.0
Apr 21 12:00:36.958980 ignition[1094]: INFO : Stage: files
Apr 21 12:00:36.964277 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 12:00:36.964277 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 12:00:36.964277 ignition[1094]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 12:00:36.964277 ignition[1094]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 12:00:36.964277 ignition[1094]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 12:00:37.057203 ignition[1094]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 12:00:37.061580 ignition[1094]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 12:00:37.061580 ignition[1094]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 12:00:37.057860 unknown[1094]: wrote ssh authorized keys file for user: core
Apr 21 12:00:37.095320 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 12:00:37.101334 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 12:00:37.200770 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 12:00:37.397571 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 12:00:37.397571 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 21 12:00:37.408896 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 21 12:00:37.999987 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 21 12:00:39.000627 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 21 12:00:39.000627 ignition[1094]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 21 12:00:39.022149 ignition[1094]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 12:00:39.031042 ignition[1094]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 12:00:39.031042 ignition[1094]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 21 12:00:39.031042 ignition[1094]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 12:00:39.031042 ignition[1094]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 12:00:39.031042 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 12:00:39.031042 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 12:00:39.031042 ignition[1094]: INFO : files: files passed
Apr 21 12:00:39.031042 ignition[1094]: INFO : Ignition finished successfully
Apr 21 12:00:39.027100 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 12:00:39.056214 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 12:00:39.074243 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 12:00:39.086208 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 12:00:39.086337 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 12:00:39.100151 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 12:00:39.100151 initrd-setup-root-after-ignition[1122]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 12:00:39.113935 initrd-setup-root-after-ignition[1126]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 12:00:39.103878 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 12:00:39.109324 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 12:00:39.134009 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 12:00:39.159145 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 12:00:39.159280 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 12:00:39.170049 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 12:00:39.173210 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 12:00:39.179390 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 12:00:39.192062 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 12:00:39.206814 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 12:00:39.217999 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 12:00:39.234805 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 12:00:39.235048 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 12:00:39.235586 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 12:00:39.236657 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 12:00:39.236802 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 12:00:39.237713 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 12:00:39.238256 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 12:00:39.238748 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 12:00:39.239341 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 12:00:39.239864 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 12:00:39.240380 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 12:00:39.240894 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 12:00:39.241441 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 12:00:39.241952 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 12:00:39.242417 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 12:00:39.242912 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 12:00:39.243046 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 12:00:39.244249 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 12:00:39.244741 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 12:00:39.245695 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 12:00:39.290723 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 12:00:39.294378 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 12:00:39.294556 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 12:00:39.300717 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 12:00:39.300917 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 12:00:39.366744 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 12:00:39.369418 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 12:00:39.374635 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 21 12:00:39.377449 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 12:00:39.394057 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 12:00:39.399017 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 12:00:39.399214 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 12:00:39.411640 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 12:00:39.414578 ignition[1146]: INFO : Ignition 2.19.0
Apr 21 12:00:39.414578 ignition[1146]: INFO : Stage: umount
Apr 21 12:00:39.414578 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 12:00:39.414578 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 21 12:00:39.428957 ignition[1146]: INFO : umount: umount passed
Apr 21 12:00:39.428957 ignition[1146]: INFO : Ignition finished successfully
Apr 21 12:00:39.419295 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 12:00:39.419516 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 12:00:39.431499 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 12:00:39.431663 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 12:00:39.459227 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 12:00:39.459359 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 12:00:39.468130 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 12:00:39.471069 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 12:00:39.476187 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 12:00:39.476258 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 12:00:39.485571 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 12:00:39.485644 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 12:00:39.491216 systemd[1]: Stopped target network.target - Network.
Apr 21 12:00:39.493518 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 12:00:39.493589 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 12:00:39.506970 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 12:00:39.511655 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 12:00:39.511742 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 12:00:39.518515 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 12:00:39.521097 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 12:00:39.530366 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 12:00:39.530425 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 12:00:39.531031 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 12:00:39.531073 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 12:00:39.531456 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 12:00:39.531505 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 12:00:39.531934 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 12:00:39.531975 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 12:00:39.532578 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 12:00:39.532931 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 12:00:39.533596 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 12:00:39.533808 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 12:00:39.544172 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 12:00:39.559217 systemd-networkd[903]: eth0: DHCPv6 lease lost
Apr 21 12:00:39.559220 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 12:00:39.559389 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 12:00:39.565102 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 12:00:39.565245 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 12:00:39.571713 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 12:00:39.571772 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 12:00:39.589000 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 12:00:39.591322 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 12:00:39.591391 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 12:00:39.665917 kernel: hv_netvsc 000d3ade-2385-000d-3ade-2385000d3ade eth0: Data path switched from VF: enP20914s1
Apr 21 12:00:39.591519 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 12:00:39.591560 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 12:00:39.592393 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 12:00:39.592428 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 12:00:39.592842 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 12:00:39.592877 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 12:00:39.593760 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 12:00:39.626578 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 12:00:39.626748 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 12:00:39.635616 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 12:00:39.635672 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 12:00:39.641066 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 12:00:39.641105 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 12:00:39.644325 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 12:00:39.644383 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 12:00:39.651803 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 12:00:39.651873 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 12:00:39.661883 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 12:00:39.661937 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 12:00:39.684015 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 12:00:39.693140 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 12:00:39.693212 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 12:00:39.699337 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 12:00:39.699397 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 12:00:39.749663 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 12:00:39.749792 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 12:00:39.758911 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 12:00:39.761993 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 12:00:42.058302 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 12:00:42.058444 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 12:00:42.062391 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 12:00:42.067495 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 12:00:42.067579 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 12:00:42.085151 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 12:00:44.954733 systemd[1]: Switching root.
Apr 21 12:00:45.035021 systemd-journald[177]: Journal stopped
Apr 21 12:00:52.732179 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Apr 21 12:00:52.732211 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 12:00:52.732227 kernel: SELinux: policy capability open_perms=1
Apr 21 12:00:52.732236 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 12:00:52.732244 kernel: SELinux: policy capability always_check_network=0
Apr 21 12:00:52.732252 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 12:00:52.732262 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 12:00:52.732270 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 12:00:52.732281 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 12:00:52.732290 kernel: audit: type=1403 audit(1776772846.965:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 12:00:52.732299 systemd[1]: Successfully loaded SELinux policy in 149.591ms.
Apr 21 12:00:52.732311 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.679ms.
Apr 21 12:00:52.732321 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 12:00:52.732331 systemd[1]: Detected virtualization microsoft.
Apr 21 12:00:52.732344 systemd[1]: Detected architecture x86-64.
Apr 21 12:00:52.732353 systemd[1]: Detected first boot.
Apr 21 12:00:52.732364 systemd[1]: Hostname set to .
Apr 21 12:00:52.732373 systemd[1]: Initializing machine ID from random generator.
Apr 21 12:00:52.732384 zram_generator::config[1188]: No configuration found.
Apr 21 12:00:52.732396 systemd[1]: Populated /etc with preset unit settings.
Apr 21 12:00:52.732406 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 12:00:52.732415 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 12:00:52.732425 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 12:00:52.732435 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 12:00:52.732446 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 12:00:52.732456 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 12:00:52.732468 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 12:00:52.732478 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 12:00:52.732488 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 12:00:52.732498 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 12:00:52.732508 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 12:00:52.732518 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 12:00:52.732529 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 12:00:52.732539 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 12:00:52.732551 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 12:00:52.732561 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 12:00:52.732571 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 12:00:52.732581 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 12:00:52.732591 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 12:00:52.732601 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 21 12:00:52.732614 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 21 12:00:52.732625 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 21 12:00:52.732635 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 12:00:52.732647 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 12:00:52.732658 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 12:00:52.732668 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 12:00:52.732678 systemd[1]: Reached target swap.target - Swaps.
Apr 21 12:00:52.732688 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 12:00:52.732698 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 12:00:52.732708 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 12:00:52.732721 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 12:00:52.732732 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 12:00:52.732742 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 12:00:52.732754 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 12:00:52.732764 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 12:00:52.732777 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 12:00:52.732787 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 12:00:52.732798 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 12:00:52.732808 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 12:00:52.732819 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 12:00:52.732843 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 12:00:52.732854 systemd[1]: Reached target machines.target - Containers.
Apr 21 12:00:52.732864 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 12:00:52.732877 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 12:00:52.732888 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 12:00:52.732898 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 12:00:52.732909 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 12:00:52.732919 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 12:00:52.732930 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 12:00:52.732940 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 12:00:52.732950 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 12:00:52.732963 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 12:00:52.732973 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 12:00:52.732984 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 12:00:52.732995 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 12:00:52.733006 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 12:00:52.733016 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 12:00:52.733026 kernel: ACPI: bus type drm_connector registered
Apr 21 12:00:52.733036 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 12:00:52.733047 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 12:00:52.733059 kernel: fuse: init (API version 7.39)
Apr 21 12:00:52.733069 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 12:00:52.733079 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 12:00:52.733090 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 12:00:52.733100 systemd[1]: Stopped verity-setup.service.
Apr 21 12:00:52.733111 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 12:00:52.733121 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 12:00:52.733131 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 12:00:52.733144 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 12:00:52.733155 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 12:00:52.733165 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 12:00:52.733175 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 12:00:52.733186 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 12:00:52.733199 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 12:00:52.733212 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 12:00:52.733224 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 12:00:52.733239 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 12:00:52.733252 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 12:00:52.733268 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 12:00:52.733315 systemd-journald[1280]: Collecting audit messages is disabled.
Apr 21 12:00:52.733343 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 12:00:52.733359 kernel: loop: module loaded
Apr 21 12:00:52.733374 systemd-journald[1280]: Journal started
Apr 21 12:00:52.733405 systemd-journald[1280]: Runtime Journal (/run/log/journal/80fd38f791e649ebb2fe919ad867d4b3) is 8.0M, max 158.7M, 150.7M free.
Apr 21 12:00:51.487239 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 12:00:51.678911 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 21 12:00:51.679311 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 12:00:52.740882 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 12:00:52.751320 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 12:00:52.752037 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 12:00:52.752223 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 12:00:52.755921 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 12:00:52.756095 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 12:00:52.759501 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 12:00:52.763208 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 12:00:52.785926 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 12:00:52.797971 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 12:00:52.813159 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 12:00:52.816605 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 12:00:52.816656 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 12:00:52.821457 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 12:00:52.831087 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 12:00:52.836243 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 12:00:52.839544 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 12:00:52.850009 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 12:00:52.857978 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 12:00:52.861697 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 12:00:52.868041 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 12:00:52.871296 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 12:00:52.872903 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 12:00:52.880390 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 12:00:52.883743 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 12:00:52.893251 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 12:00:52.900421 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 12:00:52.927904 udevadm[1325]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 21 12:00:53.082172 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 12:00:53.111225 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 12:00:53.120089 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 12:00:53.213609 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 12:00:53.230230 systemd-journald[1280]: Time spent on flushing to /var/log/journal/80fd38f791e649ebb2fe919ad867d4b3 is 16.756ms for 956 entries.
Apr 21 12:00:53.230230 systemd-journald[1280]: System Journal (/var/log/journal/80fd38f791e649ebb2fe919ad867d4b3) is 8.0M, max 2.6G, 2.6G free.
Apr 21 12:00:55.252270 systemd-journald[1280]: Received client request to flush runtime journal.
Apr 21 12:00:55.252393 kernel: loop0: detected capacity change from 0 to 142488
Apr 21 12:00:53.227035 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 12:00:53.656076 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 12:00:53.663699 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 12:00:53.675026 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 12:00:54.226166 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 12:00:54.824069 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 12:00:54.834073 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 12:00:55.250449 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Apr 21 12:00:55.250463 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Apr 21 12:00:55.255372 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 12:00:55.262507 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 12:00:56.679859 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 12:00:56.959858 kernel: loop1: detected capacity change from 0 to 219192
Apr 21 12:00:57.325852 kernel: loop2: detected capacity change from 0 to 31056
Apr 21 12:00:57.951485 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 12:00:57.961964 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 12:00:57.996499 systemd-udevd[1347]: Using default interface naming scheme 'v255'.
Apr 21 12:00:58.564246 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 12:00:58.565113 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 12:00:59.404857 kernel: loop3: detected capacity change from 0 to 140768
Apr 21 12:00:59.521300 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 12:00:59.535985 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 12:00:59.606834 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 21 12:01:00.072083 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 12:01:00.103936 kernel: hv_vmbus: registering driver hv_balloon
Apr 21 12:01:00.110151 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Apr 21 12:01:00.110261 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#90 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 21 12:01:00.167854 kernel: hv_vmbus: registering driver hyperv_fb
Apr 21 12:01:00.174123 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Apr 21 12:01:00.174234 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Apr 21 12:01:00.178663 kernel: Console: switching to colour dummy device 80x25
Apr 21 12:01:00.178458 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 12:01:00.183843 kernel: Console: switching to colour frame buffer device 128x48
Apr 21 12:01:00.201193 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 12:01:00.207138 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 12:01:00.207361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 12:01:00.220166 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 12:01:00.270347 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 12:01:00.298666 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 12:01:00.299570 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 12:01:00.327024 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 12:01:00.394668 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1360)
Apr 21 12:01:00.459269 systemd-networkd[1353]: lo: Link UP
Apr 21 12:01:00.459282 systemd-networkd[1353]: lo: Gained carrier
Apr 21 12:01:00.462496 systemd-networkd[1353]: Enumeration completed
Apr 21 12:01:00.462644 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 12:01:00.469135 systemd-networkd[1353]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 12:01:00.469144 systemd-networkd[1353]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 12:01:00.469568 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 12:01:00.541469 kernel: mlx5_core 51b2:00:02.0 enP20914s1: Link up
Apr 21 12:01:00.571850 kernel: hv_netvsc 000d3ade-2385-000d-3ade-2385000d3ade eth0: Data path switched to VF: enP20914s1
Apr 21 12:01:00.574517 systemd-networkd[1353]: enP20914s1: Link UP
Apr 21 12:01:00.574673 systemd-networkd[1353]: eth0: Link UP
Apr 21 12:01:00.574684 systemd-networkd[1353]: eth0: Gained carrier
Apr 21 12:01:00.574704 systemd-networkd[1353]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 12:01:00.593268 systemd-networkd[1353]: enP20914s1: Gained carrier
Apr 21 12:01:00.609516 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 21 12:01:00.619098 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 12:01:00.629905 systemd-networkd[1353]: eth0: DHCPv4 address 10.0.0.17/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 21 12:01:00.680010 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Apr 21 12:01:00.692928 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 12:01:00.752413 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 12:01:00.763125 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 12:01:00.781845 kernel: loop4: detected capacity change from 0 to 142488
Apr 21 12:01:00.803859 kernel: loop5: detected capacity change from 0 to 219192
Apr 21 12:01:00.845478 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 12:01:00.876896 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 12:01:00.882529 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 12:01:00.889117 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 12:01:00.894031 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 12:01:00.913850 kernel: loop6: detected capacity change from 0 to 31056
Apr 21 12:01:00.921162 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 12:01:00.931851 kernel: loop7: detected capacity change from 0 to 140768
Apr 21 12:01:00.947572 (sd-merge)[1447]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Apr 21 12:01:00.948200 (sd-merge)[1447]: Merged extensions into '/usr'.
Apr 21 12:01:00.952905 systemd[1]: Reloading requested from client PID 1321 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 12:01:00.952921 systemd[1]: Reloading...
Apr 21 12:01:01.024924 zram_generator::config[1481]: No configuration found.
Apr 21 12:01:01.219796 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 12:01:01.323401 systemd[1]: Reloading finished in 369 ms.
Apr 21 12:01:01.355833 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 12:01:01.360459 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 12:01:01.379007 systemd[1]: Starting ensure-sysext.service...
Apr 21 12:01:01.385033 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 12:01:01.400912 systemd[1]: Reloading requested from client PID 1543 ('systemctl') (unit ensure-sysext.service)...
Apr 21 12:01:01.401264 systemd[1]: Reloading...
Apr 21 12:01:01.422561 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 12:01:01.423128 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 12:01:01.424443 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 12:01:01.424930 systemd-tmpfiles[1544]: ACLs are not supported, ignoring.
Apr 21 12:01:01.425016 systemd-tmpfiles[1544]: ACLs are not supported, ignoring.
Apr 21 12:01:01.494364 zram_generator::config[1574]: No configuration found.
Apr 21 12:01:01.496325 systemd-tmpfiles[1544]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 12:01:01.496339 systemd-tmpfiles[1544]: Skipping /boot
Apr 21 12:01:01.514694 systemd-tmpfiles[1544]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 12:01:01.514710 systemd-tmpfiles[1544]: Skipping /boot
Apr 21 12:01:01.649672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 12:01:01.728040 systemd[1]: Reloading finished in 326 ms.
Apr 21 12:01:01.750328 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 12:01:01.756216 systemd-networkd[1353]: eth0: Gained IPv6LL
Apr 21 12:01:01.765487 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 12:01:01.780107 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 12:01:01.785289 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 12:01:01.793949 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 12:01:01.803137 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 12:01:01.807997 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 12:01:01.817156 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 12:01:01.817424 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 12:01:01.820157 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 12:01:01.827107 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 12:01:01.835138 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 12:01:01.838653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 12:01:01.838808 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 12:01:01.840597 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 12:01:01.840812 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 12:01:01.848127 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 12:01:01.848320 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 12:01:01.853209 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 12:01:01.853375 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 12:01:01.868737 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 12:01:01.870129 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 12:01:01.878153 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 12:01:01.891128 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 12:01:01.904181 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 12:01:01.911342 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 12:01:01.911517 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 12:01:01.914568 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 12:01:01.925505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 12:01:01.925792 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 12:01:01.930719 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 12:01:01.931061 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 12:01:01.935098 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 12:01:01.935275 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 12:01:01.941955 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 12:01:01.957088 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 12:01:01.958233 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 12:01:01.965940 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 12:01:01.967336 systemd-resolved[1638]: Positive Trust Anchors:
Apr 21 12:01:01.967347 systemd-resolved[1638]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 12:01:01.967388 systemd-resolved[1638]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 12:01:01.982391 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 12:01:01.990011 augenrules[1670]: No rules
Apr 21 12:01:01.989954 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 12:01:01.996099 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 12:01:01.999296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 12:01:01.999516 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 12:01:02.002764 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 12:01:02.005337 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 12:01:02.010262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 12:01:02.010441 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 12:01:02.014296 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 12:01:02.014518 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 12:01:02.018333 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 12:01:02.018461 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 12:01:02.022484 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 12:01:02.022806 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 12:01:02.028603 systemd-resolved[1638]: Using system hostname 'ci-4081.3.7-a-a89817d5a7'.
Apr 21 12:01:02.032215 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 12:01:02.036322 systemd[1]: Finished ensure-sysext.service.
Apr 21 12:01:02.043251 systemd[1]: Reached target network.target - Network.
Apr 21 12:01:02.045983 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 12:01:02.049209 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 12:01:02.052962 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 12:01:02.053040 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 12:01:02.842559 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 12:01:02.848701 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 12:01:07.146750 ldconfig[1317]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 12:01:07.165777 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 12:01:07.177011 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 12:01:07.186749 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 12:01:07.190391 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 12:01:07.193667 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 12:01:07.197132 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 12:01:07.200656 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 12:01:07.203966 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 12:01:07.207618 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 12:01:07.211315 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 12:01:07.211346 systemd[1]: Reached target paths.target - Path Units.
Apr 21 12:01:07.213937 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 12:01:07.217770 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 12:01:07.222153 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 12:01:07.234707 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 12:01:07.238391 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 12:01:07.241762 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 12:01:07.244605 systemd[1]: Reached target basic.target - Basic System.
Apr 21 12:01:07.247294 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 12:01:07.247314 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 12:01:07.256058 systemd[1]: Starting chronyd.service - NTP client/server...
Apr 21 12:01:07.261944 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 12:01:07.274810 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 21 12:01:07.280015 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 12:01:07.286036 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 12:01:07.296492 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 12:01:07.299462 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 12:01:07.299507 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Apr 21 12:01:07.303981 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Apr 21 12:01:07.307168 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Apr 21 12:01:07.308666 jq[1694]: false
Apr 21 12:01:07.315930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 12:01:07.321727 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 12:01:07.327368 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 12:01:07.333268 (chronyd)[1690]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Apr 21 12:01:07.337289 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 12:01:07.346631 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 12:01:07.353011 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 12:01:07.360565 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 12:01:07.364606 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 12:01:07.365157 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 12:01:07.367998 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 12:01:07.380851 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 12:01:07.387129 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 12:01:07.388872 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 12:01:07.396140 jq[1711]: true
Apr 21 12:01:07.423155 jq[1715]: true
Apr 21 12:01:07.433737 KVP[1698]: KVP starting; pid is:1698
Apr 21 12:01:07.457860 KVP[1698]: KVP LIC Version: 3.1
Apr 21 12:01:07.458846 kernel: hv_utils: KVP IC version 4.0
Apr 21 12:01:07.534276 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 12:01:07.534912 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 12:01:07.547308 (ntainerd)[1737]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 12:01:07.551389 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 12:01:07.551629 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 12:01:07.558184 extend-filesystems[1697]: Found loop4
Apr 21 12:01:07.558184 extend-filesystems[1697]: Found loop5
Apr 21 12:01:07.558184 extend-filesystems[1697]: Found loop6
Apr 21 12:01:07.558184 extend-filesystems[1697]: Found loop7
Apr 21 12:01:07.558184 extend-filesystems[1697]: Found sda
Apr 21 12:01:07.558184 extend-filesystems[1697]: Found sda1
Apr 21 12:01:07.558184 extend-filesystems[1697]: Found sda2
Apr 21 12:01:07.558184 extend-filesystems[1697]: Found sda3
Apr 21 12:01:07.558184 extend-filesystems[1697]: Found usr
Apr 21 12:01:07.558184 extend-filesystems[1697]: Found sda4
Apr 21 12:01:07.558184 extend-filesystems[1697]: Found sda6
Apr 21 12:01:07.558184 extend-filesystems[1697]: Found sda7
Apr 21 12:01:07.558184 extend-filesystems[1697]: Found sda9
Apr 21 12:01:07.558184 extend-filesystems[1697]: Checking size of /dev/sda9
Apr 21 12:01:07.571616 chronyd[1751]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Apr 21 12:01:07.607803 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 12:01:07.615807 tar[1714]: linux-amd64/LICENSE
Apr 21 12:01:07.615807 tar[1714]: linux-amd64/helm
Apr 21 12:01:07.620889 chronyd[1751]: Timezone right/UTC failed leap second check, ignoring
Apr 21 12:01:07.621120 chronyd[1751]: Loaded seccomp filter (level 2)
Apr 21 12:01:07.624267 systemd[1]: Started chronyd.service - NTP client/server.
Apr 21 12:01:07.641350 systemd-logind[1709]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 12:01:07.644943 systemd-logind[1709]: New seat seat0.
Apr 21 12:01:07.645701 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 12:01:07.658405 update_engine[1710]: I20260421 12:01:07.658316 1710 main.cc:92] Flatcar Update Engine starting
Apr 21 12:01:07.976854 extend-filesystems[1697]: Old size kept for /dev/sda9
Apr 21 12:01:07.976854 extend-filesystems[1697]: Found sr0
Apr 21 12:01:07.978432 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 12:01:07.978642 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 12:01:08.007895 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1760)
Apr 21 12:01:08.022337 bash[1736]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 12:01:08.023410 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 12:01:08.028814 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 21 12:01:08.039922 dbus-daemon[1693]: [system] SELinux support is enabled
Apr 21 12:01:08.042910 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 12:01:08.067552 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 12:01:08.067596 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 12:01:08.076446 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 12:01:08.076483 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 12:01:08.085120 update_engine[1710]: I20260421 12:01:08.084913 1710 update_check_scheduler.cc:74] Next update check in 9m26s
Apr 21 12:01:08.096743 dbus-daemon[1693]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 21 12:01:08.104421 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 12:01:08.116168 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 12:01:08.196007 coreos-metadata[1692]: Apr 21 12:01:08.195 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 21 12:01:08.198258 coreos-metadata[1692]: Apr 21 12:01:08.198 INFO Fetch successful
Apr 21 12:01:08.198258 coreos-metadata[1692]: Apr 21 12:01:08.198 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Apr 21 12:01:08.203934 coreos-metadata[1692]: Apr 21 12:01:08.203 INFO Fetch successful
Apr 21 12:01:08.204058 coreos-metadata[1692]: Apr 21 12:01:08.203 INFO Fetching http://168.63.129.16/machine/fb94c529-aa05-4e1b-a266-1454afab70fd/321069d9%2Dc974%2D4862%2Da9fa%2D8131d248f0cd.%5Fci%2D4081.3.7%2Da%2Da89817d5a7?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Apr 21 12:01:08.206967 coreos-metadata[1692]: Apr 21 12:01:08.205 INFO Fetch successful
Apr 21 12:01:08.206967 coreos-metadata[1692]: Apr 21 12:01:08.205 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Apr 21 12:01:08.218925 coreos-metadata[1692]: Apr 21 12:01:08.217 INFO Fetch successful
Apr 21 12:01:08.272346 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 21 12:01:08.276389 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 12:01:08.502407 tar[1714]: linux-amd64/README.md
Apr 21 12:01:08.502516 sshd_keygen[1754]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 12:01:08.520839 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 21 12:01:08.544917 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 12:01:08.556082 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 12:01:08.560048 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Apr 21 12:01:08.572845 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 12:01:08.573131 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 12:01:08.586652 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 12:01:08.593747 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Apr 21 12:01:08.610983 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 21 12:01:08.625256 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 21 12:01:08.629683 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 21 12:01:08.634191 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 12:01:08.730028 locksmithd[1799]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 12:01:08.889403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 12:01:08.901163 (kubelet)[1842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 12:01:09.414962 kubelet[1842]: E0421 12:01:09.414862 1842 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 12:01:09.417236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 12:01:09.417624 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 12:01:10.360427 containerd[1737]: time="2026-04-21T12:01:10.360333000Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 12:01:10.382202 containerd[1737]: time="2026-04-21T12:01:10.382151300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 12:01:10.383743 containerd[1737]: time="2026-04-21T12:01:10.383699600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 12:01:10.383743 containerd[1737]: time="2026-04-21T12:01:10.383735600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 12:01:10.383895 containerd[1737]: time="2026-04-21T12:01:10.383755200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 12:01:10.384029 containerd[1737]: time="2026-04-21T12:01:10.383986500Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 12:01:10.384092 containerd[1737]: time="2026-04-21T12:01:10.384037300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 12:01:10.384148 containerd[1737]: time="2026-04-21T12:01:10.384121500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 12:01:10.384188 containerd[1737]: time="2026-04-21T12:01:10.384146300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 12:01:10.384360 containerd[1737]: time="2026-04-21T12:01:10.384334300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 12:01:10.384360 containerd[1737]: time="2026-04-21T12:01:10.384357400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 12:01:10.384448 containerd[1737]: time="2026-04-21T12:01:10.384375000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 12:01:10.384448 containerd[1737]: time="2026-04-21T12:01:10.384389400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 12:01:10.384530 containerd[1737]: time="2026-04-21T12:01:10.384489400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 12:01:10.384733 containerd[1737]: time="2026-04-21T12:01:10.384704900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 12:01:10.384900 containerd[1737]: time="2026-04-21T12:01:10.384876500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 12:01:10.384950 containerd[1737]: time="2026-04-21T12:01:10.384898300Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 12:01:10.385033 containerd[1737]: time="2026-04-21T12:01:10.385010700Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 12:01:10.385108 containerd[1737]: time="2026-04-21T12:01:10.385089100Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 12:01:10.691890 containerd[1737]: time="2026-04-21T12:01:10.691769000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 12:01:10.691890 containerd[1737]: time="2026-04-21T12:01:10.691861100Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 12:01:10.692047 containerd[1737]: time="2026-04-21T12:01:10.691901400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 12:01:10.692047 containerd[1737]: time="2026-04-21T12:01:10.691932100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 12:01:10.692047 containerd[1737]: time="2026-04-21T12:01:10.691953400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 12:01:10.692155 containerd[1737]: time="2026-04-21T12:01:10.692137600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 12:01:10.692403 containerd[1737]: time="2026-04-21T12:01:10.692381400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 12:01:10.692536 containerd[1737]: time="2026-04-21T12:01:10.692513400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 12:01:10.692592 containerd[1737]: time="2026-04-21T12:01:10.692541000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 12:01:10.692592 containerd[1737]: time="2026-04-21T12:01:10.692559300Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 12:01:10.692669 containerd[1737]: time="2026-04-21T12:01:10.692590200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 12:01:10.692669 containerd[1737]: time="2026-04-21T12:01:10.692613700Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 12:01:10.692669 containerd[1737]: time="2026-04-21T12:01:10.692632200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 12:01:10.692669 containerd[1737]: time="2026-04-21T12:01:10.692651600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 12:01:10.692802 containerd[1737]: time="2026-04-21T12:01:10.692673500Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 12:01:10.692802 containerd[1737]: time="2026-04-21T12:01:10.692691600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 12:01:10.692802 containerd[1737]: time="2026-04-21T12:01:10.692709300Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 12:01:10.692802 containerd[1737]: time="2026-04-21T12:01:10.692727900Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 12:01:10.692802 containerd[1737]: time="2026-04-21T12:01:10.692753200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 12:01:10.692802 containerd[1737]: time="2026-04-21T12:01:10.692772600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 12:01:10.692802 containerd[1737]: time="2026-04-21T12:01:10.692790400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 12:01:10.693067 containerd[1737]: time="2026-04-21T12:01:10.692809200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 21 12:01:10.693067 containerd[1737]: time="2026-04-21T12:01:10.692839100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 21 12:01:10.693067 containerd[1737]: time="2026-04-21T12:01:10.692860300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 21 12:01:10.693067 containerd[1737]: time="2026-04-21T12:01:10.692877000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 21 12:01:10.693067 containerd[1737]: time="2026-04-21T12:01:10.692895200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 21 12:01:10.693067 containerd[1737]: time="2026-04-21T12:01:10.692922900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 21 12:01:10.693067 containerd[1737]: time="2026-04-21T12:01:10.692945800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 21 12:01:10.693067 containerd[1737]: time="2026-04-21T12:01:10.692962500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 21 12:01:10.693067 containerd[1737]: time="2026-04-21T12:01:10.692980200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 21 12:01:10.693067 containerd[1737]: time="2026-04-21T12:01:10.692998500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 21 12:01:10.693067 containerd[1737]: time="2026-04-21T12:01:10.693028200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 21 12:01:10.693067 containerd[1737]: time="2026-04-21T12:01:10.693057100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 21 12:01:10.693459 containerd[1737]: time="2026-04-21T12:01:10.693073700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 21 12:01:10.693459 containerd[1737]: time="2026-04-21T12:01:10.693089900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 21 12:01:10.693459 containerd[1737]: time="2026-04-21T12:01:10.693176200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 21 12:01:10.693459 containerd[1737]: time="2026-04-21T12:01:10.693203600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 21 12:01:10.693459 containerd[1737]: time="2026-04-21T12:01:10.693220500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 21 12:01:10.693459 containerd[1737]: time="2026-04-21T12:01:10.693237200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 21 12:01:10.693459 containerd[1737]: time="2026-04-21T12:01:10.693252800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 21 12:01:10.693459 containerd[1737]: time="2026-04-21T12:01:10.693270400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 21 12:01:10.693459 containerd[1737]: time="2026-04-21T12:01:10.693284400Z" level=info msg="NRI interface is disabled by configuration."
Apr 21 12:01:10.693459 containerd[1737]: time="2026-04-21T12:01:10.693299400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 21 12:01:10.693791 containerd[1737]: time="2026-04-21T12:01:10.693662300Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 21 12:01:10.693791 containerd[1737]: time="2026-04-21T12:01:10.693742000Z" level=info msg="Connect containerd service"
Apr 21 12:01:10.694202 containerd[1737]: time="2026-04-21T12:01:10.693799700Z" level=info msg="using legacy CRI server"
Apr 21 12:01:10.694202 containerd[1737]: time="2026-04-21T12:01:10.693810200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 21 12:01:10.694725 containerd[1737]: time="2026-04-21T12:01:10.694667400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 21 12:01:10.695324 containerd[1737]: time="2026-04-21T12:01:10.695292500Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 12:01:10.695852 containerd[1737]: time="2026-04-21T12:01:10.695499100Z" level=info msg="Start subscribing containerd event"
Apr 21 12:01:10.695852 containerd[1737]: time="2026-04-21T12:01:10.695565700Z" level=info msg="Start recovering state"
Apr 21 12:01:10.695852 containerd[1737]: time="2026-04-21T12:01:10.695647100Z" level=info msg="Start event monitor"
Apr 21 12:01:10.695852 containerd[1737]: time="2026-04-21T12:01:10.695653400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 21 12:01:10.695852 containerd[1737]: time="2026-04-21T12:01:10.695671000Z" level=info msg="Start snapshots syncer"
Apr 21 12:01:10.695852 containerd[1737]: time="2026-04-21T12:01:10.695717000Z" level=info msg="Start cni network conf syncer for default"
Apr 21 12:01:10.695852 containerd[1737]: time="2026-04-21T12:01:10.695728300Z" level=info msg="Start streaming server"
Apr 21 12:01:10.695852 containerd[1737]: time="2026-04-21T12:01:10.695729700Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 21 12:01:10.696930 systemd[1]: Started containerd.service - containerd container runtime.
Apr 21 12:01:10.701083 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 21 12:01:10.705763 containerd[1737]: time="2026-04-21T12:01:10.705630000Z" level=info msg="containerd successfully booted in 0.346293s"
Apr 21 12:01:10.706190 systemd[1]: Startup finished in 1.049s (kernel) + 18.014s (initrd) + 23.888s (userspace) = 42.953s.
Apr 21 12:01:11.173507 login[1832]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Apr 21 12:01:11.174167 login[1831]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Apr 21 12:01:11.183375 systemd-logind[1709]: New session 2 of user core.
Apr 21 12:01:11.185321 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 21 12:01:11.192464 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 21 12:01:11.381709 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 21 12:01:11.390154 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 21 12:01:11.394702 (systemd)[1862]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 21 12:01:11.519550 systemd[1862]: Queued start job for default target default.target.
Apr 21 12:01:11.527970 systemd[1862]: Created slice app.slice - User Application Slice.
Apr 21 12:01:11.528009 systemd[1862]: Reached target paths.target - Paths.
Apr 21 12:01:11.528027 systemd[1862]: Reached target timers.target - Timers.
Apr 21 12:01:11.529291 systemd[1862]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 21 12:01:11.540501 systemd[1862]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 21 12:01:11.540627 systemd[1862]: Reached target sockets.target - Sockets.
Apr 21 12:01:11.540646 systemd[1862]: Reached target basic.target - Basic System.
Apr 21 12:01:11.540689 systemd[1862]: Reached target default.target - Main User Target.
Apr 21 12:01:11.540725 systemd[1862]: Startup finished in 138ms.
Apr 21 12:01:11.541076 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 21 12:01:11.546987 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 21 12:01:12.175213 login[1832]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Apr 21 12:01:12.179960 systemd-logind[1709]: New session 1 of user core.
Apr 21 12:01:12.190023 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 21 12:01:15.651637 waagent[1829]: 2026-04-21T12:01:15.651530Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Apr 21 12:01:15.671778 waagent[1829]: 2026-04-21T12:01:15.652159Z INFO Daemon Daemon OS: flatcar 4081.3.7
Apr 21 12:01:15.671778 waagent[1829]: 2026-04-21T12:01:15.653211Z INFO Daemon Daemon Python: 3.11.9
Apr 21 12:01:15.671778 waagent[1829]: 2026-04-21T12:01:15.654309Z INFO Daemon Daemon Run daemon
Apr 21 12:01:15.671778 waagent[1829]: 2026-04-21T12:01:15.655192Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.7'
Apr 21 12:01:15.671778 waagent[1829]: 2026-04-21T12:01:15.656186Z INFO Daemon Daemon Using waagent for provisioning
Apr 21 12:01:15.671778 waagent[1829]: 2026-04-21T12:01:15.656845Z INFO Daemon Daemon Activate resource disk
Apr 21 12:01:15.671778 waagent[1829]: 2026-04-21T12:01:15.657198Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Apr 21 12:01:15.671778 waagent[1829]: 2026-04-21T12:01:15.661556Z INFO Daemon Daemon Found device: None
Apr 21 12:01:15.671778 waagent[1829]: 2026-04-21T12:01:15.662341Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Apr 21 12:01:15.671778 waagent[1829]: 2026-04-21T12:01:15.662854Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Apr 21 12:01:15.671778 waagent[1829]: 2026-04-21T12:01:15.664959Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Apr 21 12:01:15.671778 waagent[1829]: 2026-04-21T12:01:15.665866Z INFO Daemon Daemon Running default provisioning handler
Apr 21 12:01:16.105559 waagent[1829]: 2026-04-21T12:01:15.696078Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Apr 21 12:01:16.115451 waagent[1829]: 2026-04-21T12:01:16.115367Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Apr 21 12:01:16.122535 waagent[1829]: 2026-04-21T12:01:16.122440Z INFO Daemon Daemon cloud-init is enabled: False
Apr 21 12:01:16.125369 waagent[1829]: 2026-04-21T12:01:16.125311Z INFO Daemon Daemon Copying ovf-env.xml
Apr 21 12:01:16.695847 waagent[1829]: 2026-04-21T12:01:16.695026Z INFO Daemon Daemon Successfully mounted dvd
Apr 21 12:01:16.726409 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Apr 21 12:01:16.728779 waagent[1829]: 2026-04-21T12:01:16.728697Z INFO Daemon Daemon Detect protocol endpoint
Apr 21 12:01:16.731929 waagent[1829]: 2026-04-21T12:01:16.731867Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Apr 21 12:01:16.735485 waagent[1829]: 2026-04-21T12:01:16.735433Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Apr 21 12:01:16.739192 waagent[1829]: 2026-04-21T12:01:16.739141Z INFO Daemon Daemon Test for route to 168.63.129.16
Apr 21 12:01:16.742194 waagent[1829]: 2026-04-21T12:01:16.742143Z INFO Daemon Daemon Route to 168.63.129.16 exists
Apr 21 12:01:16.745201 waagent[1829]: 2026-04-21T12:01:16.745148Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Apr 21 12:01:16.771771 waagent[1829]: 2026-04-21T12:01:16.771709Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Apr 21 12:01:16.780706 waagent[1829]: 2026-04-21T12:01:16.772337Z INFO Daemon Daemon Wire protocol version:2012-11-30
Apr 21 12:01:16.780706 waagent[1829]: 2026-04-21T12:01:16.772792Z INFO Daemon Daemon Server preferred version:2015-04-05
Apr 21 12:01:16.979640 waagent[1829]: 2026-04-21T12:01:16.979482Z INFO Daemon Daemon Initializing goal state during protocol detection
Apr 21 12:01:16.985905 waagent[1829]: 2026-04-21T12:01:16.979935Z INFO Daemon Daemon Forcing an update of the goal state.
Apr 21 12:01:16.987932 waagent[1829]: 2026-04-21T12:01:16.987880Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Apr 21 12:01:17.002957 waagent[1829]: 2026-04-21T12:01:17.002908Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.181
Apr 21 12:01:17.020473 waagent[1829]: 2026-04-21T12:01:17.003573Z INFO Daemon
Apr 21 12:01:17.020473 waagent[1829]: 2026-04-21T12:01:17.003716Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 14cc4cfd-c6c9-4c33-93c5-aad7eba6d738 eTag: 5164228296586714851 source: Fabric]
Apr 21 12:01:17.020473 waagent[1829]: 2026-04-21T12:01:17.004987Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Apr 21 12:01:17.020473 waagent[1829]: 2026-04-21T12:01:17.005725Z INFO Daemon
Apr 21 12:01:17.020473 waagent[1829]: 2026-04-21T12:01:17.006676Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Apr 21 12:01:17.022892 waagent[1829]: 2026-04-21T12:01:17.022845Z INFO Daemon Daemon Downloading artifacts profile blob
Apr 21 12:01:17.137160 waagent[1829]: 2026-04-21T12:01:17.137068Z INFO Daemon Downloaded certificate {'thumbprint': '733275D4756C16B6B9FD697A065E991BFC6205C6', 'hasPrivateKey': True}
Apr 21 12:01:17.143104 waagent[1829]: 2026-04-21T12:01:17.143033Z INFO Daemon Fetch goal state completed
Apr 21 12:01:17.177390 waagent[1829]: 2026-04-21T12:01:17.177321Z INFO Daemon Daemon Starting provisioning
Apr 21 12:01:17.180651 waagent[1829]: 2026-04-21T12:01:17.180507Z INFO Daemon Daemon Handle ovf-env.xml.
Apr 21 12:01:17.194656 waagent[1829]: 2026-04-21T12:01:17.180741Z INFO Daemon Daemon Set hostname [ci-4081.3.7-a-a89817d5a7]
Apr 21 12:01:17.194656 waagent[1829]: 2026-04-21T12:01:17.183646Z INFO Daemon Daemon Publish hostname [ci-4081.3.7-a-a89817d5a7]
Apr 21 12:01:17.194656 waagent[1829]: 2026-04-21T12:01:17.184577Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Apr 21 12:01:17.194656 waagent[1829]: 2026-04-21T12:01:17.185635Z INFO Daemon Daemon Primary interface is [eth0]
Apr 21 12:01:17.243663 systemd-networkd[1353]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 12:01:17.243673 systemd-networkd[1353]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 12:01:17.243727 systemd-networkd[1353]: eth0: DHCP lease lost
Apr 21 12:01:17.245171 waagent[1829]: 2026-04-21T12:01:17.245084Z INFO Daemon Daemon Create user account if not exists
Apr 21 12:01:17.247849 waagent[1829]: 2026-04-21T12:01:17.245561Z INFO Daemon Daemon User core already exists, skip useradd
Apr 21 12:01:17.247849 waagent[1829]: 2026-04-21T12:01:17.246604Z INFO Daemon Daemon Configure sudoer
Apr 21 12:01:17.253919 systemd-networkd[1353]: eth0: DHCPv6 lease lost
Apr 21 12:01:17.293877 systemd-networkd[1353]: eth0: DHCPv4 address 10.0.0.17/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 21 12:01:17.309956 waagent[1829]: 2026-04-21T12:01:17.309855Z INFO Daemon Daemon Configure sshd
Apr 21 12:01:17.313110 waagent[1829]: 2026-04-21T12:01:17.313029Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Apr 21 12:01:17.319433 waagent[1829]: 2026-04-21T12:01:17.319359Z INFO Daemon Daemon Deploy ssh public key.
Apr 21 12:01:18.444130 waagent[1829]: 2026-04-21T12:01:18.444042Z INFO Daemon Daemon Provisioning complete
Apr 21 12:01:18.460647 waagent[1829]: 2026-04-21T12:01:18.460574Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Apr 21 12:01:18.464562 waagent[1829]: 2026-04-21T12:01:18.464479Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Apr 21 12:01:18.469832 waagent[1829]: 2026-04-21T12:01:18.469753Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Apr 21 12:01:18.596509 waagent[1912]: 2026-04-21T12:01:18.596400Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Apr 21 12:01:18.596941 waagent[1912]: 2026-04-21T12:01:18.596573Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.7
Apr 21 12:01:18.596941 waagent[1912]: 2026-04-21T12:01:18.596662Z INFO ExtHandler ExtHandler Python: 3.11.9
Apr 21 12:01:18.641877 waagent[1912]: 2026-04-21T12:01:18.641765Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.7; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Apr 21 12:01:18.642115 waagent[1912]: 2026-04-21T12:01:18.642063Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 21 12:01:18.642213 waagent[1912]: 2026-04-21T12:01:18.642168Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 21 12:01:18.649889 waagent[1912]: 2026-04-21T12:01:18.649801Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Apr 21 12:01:18.654941 waagent[1912]: 2026-04-21T12:01:18.654895Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.181
Apr 21 12:01:18.655372 waagent[1912]: 2026-04-21T12:01:18.655318Z INFO ExtHandler
Apr 21 12:01:18.655454 waagent[1912]: 2026-04-21T12:01:18.655410Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a53fb7f1-509d-4718-a428-53db8046e4a1 eTag: 5164228296586714851 source: Fabric]
Apr 21 12:01:18.655759 waagent[1912]: 2026-04-21T12:01:18.655708Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Apr 21 12:01:18.656322 waagent[1912]: 2026-04-21T12:01:18.656266Z INFO ExtHandler
Apr 21 12:01:18.656394 waagent[1912]: 2026-04-21T12:01:18.656353Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Apr 21 12:01:18.659458 waagent[1912]: 2026-04-21T12:01:18.659414Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Apr 21 12:01:18.718926 waagent[1912]: 2026-04-21T12:01:18.718760Z INFO ExtHandler Downloaded certificate {'thumbprint': '733275D4756C16B6B9FD697A065E991BFC6205C6', 'hasPrivateKey': True}
Apr 21 12:01:18.719404 waagent[1912]: 2026-04-21T12:01:18.719330Z INFO ExtHandler Fetch goal state completed
Apr 21 12:01:18.733741 waagent[1912]: 2026-04-21T12:01:18.733675Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1912
Apr 21 12:01:18.733922 waagent[1912]: 2026-04-21T12:01:18.733862Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Apr 21 12:01:18.735471 waagent[1912]: 2026-04-21T12:01:18.735413Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.7', '', 'Flatcar Container Linux by Kinvolk']
Apr 21 12:01:18.735839 waagent[1912]: 2026-04-21T12:01:18.735775Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Apr 21 12:01:18.806772 waagent[1912]: 2026-04-21T12:01:18.806715Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Apr 21 12:01:18.807025 waagent[1912]: 2026-04-21T12:01:18.806978Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Apr 21 12:01:18.813658 waagent[1912]: 2026-04-21T12:01:18.813613Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Apr 21 12:01:18.820706 systemd[1]: Reloading requested from client PID 1925 ('systemctl') (unit waagent.service)...
Apr 21 12:01:18.820724 systemd[1]: Reloading...
Apr 21 12:01:18.917911 zram_generator::config[1962]: No configuration found.
Apr 21 12:01:19.029424 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 12:01:19.111124 systemd[1]: Reloading finished in 289 ms.
Apr 21 12:01:19.140859 waagent[1912]: 2026-04-21T12:01:19.139119Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Apr 21 12:01:19.146585 systemd[1]: Reloading requested from client PID 2016 ('systemctl') (unit waagent.service)...
Apr 21 12:01:19.146602 systemd[1]: Reloading...
Apr 21 12:01:19.231849 zram_generator::config[2047]: No configuration found.
Apr 21 12:01:19.361035 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 12:01:19.442762 systemd[1]: Reloading finished in 295 ms.
Apr 21 12:01:19.473753 waagent[1912]: 2026-04-21T12:01:19.472011Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Apr 21 12:01:19.473753 waagent[1912]: 2026-04-21T12:01:19.472195Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Apr 21 12:01:19.481046 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 21 12:01:19.487060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 12:01:19.648839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 12:01:19.653363 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 12:01:20.325936 kubelet[2119]: E0421 12:01:20.325878 2119 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 12:01:20.330318 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 12:01:20.330529 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 12:01:22.344727 waagent[1912]: 2026-04-21T12:01:22.344634Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Apr 21 12:01:22.345431 waagent[1912]: 2026-04-21T12:01:22.345366Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Apr 21 12:01:22.346229 waagent[1912]: 2026-04-21T12:01:22.346151Z INFO ExtHandler ExtHandler Starting env monitor service.
Apr 21 12:01:22.346695 waagent[1912]: 2026-04-21T12:01:22.346629Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Apr 21 12:01:22.346907 waagent[1912]: 2026-04-21T12:01:22.346865Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 21 12:01:22.347661 waagent[1912]: 2026-04-21T12:01:22.347567Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Apr 21 12:01:22.347732 waagent[1912]: 2026-04-21T12:01:22.347664Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Apr 21 12:01:22.347929 waagent[1912]: 2026-04-21T12:01:22.347886Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 21 12:01:22.348194 waagent[1912]: 2026-04-21T12:01:22.348121Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Apr 21 12:01:22.348445 waagent[1912]: 2026-04-21T12:01:22.348401Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Apr 21 12:01:22.348445 waagent[1912]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Apr 21 12:01:22.348445 waagent[1912]: eth0 00000000 0100000A 0003 0 0 1024 00000000 0 0 0
Apr 21 12:01:22.348445 waagent[1912]: eth0 0000000A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Apr 21 12:01:22.348445 waagent[1912]: eth0 0100000A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Apr 21 12:01:22.348445 waagent[1912]: eth0 10813FA8 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0
Apr 21 12:01:22.348445 waagent[1912]: eth0 FEA9FEA9 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0
Apr 21 12:01:22.349137 waagent[1912]: 2026-04-21T12:01:22.348212Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 21 12:01:22.349137 waagent[1912]: 2026-04-21T12:01:22.349015Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 21 12:01:22.349248 waagent[1912]: 2026-04-21T12:01:22.349193Z INFO EnvHandler ExtHandler Configure routes
Apr 21 12:01:22.349329 waagent[1912]: 2026-04-21T12:01:22.349287Z INFO EnvHandler ExtHandler Gateway:None
Apr 21 12:01:22.349414 waagent[1912]: 2026-04-21T12:01:22.349371Z INFO EnvHandler ExtHandler Routes:None
Apr 21 12:01:22.349693 waagent[1912]: 2026-04-21T12:01:22.349625Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Apr 21 12:01:22.349928 waagent[1912]: 2026-04-21T12:01:22.349882Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Apr 21 12:01:22.350034 waagent[1912]: 2026-04-21T12:01:22.349994Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Apr 21 12:01:22.362151 waagent[1912]: 2026-04-21T12:01:22.362085Z INFO ExtHandler ExtHandler
Apr 21 12:01:22.362236 waagent[1912]: 2026-04-21T12:01:22.362185Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 167ab1ca-41df-4f5d-8158-e3978155daae correlation 50fcd0f0-52a4-4481-9e4b-6acab3aa62d5 created: 2026-04-21T11:59:57.221451Z]
Apr 21 12:01:22.362585 waagent[1912]: 2026-04-21T12:01:22.362537Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Apr 21 12:01:22.363122 waagent[1912]: 2026-04-21T12:01:22.363077Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Apr 21 12:01:22.397492 waagent[1912]: 2026-04-21T12:01:22.397425Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 335EF22F-5820-4ADA-9A87-31D21AF2A95D;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Apr 21 12:01:22.437790 waagent[1912]: 2026-04-21T12:01:22.437708Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Apr 21 12:01:22.437790 waagent[1912]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 21 12:01:22.437790 waagent[1912]: pkts bytes target prot opt in out source destination
Apr 21 12:01:22.437790 waagent[1912]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Apr 21 12:01:22.437790 waagent[1912]: pkts bytes target prot opt in out source destination
Apr 21 12:01:22.437790 waagent[1912]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 21 12:01:22.437790 waagent[1912]: pkts bytes target prot opt in out source destination
Apr 21 12:01:22.437790 waagent[1912]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Apr 21 12:01:22.437790 waagent[1912]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Apr 21 12:01:22.437790 waagent[1912]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Apr 21 12:01:22.441171 waagent[1912]: 2026-04-21T12:01:22.441110Z INFO EnvHandler ExtHandler Current Firewall rules:
Apr 21 12:01:22.441171 waagent[1912]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 21 12:01:22.441171 waagent[1912]: pkts bytes target prot opt in out source destination
Apr 21 12:01:22.441171 waagent[1912]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Apr 21 12:01:22.441171 waagent[1912]: pkts bytes target prot opt in out source destination
Apr 21 12:01:22.441171 waagent[1912]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Apr 21 12:01:22.441171 waagent[1912]: pkts bytes target prot opt in out source destination
Apr 21 12:01:22.441171 waagent[1912]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Apr 21 12:01:22.441171 waagent[1912]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Apr 21 12:01:22.441171 waagent[1912]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Apr 21 12:01:22.441564 waagent[1912]: 2026-04-21T12:01:22.441418Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Apr 21 12:01:22.449737 waagent[1912]: 2026-04-21T12:01:22.449680Z INFO MonitorHandler ExtHandler Network interfaces:
Apr 21 12:01:22.449737 waagent[1912]: Executing ['ip', '-a', '-o', 'link']:
Apr 21 12:01:22.449737 waagent[1912]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Apr 21 12:01:22.449737 waagent[1912]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:de:23:85 brd ff:ff:ff:ff:ff:ff
Apr 21 12:01:22.449737 waagent[1912]: 3: enP20914s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:de:23:85 brd ff:ff:ff:ff:ff:ff\ altname enP20914p0s2
Apr 21 12:01:22.449737 waagent[1912]: Executing ['ip', '-4', '-a', '-o', 'address']:
Apr 21 12:01:22.449737 waagent[1912]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Apr 21 12:01:22.449737 waagent[1912]: 2: eth0 inet 10.0.0.17/24 metric 1024 brd 10.0.0.255 scope global eth0\ valid_lft forever preferred_lft forever
Apr 21 12:01:22.449737 waagent[1912]: Executing ['ip', '-6', '-a', '-o', 'address']:
Apr 21 12:01:22.449737 waagent[1912]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Apr 21 12:01:22.449737 waagent[1912]: 2: eth0 inet6 fe80::20d:3aff:fede:2385/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Apr 21 12:01:30.480321 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 21 12:01:30.486077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 12:01:30.589841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 12:01:30.594213 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 12:01:30.629154 kubelet[2163]: E0421 12:01:30.629098 2163 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 12:01:30.631502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 12:01:30.631717 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 12:01:31.409438 chronyd[1751]: Selected source PHC0
Apr 21 12:01:40.730558 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 21 12:01:40.736158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 12:01:40.857406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 12:01:40.868146 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 12:01:41.515217 kubelet[2178]: E0421 12:01:41.515139 2178 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 12:01:41.517924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 12:01:41.518139 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 12:01:44.129284 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 21 12:01:44.134132 systemd[1]: Started sshd@0-10.0.0.17:22-20.229.252.112:50132.service - OpenSSH per-connection server daemon (20.229.252.112:50132).
Apr 21 12:01:44.352580 sshd[2186]: Accepted publickey for core from 20.229.252.112 port 50132 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:01:44.354097 sshd[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:01:44.358720 systemd-logind[1709]: New session 3 of user core.
Apr 21 12:01:44.364048 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 21 12:01:44.495241 systemd[1]: Started sshd@1-10.0.0.17:22-20.229.252.112:50142.service - OpenSSH per-connection server daemon (20.229.252.112:50142).
Apr 21 12:01:44.621615 sshd[2191]: Accepted publickey for core from 20.229.252.112 port 50142 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:01:44.623059 sshd[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:01:44.627580 systemd-logind[1709]: New session 4 of user core.
Apr 21 12:01:44.637979 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 21 12:01:44.735903 sshd[2191]: pam_unix(sshd:session): session closed for user core
Apr 21 12:01:44.739426 systemd[1]: sshd@1-10.0.0.17:22-20.229.252.112:50142.service: Deactivated successfully.
Apr 21 12:01:44.741068 systemd[1]: session-4.scope: Deactivated successfully.
Apr 21 12:01:44.741980 systemd-logind[1709]: Session 4 logged out. Waiting for processes to exit.
Apr 21 12:01:44.742969 systemd-logind[1709]: Removed session 4.
Apr 21 12:01:44.758240 systemd[1]: Started sshd@2-10.0.0.17:22-20.229.252.112:50152.service - OpenSSH per-connection server daemon (20.229.252.112:50152).
Apr 21 12:01:44.882661 sshd[2198]: Accepted publickey for core from 20.229.252.112 port 50152 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:01:44.884134 sshd[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:01:44.888665 systemd-logind[1709]: New session 5 of user core.
Apr 21 12:01:44.898024 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 21 12:01:44.991136 sshd[2198]: pam_unix(sshd:session): session closed for user core
Apr 21 12:01:44.994068 systemd[1]: sshd@2-10.0.0.17:22-20.229.252.112:50152.service: Deactivated successfully.
Apr 21 12:01:44.996183 systemd[1]: session-5.scope: Deactivated successfully.
Apr 21 12:01:44.997856 systemd-logind[1709]: Session 5 logged out. Waiting for processes to exit.
Apr 21 12:01:44.999125 systemd-logind[1709]: Removed session 5.
Apr 21 12:01:45.021481 systemd[1]: Started sshd@3-10.0.0.17:22-20.229.252.112:54706.service - OpenSSH per-connection server daemon (20.229.252.112:54706).
Apr 21 12:01:45.146852 sshd[2205]: Accepted publickey for core from 20.229.252.112 port 54706 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:01:45.148296 sshd[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:01:45.152344 systemd-logind[1709]: New session 6 of user core.
Apr 21 12:01:45.157984 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 21 12:01:45.257711 sshd[2205]: pam_unix(sshd:session): session closed for user core
Apr 21 12:01:45.260390 systemd[1]: sshd@3-10.0.0.17:22-20.229.252.112:54706.service: Deactivated successfully.
Apr 21 12:01:45.262432 systemd[1]: session-6.scope: Deactivated successfully.
Apr 21 12:01:45.263932 systemd-logind[1709]: Session 6 logged out. Waiting for processes to exit.
Apr 21 12:01:45.264934 systemd-logind[1709]: Removed session 6.
Apr 21 12:01:45.287765 systemd[1]: Started sshd@4-10.0.0.17:22-20.229.252.112:54712.service - OpenSSH per-connection server daemon (20.229.252.112:54712).
Apr 21 12:01:45.409641 sshd[2212]: Accepted publickey for core from 20.229.252.112 port 54712 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:01:45.410254 sshd[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:01:45.415926 systemd-logind[1709]: New session 7 of user core.
Apr 21 12:01:45.422009 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 21 12:01:45.638615 sudo[2215]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 21 12:01:45.639244 sudo[2215]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 12:01:45.672295 sudo[2215]: pam_unix(sudo:session): session closed for user root
Apr 21 12:01:45.688906 sshd[2212]: pam_unix(sshd:session): session closed for user core
Apr 21 12:01:45.692206 systemd[1]: sshd@4-10.0.0.17:22-20.229.252.112:54712.service: Deactivated successfully.
Apr 21 12:01:45.694328 systemd[1]: session-7.scope: Deactivated successfully.
Apr 21 12:01:45.696035 systemd-logind[1709]: Session 7 logged out. Waiting for processes to exit.
Apr 21 12:01:45.697071 systemd-logind[1709]: Removed session 7.
Apr 21 12:01:45.715671 systemd[1]: Started sshd@5-10.0.0.17:22-20.229.252.112:54718.service - OpenSSH per-connection server daemon (20.229.252.112:54718).
Apr 21 12:01:45.837810 sshd[2220]: Accepted publickey for core from 20.229.252.112 port 54718 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:01:45.839325 sshd[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:01:45.844330 systemd-logind[1709]: New session 8 of user core.
Apr 21 12:01:45.849983 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 21 12:01:45.931887 sudo[2224]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 21 12:01:45.932264 sudo[2224]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 12:01:45.936214 sudo[2224]: pam_unix(sudo:session): session closed for user root
Apr 21 12:01:45.941409 sudo[2223]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 21 12:01:45.941948 sudo[2223]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 12:01:45.954152 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 21 12:01:45.957562 auditctl[2227]: No rules
Apr 21 12:01:45.957956 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 21 12:01:45.958160 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 21 12:01:45.961043 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 12:01:45.994507 augenrules[2245]: No rules
Apr 21 12:01:45.996042 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 12:01:45.997215 sudo[2223]: pam_unix(sudo:session): session closed for user root
Apr 21 12:01:46.013919 sshd[2220]: pam_unix(sshd:session): session closed for user core
Apr 21 12:01:46.017225 systemd[1]: sshd@5-10.0.0.17:22-20.229.252.112:54718.service: Deactivated successfully.
Apr 21 12:01:46.019277 systemd[1]: session-8.scope: Deactivated successfully.
Apr 21 12:01:46.020908 systemd-logind[1709]: Session 8 logged out. Waiting for processes to exit.
Apr 21 12:01:46.021927 systemd-logind[1709]: Removed session 8.
Apr 21 12:01:46.039672 systemd[1]: Started sshd@6-10.0.0.17:22-20.229.252.112:54732.service - OpenSSH per-connection server daemon (20.229.252.112:54732).
Apr 21 12:01:46.159643 sshd[2253]: Accepted publickey for core from 20.229.252.112 port 54732 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:01:46.161086 sshd[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:01:46.165601 systemd-logind[1709]: New session 9 of user core.
Apr 21 12:01:46.172019 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 21 12:01:46.257896 sudo[2256]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 21 12:01:46.258435 sudo[2256]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 12:01:48.215084 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Apr 21 12:01:49.580151 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 21 12:01:49.580288 (dockerd)[2273]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 21 12:01:51.730573 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 21 12:01:51.739063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 12:01:53.760544 update_engine[1710]: I20260421 12:01:53.759886 1710 update_attempter.cc:509] Updating boot flags...
Apr 21 12:01:53.972755 dockerd[2273]: time="2026-04-21T12:01:53.972696109Z" level=info msg="Starting up"
Apr 21 12:01:54.300854 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2296)
Apr 21 12:01:54.738867 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2299)
Apr 21 12:01:54.857328 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2299)
Apr 21 12:01:58.016398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 12:01:58.031087 (kubelet)[2393]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 12:01:58.070982 kubelet[2393]: E0421 12:01:58.070924 2393 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 12:01:58.073404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 12:01:58.073630 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 12:02:00.305421 dockerd[2273]: time="2026-04-21T12:02:00.305143312Z" level=info msg="Loading containers: start."
Apr 21 12:02:00.813851 kernel: Initializing XFRM netlink socket
Apr 21 12:02:00.981423 systemd-networkd[1353]: docker0: Link UP
Apr 21 12:02:01.170220 dockerd[2273]: time="2026-04-21T12:02:01.170089712Z" level=info msg="Loading containers: done."
Apr 21 12:02:01.562601 dockerd[2273]: time="2026-04-21T12:02:01.562542200Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 21 12:02:01.563410 dockerd[2273]: time="2026-04-21T12:02:01.562684802Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 21 12:02:01.563410 dockerd[2273]: time="2026-04-21T12:02:01.562857405Z" level=info msg="Daemon has completed initialization"
Apr 21 12:02:01.633771 dockerd[2273]: time="2026-04-21T12:02:01.633083456Z" level=info msg="API listen on /run/docker.sock"
Apr 21 12:02:01.633517 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 12:02:02.260853 containerd[1737]: time="2026-04-21T12:02:02.260298723Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\""
Apr 21 12:02:03.085179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1782169697.mount: Deactivated successfully.
Apr 21 12:02:05.470362 containerd[1737]: time="2026-04-21T12:02:05.470305728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:05.473894 containerd[1737]: time="2026-04-21T12:02:05.473848192Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27100522"
Apr 21 12:02:05.479023 containerd[1737]: time="2026-04-21T12:02:05.478967384Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:05.484446 containerd[1737]: time="2026-04-21T12:02:05.484392182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:05.485695 containerd[1737]: time="2026-04-21T12:02:05.485481501Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 3.225137777s"
Apr 21 12:02:05.485695 containerd[1737]: time="2026-04-21T12:02:05.485525302Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\""
Apr 21 12:02:05.486288 containerd[1737]: time="2026-04-21T12:02:05.486265816Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\""
Apr 21 12:02:08.230435 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 21 12:02:08.239062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 12:02:08.408998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 12:02:08.418564 (kubelet)[2589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 12:02:08.483649 kubelet[2589]: E0421 12:02:08.483137 2589 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 12:02:08.487440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 12:02:08.487641 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 12:02:12.559052 containerd[1737]: time="2026-04-21T12:02:12.558987237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:12.565204 containerd[1737]: time="2026-04-21T12:02:12.565148852Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252746"
Apr 21 12:02:12.569858 containerd[1737]: time="2026-04-21T12:02:12.569780538Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:12.576125 containerd[1737]: time="2026-04-21T12:02:12.576086956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:12.577561 containerd[1737]: time="2026-04-21T12:02:12.577181176Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 7.090728257s"
Apr 21 12:02:12.577561 containerd[1737]: time="2026-04-21T12:02:12.577225077Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\""
Apr 21 12:02:12.578162 containerd[1737]: time="2026-04-21T12:02:12.578134194Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 21 12:02:18.107106 containerd[1737]: time="2026-04-21T12:02:18.107045660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:18.157317 containerd[1737]: time="2026-04-21T12:02:18.157253298Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810899"
Apr 21 12:02:18.161458 containerd[1737]: time="2026-04-21T12:02:18.161384775Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:18.211621 containerd[1737]: time="2026-04-21T12:02:18.211364908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 12:02:18.212813 containerd[1737]: time="2026-04-21T12:02:18.212627432Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 5.634458637s"
Apr 21 12:02:18.212813 containerd[1737]: time="2026-04-21T12:02:18.212670933Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\""
Apr 21 12:02:18.213544 containerd[1737]: time="2026-04-21T12:02:18.213415647Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 21 12:02:18.730521 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 21 12:02:18.736094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 12:02:21.830985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 12:02:21.839245 (kubelet)[2608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 12:02:21.876491 kubelet[2608]: E0421 12:02:21.876427 2608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 12:02:21.878917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 12:02:21.879141 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 12:02:23.876559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2159612235.mount: Deactivated successfully. Apr 21 12:02:24.299445 containerd[1737]: time="2026-04-21T12:02:24.299382359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:02:24.306309 containerd[1737]: time="2026-04-21T12:02:24.306230279Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972962" Apr 21 12:02:24.311442 containerd[1737]: time="2026-04-21T12:02:24.311364469Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:02:24.316151 containerd[1737]: time="2026-04-21T12:02:24.316079751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:02:24.316838 containerd[1737]: time="2026-04-21T12:02:24.316785464Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id 
\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 6.103336616s" Apr 21 12:02:24.316949 containerd[1737]: time="2026-04-21T12:02:24.316846165Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 21 12:02:24.317674 containerd[1737]: time="2026-04-21T12:02:24.317451875Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 21 12:02:25.003656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3719355417.mount: Deactivated successfully. Apr 21 12:02:30.936911 containerd[1737]: time="2026-04-21T12:02:30.936856371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:02:30.946468 containerd[1737]: time="2026-04-21T12:02:30.946314741Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Apr 21 12:02:30.954967 containerd[1737]: time="2026-04-21T12:02:30.954905595Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:02:30.965224 containerd[1737]: time="2026-04-21T12:02:30.965067578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:02:30.966533 containerd[1737]: time="2026-04-21T12:02:30.966121197Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 6.64863222s" Apr 21 12:02:30.966533 containerd[1737]: time="2026-04-21T12:02:30.966164897Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 21 12:02:30.967216 containerd[1737]: time="2026-04-21T12:02:30.967187916Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 21 12:02:31.677776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1136230728.mount: Deactivated successfully. Apr 21 12:02:31.853043 containerd[1737]: time="2026-04-21T12:02:31.852983321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:02:31.919012 containerd[1737]: time="2026-04-21T12:02:31.918922405Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Apr 21 12:02:31.928126 containerd[1737]: time="2026-04-21T12:02:31.927986168Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:02:31.975330 containerd[1737]: time="2026-04-21T12:02:31.975256717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:02:31.976977 containerd[1737]: time="2026-04-21T12:02:31.976202034Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest 
\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.008977118s" Apr 21 12:02:31.976977 containerd[1737]: time="2026-04-21T12:02:31.976246035Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 21 12:02:31.976977 containerd[1737]: time="2026-04-21T12:02:31.976734643Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 21 12:02:31.980214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 21 12:02:31.985070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 12:02:32.103198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 12:02:32.107975 (kubelet)[2687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 12:02:32.730762 kubelet[2687]: E0421 12:02:32.730662 2687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 12:02:32.733139 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 12:02:32.733362 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 12:02:38.683555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3433066199.mount: Deactivated successfully. Apr 21 12:02:42.980335 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 21 12:02:42.986387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 12:02:43.329287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 12:02:43.333966 (kubelet)[2718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 12:02:43.370012 kubelet[2718]: E0421 12:02:43.369918 2718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 12:02:43.372353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 12:02:43.372567 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 12:02:45.117890 containerd[1737]: time="2026-04-21T12:02:45.117817521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:02:45.121174 containerd[1737]: time="2026-04-21T12:02:45.121106082Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874825" Apr 21 12:02:45.125174 containerd[1737]: time="2026-04-21T12:02:45.125118555Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:02:45.130705 containerd[1737]: time="2026-04-21T12:02:45.130655957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:02:45.131955 containerd[1737]: time="2026-04-21T12:02:45.131731577Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest 
\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 13.154967633s" Apr 21 12:02:45.131955 containerd[1737]: time="2026-04-21T12:02:45.131773078Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 21 12:02:48.952292 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 12:02:48.958128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 12:02:48.992225 systemd[1]: Reloading requested from client PID 2798 ('systemctl') (unit session-9.scope)... Apr 21 12:02:48.992243 systemd[1]: Reloading... Apr 21 12:02:49.090861 zram_generator::config[2835]: No configuration found. Apr 21 12:02:49.228983 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 12:02:49.328374 systemd[1]: Reloading finished in 335 ms. Apr 21 12:02:49.373214 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 21 12:02:49.373356 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 21 12:02:49.373650 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 12:02:49.380251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 12:02:53.385663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 12:02:53.391732 (kubelet)[2905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 12:02:53.430608 kubelet[2905]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 21 12:02:53.430608 kubelet[2905]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 12:02:53.431071 kubelet[2905]: I0421 12:02:53.430661 2905 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 12:02:54.157026 kubelet[2905]: I0421 12:02:54.156979 2905 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 21 12:02:54.157026 kubelet[2905]: I0421 12:02:54.157010 2905 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 12:02:54.159598 kubelet[2905]: I0421 12:02:54.159562 2905 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 12:02:54.159598 kubelet[2905]: I0421 12:02:54.159599 2905 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 21 12:02:54.159916 kubelet[2905]: I0421 12:02:54.159897 2905 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 12:02:55.814745 kubelet[2905]: E0421 12:02:55.814130 2905 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 12:02:55.818612 kubelet[2905]: I0421 12:02:55.818179 2905 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 12:02:55.822388 kubelet[2905]: E0421 12:02:55.822348 2905 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 12:02:55.822515 kubelet[2905]: I0421 12:02:55.822425 2905 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 21 12:02:55.826899 kubelet[2905]: I0421 12:02:55.826872 2905 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 21 12:02:55.828451 kubelet[2905]: I0421 12:02:55.828417 2905 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 12:02:55.828641 kubelet[2905]: I0421 12:02:55.828451 2905 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.7-a-a89817d5a7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 12:02:55.828786 kubelet[2905]: I0421 12:02:55.828642 2905 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 
12:02:55.828786 kubelet[2905]: I0421 12:02:55.828657 2905 container_manager_linux.go:306] "Creating device plugin manager" Apr 21 12:02:55.828786 kubelet[2905]: I0421 12:02:55.828777 2905 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 12:02:55.839786 kubelet[2905]: I0421 12:02:55.839745 2905 state_mem.go:36] "Initialized new in-memory state store" Apr 21 12:02:55.840021 kubelet[2905]: I0421 12:02:55.839993 2905 kubelet.go:475] "Attempting to sync node with API server" Apr 21 12:02:55.840086 kubelet[2905]: I0421 12:02:55.840029 2905 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 12:02:55.840086 kubelet[2905]: I0421 12:02:55.840061 2905 kubelet.go:387] "Adding apiserver pod source" Apr 21 12:02:55.840086 kubelet[2905]: I0421 12:02:55.840080 2905 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 12:02:55.843642 kubelet[2905]: E0421 12:02:55.843123 2905 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 12:02:55.843642 kubelet[2905]: E0421 12:02:55.843253 2905 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.7-a-a89817d5a7&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 12:02:55.843834 kubelet[2905]: I0421 12:02:55.843799 2905 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 12:02:55.845184 kubelet[2905]: I0421 12:02:55.844388 2905 kubelet.go:940] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 12:02:55.845184 kubelet[2905]: I0421 12:02:55.844435 2905 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 12:02:55.845184 kubelet[2905]: W0421 12:02:55.844493 2905 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 21 12:02:55.847692 kubelet[2905]: I0421 12:02:55.847669 2905 server.go:1262] "Started kubelet" Apr 21 12:02:55.849578 kubelet[2905]: I0421 12:02:55.848977 2905 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 12:02:55.849578 kubelet[2905]: I0421 12:02:55.849031 2905 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 21 12:02:55.849578 kubelet[2905]: I0421 12:02:55.849374 2905 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 12:02:55.849578 kubelet[2905]: I0421 12:02:55.849447 2905 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 12:02:55.849578 kubelet[2905]: I0421 12:02:55.849447 2905 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 12:02:55.851623 kubelet[2905]: I0421 12:02:55.851589 2905 server.go:310] "Adding debug handlers to kubelet server" Apr 21 12:02:55.861614 kubelet[2905]: I0421 12:02:55.861041 2905 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 21 12:02:55.861614 kubelet[2905]: I0421 12:02:55.861210 2905 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 12:02:55.861614 kubelet[2905]: E0421 12:02:55.861237 2905 kubelet_node_status.go:404] "Error getting the current node from lister" err="node 
\"ci-4081.3.7-a-a89817d5a7\" not found" Apr 21 12:02:55.864066 kubelet[2905]: E0421 12:02:55.864022 2905 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-a89817d5a7?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="200ms" Apr 21 12:02:55.864542 kubelet[2905]: I0421 12:02:55.864526 2905 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 12:02:55.864707 kubelet[2905]: I0421 12:02:55.864675 2905 reconciler.go:29] "Reconciler: start to sync state" Apr 21 12:02:55.865854 kubelet[2905]: E0421 12:02:55.864111 2905 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.17:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.7-a-a89817d5a7.18a85d9b97e988eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.7-a-a89817d5a7,UID:ci-4081.3.7-a-a89817d5a7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.7-a-a89817d5a7,},FirstTimestamp:2026-04-21 12:02:55.847639275 +0000 UTC m=+2.452631484,LastTimestamp:2026-04-21 12:02:55.847639275 +0000 UTC m=+2.452631484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.7-a-a89817d5a7,}" Apr 21 12:02:55.866916 kubelet[2905]: I0421 12:02:55.866389 2905 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 12:02:55.868904 kubelet[2905]: E0421 12:02:55.868795 2905 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 12:02:55.868981 kubelet[2905]: I0421 12:02:55.868962 2905 factory.go:223] Registration of the containerd container factory successfully Apr 21 12:02:55.869035 kubelet[2905]: I0421 12:02:55.868985 2905 factory.go:223] Registration of the systemd container factory successfully Apr 21 12:02:55.879748 kubelet[2905]: E0421 12:02:55.879713 2905 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 12:02:55.909758 kubelet[2905]: I0421 12:02:55.909144 2905 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 12:02:55.909758 kubelet[2905]: I0421 12:02:55.909163 2905 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 12:02:55.909758 kubelet[2905]: I0421 12:02:55.909180 2905 state_mem.go:36] "Initialized new in-memory state store" Apr 21 12:02:55.916455 kubelet[2905]: I0421 12:02:55.916425 2905 policy_none.go:49] "None policy: Start" Apr 21 12:02:55.916455 kubelet[2905]: I0421 12:02:55.916461 2905 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 12:02:55.916618 kubelet[2905]: I0421 12:02:55.916477 2905 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 12:02:55.921902 kubelet[2905]: I0421 12:02:55.921874 2905 policy_none.go:47] "Start" Apr 21 12:02:55.926324 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 21 12:02:55.941533 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 21 12:02:55.947732 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 21 12:02:55.948246 kubelet[2905]: I0421 12:02:55.948211 2905 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 21 12:02:55.949772 kubelet[2905]: I0421 12:02:55.949744 2905 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 21 12:02:55.949772 kubelet[2905]: I0421 12:02:55.949772 2905 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 21 12:02:55.949915 kubelet[2905]: I0421 12:02:55.949801 2905 kubelet.go:2428] "Starting kubelet main sync loop" Apr 21 12:02:55.949915 kubelet[2905]: E0421 12:02:55.949864 2905 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 12:02:55.954317 kubelet[2905]: E0421 12:02:55.954172 2905 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 12:02:55.956327 kubelet[2905]: E0421 12:02:55.956006 2905 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 12:02:55.956498 kubelet[2905]: I0421 12:02:55.956478 2905 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 12:02:55.956580 kubelet[2905]: I0421 12:02:55.956498 2905 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 12:02:55.956815 kubelet[2905]: I0421 12:02:55.956740 2905 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 12:02:55.958593 kubelet[2905]: E0421 12:02:55.958303 2905 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 21 12:02:55.958593 kubelet[2905]: E0421 12:02:55.958344 2905 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.7-a-a89817d5a7\" not found" Apr 21 12:02:56.065120 kubelet[2905]: I0421 12:02:56.065001 2905 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.065505 kubelet[2905]: I0421 12:02:56.065370 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89fd8f600e3b16a8917ec5a6213ef796-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.7-a-a89817d5a7\" (UID: \"89fd8f600e3b16a8917ec5a6213ef796\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.065505 kubelet[2905]: I0421 12:02:56.065402 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89fd8f600e3b16a8917ec5a6213ef796-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.7-a-a89817d5a7\" (UID: \"89fd8f600e3b16a8917ec5a6213ef796\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.065505 kubelet[2905]: I0421 12:02:56.065442 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df95b5ed73a9070b7d11ea4510c8d36a-ca-certs\") pod \"kube-apiserver-ci-4081.3.7-a-a89817d5a7\" (UID: \"df95b5ed73a9070b7d11ea4510c8d36a\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.065505 kubelet[2905]: I0421 12:02:56.065460 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df95b5ed73a9070b7d11ea4510c8d36a-k8s-certs\") pod 
\"kube-apiserver-ci-4081.3.7-a-a89817d5a7\" (UID: \"df95b5ed73a9070b7d11ea4510c8d36a\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.066189 kubelet[2905]: I0421 12:02:56.065481 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df95b5ed73a9070b7d11ea4510c8d36a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.7-a-a89817d5a7\" (UID: \"df95b5ed73a9070b7d11ea4510c8d36a\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.066189 kubelet[2905]: I0421 12:02:56.065765 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89fd8f600e3b16a8917ec5a6213ef796-ca-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-a89817d5a7\" (UID: \"89fd8f600e3b16a8917ec5a6213ef796\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.066189 kubelet[2905]: I0421 12:02:56.065792 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/89fd8f600e3b16a8917ec5a6213ef796-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.7-a-a89817d5a7\" (UID: \"89fd8f600e3b16a8917ec5a6213ef796\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.066189 kubelet[2905]: I0421 12:02:56.065854 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89fd8f600e3b16a8917ec5a6213ef796-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-a89817d5a7\" (UID: \"89fd8f600e3b16a8917ec5a6213ef796\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.066189 kubelet[2905]: E0421 12:02:56.066061 2905 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-a89817d5a7?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="400ms" Apr 21 12:02:56.066443 kubelet[2905]: E0421 12:02:56.066127 2905 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.073687 systemd[1]: Created slice kubepods-burstable-poddf95b5ed73a9070b7d11ea4510c8d36a.slice - libcontainer container kubepods-burstable-poddf95b5ed73a9070b7d11ea4510c8d36a.slice. Apr 21 12:02:56.078467 kubelet[2905]: E0421 12:02:56.078442 2905 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.083493 systemd[1]: Created slice kubepods-burstable-pod89fd8f600e3b16a8917ec5a6213ef796.slice - libcontainer container kubepods-burstable-pod89fd8f600e3b16a8917ec5a6213ef796.slice. Apr 21 12:02:56.088051 kubelet[2905]: E0421 12:02:56.088026 2905 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.092052 systemd[1]: Created slice kubepods-burstable-poda8dd231f8042790dd37827eda534d57b.slice - libcontainer container kubepods-burstable-poda8dd231f8042790dd37827eda534d57b.slice. 
Apr 21 12:02:56.093739 kubelet[2905]: E0421 12:02:56.093716 2905 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.166258 kubelet[2905]: I0421 12:02:56.166071 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a8dd231f8042790dd37827eda534d57b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.7-a-a89817d5a7\" (UID: \"a8dd231f8042790dd37827eda534d57b\") " pod="kube-system/kube-scheduler-ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.180694 kubelet[2905]: E0421 12:02:56.180592 2905 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.17:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.7-a-a89817d5a7.18a85d9b97e988eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.7-a-a89817d5a7,UID:ci-4081.3.7-a-a89817d5a7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.7-a-a89817d5a7,},FirstTimestamp:2026-04-21 12:02:55.847639275 +0000 UTC m=+2.452631484,LastTimestamp:2026-04-21 12:02:55.847639275 +0000 UTC m=+2.452631484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.7-a-a89817d5a7,}" Apr 21 12:02:56.267995 kubelet[2905]: I0421 12:02:56.267959 2905 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.268402 kubelet[2905]: E0421 12:02:56.268371 2905 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection 
refused" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.386563 containerd[1737]: time="2026-04-21T12:02:56.386440750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.7-a-a89817d5a7,Uid:df95b5ed73a9070b7d11ea4510c8d36a,Namespace:kube-system,Attempt:0,}" Apr 21 12:02:56.394718 containerd[1737]: time="2026-04-21T12:02:56.394679596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.7-a-a89817d5a7,Uid:89fd8f600e3b16a8917ec5a6213ef796,Namespace:kube-system,Attempt:0,}" Apr 21 12:02:56.400873 containerd[1737]: time="2026-04-21T12:02:56.400822306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.7-a-a89817d5a7,Uid:a8dd231f8042790dd37827eda534d57b,Namespace:kube-system,Attempt:0,}" Apr 21 12:02:56.467302 kubelet[2905]: E0421 12:02:56.467066 2905 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-a89817d5a7?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="800ms" Apr 21 12:02:56.647385 kubelet[2905]: E0421 12:02:56.647255 2905 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 12:02:56.670341 kubelet[2905]: I0421 12:02:56.670312 2905 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.670662 kubelet[2905]: E0421 12:02:56.670631 2905 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:56.701192 kubelet[2905]: 
E0421 12:02:56.701153 2905 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 12:02:56.906486 kubelet[2905]: E0421 12:02:56.905055 2905 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.7-a-a89817d5a7&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 12:02:56.906486 kubelet[2905]: E0421 12:02:56.906260 2905 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 12:02:56.931132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3384042380.mount: Deactivated successfully. 
Apr 21 12:02:56.967158 containerd[1737]: time="2026-04-21T12:02:56.967106468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 12:02:56.970931 containerd[1737]: time="2026-04-21T12:02:56.970886836Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 12:02:56.974533 containerd[1737]: time="2026-04-21T12:02:56.974484100Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Apr 21 12:02:57.009386 containerd[1737]: time="2026-04-21T12:02:57.009319519Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 12:02:57.054388 containerd[1737]: time="2026-04-21T12:02:57.054322218Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 12:02:57.058094 containerd[1737]: time="2026-04-21T12:02:57.057993384Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 12:02:57.109727 containerd[1737]: time="2026-04-21T12:02:57.109608301Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 12:02:57.166569 containerd[1737]: time="2026-04-21T12:02:57.166391510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 12:02:57.167957 
containerd[1737]: time="2026-04-21T12:02:57.167697233Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 772.891835ms" Apr 21 12:02:57.169404 containerd[1737]: time="2026-04-21T12:02:57.169368563Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 768.461056ms" Apr 21 12:02:57.170463 containerd[1737]: time="2026-04-21T12:02:57.170301279Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 783.771528ms" Apr 21 12:02:57.268671 kubelet[2905]: E0421 12:02:57.268622 2905 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-a89817d5a7?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="1.6s" Apr 21 12:02:57.512786 kubelet[2905]: I0421 12:02:57.473176 2905 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:57.512786 kubelet[2905]: E0421 12:02:57.473930 2905 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:57.934317 
kubelet[2905]: E0421 12:02:57.934189 2905 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 12:02:58.642999 containerd[1737]: time="2026-04-21T12:02:58.642905248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:02:58.643780 containerd[1737]: time="2026-04-21T12:02:58.643092451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:02:58.643780 containerd[1737]: time="2026-04-21T12:02:58.643622460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:02:58.644038 containerd[1737]: time="2026-04-21T12:02:58.643977967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:02:58.648104 containerd[1737]: time="2026-04-21T12:02:58.646992720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:02:58.648104 containerd[1737]: time="2026-04-21T12:02:58.647055021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:02:58.648104 containerd[1737]: time="2026-04-21T12:02:58.647072822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:02:58.648104 containerd[1737]: time="2026-04-21T12:02:58.647156923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:02:58.657787 containerd[1737]: time="2026-04-21T12:02:58.657079899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:02:58.657787 containerd[1737]: time="2026-04-21T12:02:58.657138300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:02:58.657787 containerd[1737]: time="2026-04-21T12:02:58.657155801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:02:58.657787 containerd[1737]: time="2026-04-21T12:02:58.657244402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:02:58.689026 systemd[1]: Started cri-containerd-369c0d11cb05585d7cee0e59dde1b2fc1df9dc33d0d7d7166781a68576dafdaa.scope - libcontainer container 369c0d11cb05585d7cee0e59dde1b2fc1df9dc33d0d7d7166781a68576dafdaa. Apr 21 12:02:58.721384 systemd[1]: Started cri-containerd-658666ce9b4a7be320e728b147399c926726cae6d2f6d44d31a9ec82828a71ea.scope - libcontainer container 658666ce9b4a7be320e728b147399c926726cae6d2f6d44d31a9ec82828a71ea. Apr 21 12:02:58.724813 systemd[1]: Started cri-containerd-84b5b611aec062f1e24a88a76cc8542116591e08d88f07c46b0a5dd317d92797.scope - libcontainer container 84b5b611aec062f1e24a88a76cc8542116591e08d88f07c46b0a5dd317d92797. 
Apr 21 12:02:58.768007 containerd[1737]: time="2026-04-21T12:02:58.767580663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.7-a-a89817d5a7,Uid:89fd8f600e3b16a8917ec5a6213ef796,Namespace:kube-system,Attempt:0,} returns sandbox id \"369c0d11cb05585d7cee0e59dde1b2fc1df9dc33d0d7d7166781a68576dafdaa\"" Apr 21 12:02:58.780844 containerd[1737]: time="2026-04-21T12:02:58.780726497Z" level=info msg="CreateContainer within sandbox \"369c0d11cb05585d7cee0e59dde1b2fc1df9dc33d0d7d7166781a68576dafdaa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 12:02:58.795415 containerd[1737]: time="2026-04-21T12:02:58.795359157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.7-a-a89817d5a7,Uid:df95b5ed73a9070b7d11ea4510c8d36a,Namespace:kube-system,Attempt:0,} returns sandbox id \"658666ce9b4a7be320e728b147399c926726cae6d2f6d44d31a9ec82828a71ea\"" Apr 21 12:02:58.808259 containerd[1737]: time="2026-04-21T12:02:58.808160784Z" level=info msg="CreateContainer within sandbox \"658666ce9b4a7be320e728b147399c926726cae6d2f6d44d31a9ec82828a71ea\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 12:02:58.822208 containerd[1737]: time="2026-04-21T12:02:58.822120832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.7-a-a89817d5a7,Uid:a8dd231f8042790dd37827eda534d57b,Namespace:kube-system,Attempt:0,} returns sandbox id \"84b5b611aec062f1e24a88a76cc8542116591e08d88f07c46b0a5dd317d92797\"" Apr 21 12:02:58.824594 containerd[1737]: time="2026-04-21T12:02:58.824464874Z" level=info msg="CreateContainer within sandbox \"369c0d11cb05585d7cee0e59dde1b2fc1df9dc33d0d7d7166781a68576dafdaa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"882b9b3f9a537e5444281cd02bc461864711f4e783b0013aaf09ec8a47f16f76\"" Apr 21 12:02:58.825973 containerd[1737]: time="2026-04-21T12:02:58.825946700Z" level=info 
msg="StartContainer for \"882b9b3f9a537e5444281cd02bc461864711f4e783b0013aaf09ec8a47f16f76\"" Apr 21 12:02:58.833720 containerd[1737]: time="2026-04-21T12:02:58.833525535Z" level=info msg="CreateContainer within sandbox \"84b5b611aec062f1e24a88a76cc8542116591e08d88f07c46b0a5dd317d92797\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 12:02:58.857084 systemd[1]: Started cri-containerd-882b9b3f9a537e5444281cd02bc461864711f4e783b0013aaf09ec8a47f16f76.scope - libcontainer container 882b9b3f9a537e5444281cd02bc461864711f4e783b0013aaf09ec8a47f16f76. Apr 21 12:02:58.869643 kubelet[2905]: E0421 12:02:58.869549 2905 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-a89817d5a7?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="3.2s" Apr 21 12:02:58.906944 kubelet[2905]: E0421 12:02:58.906799 2905 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.7-a-a89817d5a7&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 12:02:58.958094 containerd[1737]: time="2026-04-21T12:02:58.957749642Z" level=info msg="StartContainer for \"882b9b3f9a537e5444281cd02bc461864711f4e783b0013aaf09ec8a47f16f76\" returns successfully" Apr 21 12:02:58.958094 containerd[1737]: time="2026-04-21T12:02:58.957991347Z" level=info msg="CreateContainer within sandbox \"658666ce9b4a7be320e728b147399c926726cae6d2f6d44d31a9ec82828a71ea\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3c6df9cfb1ea333939fdfb89413d44844d327f39a84fd2feb96b3bbb238e986e\"" Apr 21 12:02:58.959206 containerd[1737]: time="2026-04-21T12:02:58.959145267Z" level=info msg="StartContainer for 
\"3c6df9cfb1ea333939fdfb89413d44844d327f39a84fd2feb96b3bbb238e986e\"" Apr 21 12:02:59.007378 systemd[1]: Started cri-containerd-3c6df9cfb1ea333939fdfb89413d44844d327f39a84fd2feb96b3bbb238e986e.scope - libcontainer container 3c6df9cfb1ea333939fdfb89413d44844d327f39a84fd2feb96b3bbb238e986e. Apr 21 12:02:59.025403 kubelet[2905]: E0421 12:02:59.025367 2905 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:59.077665 kubelet[2905]: I0421 12:02:59.077638 2905 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:59.110619 kubelet[2905]: E0421 12:02:59.078796 2905 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:02:59.117850 containerd[1737]: time="2026-04-21T12:02:59.116858070Z" level=info msg="CreateContainer within sandbox \"84b5b611aec062f1e24a88a76cc8542116591e08d88f07c46b0a5dd317d92797\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8100cb3956d26520d41b7168b09b52b10a4862c108aeec340c80fde3085c136f\"" Apr 21 12:02:59.117850 containerd[1737]: time="2026-04-21T12:02:59.116979372Z" level=info msg="StartContainer for \"3c6df9cfb1ea333939fdfb89413d44844d327f39a84fd2feb96b3bbb238e986e\" returns successfully" Apr 21 12:02:59.119098 containerd[1737]: time="2026-04-21T12:02:59.118944007Z" level=info msg="StartContainer for \"8100cb3956d26520d41b7168b09b52b10a4862c108aeec340c80fde3085c136f\"" Apr 21 12:02:59.160035 systemd[1]: Started cri-containerd-8100cb3956d26520d41b7168b09b52b10a4862c108aeec340c80fde3085c136f.scope - libcontainer container 8100cb3956d26520d41b7168b09b52b10a4862c108aeec340c80fde3085c136f. 
Apr 21 12:02:59.303797 containerd[1737]: time="2026-04-21T12:02:59.302439668Z" level=info msg="StartContainer for \"8100cb3956d26520d41b7168b09b52b10a4862c108aeec340c80fde3085c136f\" returns successfully" Apr 21 12:03:00.028408 kubelet[2905]: E0421 12:03:00.027997 2905 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:00.030300 kubelet[2905]: E0421 12:03:00.030125 2905 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:01.030720 kubelet[2905]: E0421 12:03:01.030687 2905 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:01.031186 kubelet[2905]: E0421 12:03:01.031056 2905 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:01.094725 kubelet[2905]: E0421 12:03:01.094681 2905 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.7-a-a89817d5a7" not found Apr 21 12:03:01.191482 kubelet[2905]: E0421 12:03:01.191446 2905 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:01.574083 kubelet[2905]: E0421 12:03:01.574023 2905 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.7-a-a89817d5a7" not found Apr 21 12:03:02.033128 kubelet[2905]: E0421 12:03:02.032922 2905 kubelet.go:3216] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:02.033128 kubelet[2905]: E0421 12:03:02.032993 2905 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:02.076033 kubelet[2905]: E0421 12:03:02.075978 2905 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.7-a-a89817d5a7\" not found" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:02.160869 kubelet[2905]: E0421 12:03:02.160814 2905 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.7-a-a89817d5a7" not found Apr 21 12:03:02.281750 kubelet[2905]: I0421 12:03:02.281703 2905 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:02.299810 kubelet[2905]: I0421 12:03:02.299694 2905 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:02.299810 kubelet[2905]: E0421 12:03:02.299733 2905 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081.3.7-a-a89817d5a7\": node \"ci-4081.3.7-a-a89817d5a7\" not found" Apr 21 12:03:02.313450 kubelet[2905]: E0421 12:03:02.313175 2905 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" Apr 21 12:03:02.413616 kubelet[2905]: E0421 12:03:02.413566 2905 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" Apr 21 12:03:02.514513 kubelet[2905]: E0421 12:03:02.514450 2905 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" Apr 21 
12:03:02.615167 kubelet[2905]: E0421 12:03:02.615038 2905 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" Apr 21 12:03:02.715809 kubelet[2905]: E0421 12:03:02.715757 2905 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" Apr 21 12:03:02.815935 kubelet[2905]: E0421 12:03:02.815888 2905 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" Apr 21 12:03:02.916124 kubelet[2905]: E0421 12:03:02.916000 2905 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" Apr 21 12:03:03.017127 kubelet[2905]: E0421 12:03:03.017081 2905 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" Apr 21 12:03:03.034558 kubelet[2905]: E0421 12:03:03.034295 2905 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:03.034558 kubelet[2905]: E0421 12:03:03.034388 2905 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:03.117546 kubelet[2905]: E0421 12:03:03.117493 2905 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" Apr 21 12:03:03.211804 systemd[1]: Reloading requested from client PID 3187 ('systemctl') (unit session-9.scope)... Apr 21 12:03:03.211820 systemd[1]: Reloading... 
Apr 21 12:03:03.217924 kubelet[2905]: E0421 12:03:03.217858 2905 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" Apr 21 12:03:03.308265 zram_generator::config[3227]: No configuration found. Apr 21 12:03:03.318715 kubelet[2905]: E0421 12:03:03.318671 2905 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" Apr 21 12:03:03.419124 kubelet[2905]: E0421 12:03:03.419075 2905 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" Apr 21 12:03:03.436839 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 12:03:03.520268 kubelet[2905]: E0421 12:03:03.520215 2905 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-a89817d5a7\" not found" Apr 21 12:03:03.530479 systemd[1]: Reloading finished in 318 ms. Apr 21 12:03:03.575755 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 12:03:03.586391 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 12:03:03.586764 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 12:03:03.586918 systemd[1]: kubelet.service: Consumed 1.176s CPU time, 127.1M memory peak, 0B memory swap peak. Apr 21 12:03:03.594086 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 12:03:03.868983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 12:03:03.869381 (kubelet)[3294]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 12:03:03.917857 kubelet[3294]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Apr 21 12:03:03.917857 kubelet[3294]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 12:03:03.917857 kubelet[3294]: I0421 12:03:03.916633 3294 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 12:03:03.922084 kubelet[3294]: I0421 12:03:03.922059 3294 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 21 12:03:03.922275 kubelet[3294]: I0421 12:03:03.922202 3294 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 12:03:03.922275 kubelet[3294]: I0421 12:03:03.922230 3294 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 12:03:03.922275 kubelet[3294]: I0421 12:03:03.922235 3294 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 12:03:03.922750 kubelet[3294]: I0421 12:03:03.922514 3294 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 12:03:03.924356 kubelet[3294]: I0421 12:03:03.924340 3294 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 12:03:03.929909 kubelet[3294]: I0421 12:03:03.929882 3294 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 12:03:03.932903 kubelet[3294]: E0421 12:03:03.932817 3294 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 12:03:03.933049 kubelet[3294]: I0421 12:03:03.933038 3294 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. 
Falling back to using cgroupDriver from kubelet config." Apr 21 12:03:03.936995 kubelet[3294]: I0421 12:03:03.936969 3294 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 21 12:03:03.938660 kubelet[3294]: I0421 12:03:03.937310 3294 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 12:03:03.938660 kubelet[3294]: I0421 12:03:03.937343 3294 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.7-a-a89817d5a7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none
","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 12:03:03.938660 kubelet[3294]: I0421 12:03:03.937492 3294 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 12:03:03.938660 kubelet[3294]: I0421 12:03:03.937501 3294 container_manager_linux.go:306] "Creating device plugin manager" Apr 21 12:03:03.939111 kubelet[3294]: I0421 12:03:03.937525 3294 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 12:03:03.939111 kubelet[3294]: I0421 12:03:03.937704 3294 state_mem.go:36] "Initialized new in-memory state store" Apr 21 12:03:03.939111 kubelet[3294]: I0421 12:03:03.937858 3294 kubelet.go:475] "Attempting to sync node with API server" Apr 21 12:03:03.939111 kubelet[3294]: I0421 12:03:03.937871 3294 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 12:03:03.939111 kubelet[3294]: I0421 12:03:03.937895 3294 kubelet.go:387] "Adding apiserver pod source" Apr 21 12:03:03.939111 kubelet[3294]: I0421 12:03:03.937908 3294 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 12:03:03.942627 kubelet[3294]: I0421 12:03:03.942605 3294 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 12:03:03.943744 kubelet[3294]: I0421 12:03:03.943397 3294 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 12:03:03.944361 kubelet[3294]: I0421 12:03:03.944272 3294 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 12:03:03.952493 kubelet[3294]: I0421 12:03:03.952480 3294 server.go:1262] "Started kubelet" Apr 21 12:03:03.955034 kubelet[3294]: I0421 12:03:03.955019 3294 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 12:03:03.965944 kubelet[3294]: 
I0421 12:03:03.965912 3294 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 12:03:03.967907 kubelet[3294]: I0421 12:03:03.967889 3294 server.go:310] "Adding debug handlers to kubelet server" Apr 21 12:03:03.968797 kubelet[3294]: I0421 12:03:03.968761 3294 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 12:03:03.968961 kubelet[3294]: I0421 12:03:03.968945 3294 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 21 12:03:03.969284 kubelet[3294]: I0421 12:03:03.969268 3294 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 12:03:03.973801 kubelet[3294]: I0421 12:03:03.973777 3294 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 12:03:03.977565 kubelet[3294]: I0421 12:03:03.977544 3294 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 21 12:03:03.980947 kubelet[3294]: I0421 12:03:03.980928 3294 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 12:03:03.981184 kubelet[3294]: I0421 12:03:03.981170 3294 reconciler.go:29] "Reconciler: start to sync state" Apr 21 12:03:03.986100 kubelet[3294]: I0421 12:03:03.986077 3294 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 21 12:03:03.986457 kubelet[3294]: I0421 12:03:03.986427 3294 factory.go:223] Registration of the systemd container factory successfully Apr 21 12:03:03.986562 kubelet[3294]: I0421 12:03:03.986537 3294 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 12:03:03.989601 kubelet[3294]: I0421 12:03:03.989295 3294 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 21 12:03:03.989601 kubelet[3294]: I0421 12:03:03.989320 3294 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 21 12:03:03.989601 kubelet[3294]: I0421 12:03:03.989346 3294 kubelet.go:2428] "Starting kubelet main sync loop" Apr 21 12:03:03.989601 kubelet[3294]: E0421 12:03:03.989390 3294 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 12:03:03.995570 kubelet[3294]: I0421 12:03:03.995163 3294 factory.go:223] Registration of the containerd container factory successfully Apr 21 12:03:03.997160 kubelet[3294]: E0421 12:03:03.997134 3294 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 12:03:04.041091 kubelet[3294]: I0421 12:03:04.041069 3294 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 12:03:04.041344 kubelet[3294]: I0421 12:03:04.041328 3294 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 12:03:04.041489 kubelet[3294]: I0421 12:03:04.041478 3294 state_mem.go:36] "Initialized new in-memory state store" Apr 21 12:03:04.042002 kubelet[3294]: I0421 12:03:04.041981 3294 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 21 12:03:04.042128 kubelet[3294]: I0421 12:03:04.042088 3294 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 21 12:03:04.042187 kubelet[3294]: I0421 12:03:04.042179 3294 policy_none.go:49] "None policy: Start" Apr 21 12:03:04.042988 kubelet[3294]: I0421 12:03:04.042255 3294 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 12:03:04.042988 kubelet[3294]: I0421 12:03:04.042272 3294 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 12:03:04.042988 kubelet[3294]: I0421 12:03:04.042401 3294 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state 
checkpoint" Apr 21 12:03:04.042988 kubelet[3294]: I0421 12:03:04.042413 3294 policy_none.go:47] "Start" Apr 21 12:03:04.054241 kubelet[3294]: E0421 12:03:04.054215 3294 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 12:03:04.054406 kubelet[3294]: I0421 12:03:04.054386 3294 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 12:03:04.054468 kubelet[3294]: I0421 12:03:04.054404 3294 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 12:03:04.055956 kubelet[3294]: I0421 12:03:04.055091 3294 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 12:03:04.058404 kubelet[3294]: E0421 12:03:04.058141 3294 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 12:03:04.090944 kubelet[3294]: I0421 12:03:04.090911 3294 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:04.091214 kubelet[3294]: I0421 12:03:04.090915 3294 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:04.091351 kubelet[3294]: I0421 12:03:04.091077 3294 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:04.104136 kubelet[3294]: I0421 12:03:04.103945 3294 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 21 12:03:04.110155 kubelet[3294]: I0421 12:03:04.109739 3294 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 21 12:03:04.110155 kubelet[3294]: 
I0421 12:03:04.109936 3294 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 21 12:03:04.165802 kubelet[3294]: I0421 12:03:04.163273 3294 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:04.184013 kubelet[3294]: I0421 12:03:04.183359 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df95b5ed73a9070b7d11ea4510c8d36a-ca-certs\") pod \"kube-apiserver-ci-4081.3.7-a-a89817d5a7\" (UID: \"df95b5ed73a9070b7d11ea4510c8d36a\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:04.184013 kubelet[3294]: I0421 12:03:04.183401 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df95b5ed73a9070b7d11ea4510c8d36a-k8s-certs\") pod \"kube-apiserver-ci-4081.3.7-a-a89817d5a7\" (UID: \"df95b5ed73a9070b7d11ea4510c8d36a\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:04.184013 kubelet[3294]: I0421 12:03:04.183423 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89fd8f600e3b16a8917ec5a6213ef796-ca-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-a89817d5a7\" (UID: \"89fd8f600e3b16a8917ec5a6213ef796\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:04.184013 kubelet[3294]: I0421 12:03:04.183449 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89fd8f600e3b16a8917ec5a6213ef796-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-a89817d5a7\" (UID: \"89fd8f600e3b16a8917ec5a6213ef796\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:04.184013 kubelet[3294]: I0421 12:03:04.183471 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89fd8f600e3b16a8917ec5a6213ef796-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.7-a-a89817d5a7\" (UID: \"89fd8f600e3b16a8917ec5a6213ef796\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:04.184313 kubelet[3294]: I0421 12:03:04.183506 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df95b5ed73a9070b7d11ea4510c8d36a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.7-a-a89817d5a7\" (UID: \"df95b5ed73a9070b7d11ea4510c8d36a\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:04.184313 kubelet[3294]: I0421 12:03:04.183526 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/89fd8f600e3b16a8917ec5a6213ef796-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.7-a-a89817d5a7\" (UID: \"89fd8f600e3b16a8917ec5a6213ef796\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:04.184313 kubelet[3294]: I0421 12:03:04.183550 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89fd8f600e3b16a8917ec5a6213ef796-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.7-a-a89817d5a7\" (UID: \"89fd8f600e3b16a8917ec5a6213ef796\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:04.184313 kubelet[3294]: I0421 12:03:04.183570 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a8dd231f8042790dd37827eda534d57b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.7-a-a89817d5a7\" (UID: \"a8dd231f8042790dd37827eda534d57b\") " pod="kube-system/kube-scheduler-ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:04.191507 kubelet[3294]: I0421 12:03:04.190760 3294 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:04.191507 kubelet[3294]: I0421 12:03:04.190848 3294 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:06.708458 kubelet[3294]: I0421 12:03:04.938652 3294 apiserver.go:52] "Watching apiserver" Apr 21 12:03:06.708458 kubelet[3294]: I0421 12:03:04.982028 3294 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 12:03:06.708458 kubelet[3294]: I0421 12:03:05.023489 3294 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:06.708458 kubelet[3294]: I0421 12:03:05.024198 3294 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:06.708458 kubelet[3294]: I0421 12:03:05.043091 3294 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 21 12:03:06.708458 kubelet[3294]: E0421 12:03:05.043156 3294 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.7-a-a89817d5a7\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:06.708458 kubelet[3294]: I0421 12:03:05.043383 3294 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 21 12:03:06.708458 kubelet[3294]: E0421 12:03:05.043416 3294 kubelet.go:3222] "Failed 
creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.7-a-a89817d5a7\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.7-a-a89817d5a7" Apr 21 12:03:06.708458 kubelet[3294]: I0421 12:03:05.050392 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.7-a-a89817d5a7" podStartSLOduration=1.050372627 podStartE2EDuration="1.050372627s" podCreationTimestamp="2026-04-21 12:03:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 12:03:05.050045321 +0000 UTC m=+1.174977807" watchObservedRunningTime="2026-04-21 12:03:05.050372627 +0000 UTC m=+1.175305213" Apr 21 12:03:06.709309 kubelet[3294]: I0421 12:03:05.065067 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-a89817d5a7" podStartSLOduration=1.065046584 podStartE2EDuration="1.065046584s" podCreationTimestamp="2026-04-21 12:03:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 12:03:05.064755479 +0000 UTC m=+1.189687965" watchObservedRunningTime="2026-04-21 12:03:05.065046584 +0000 UTC m=+1.189979070" Apr 21 12:03:06.709309 kubelet[3294]: I0421 12:03:05.086405 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.7-a-a89817d5a7" podStartSLOduration=1.086385858 podStartE2EDuration="1.086385858s" podCreationTimestamp="2026-04-21 12:03:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 12:03:05.085181337 +0000 UTC m=+1.210113923" watchObservedRunningTime="2026-04-21 12:03:05.086385858 +0000 UTC m=+1.211318444" Apr 21 12:03:08.856125 kubelet[3294]: I0421 12:03:08.856079 3294 kuberuntime_manager.go:1828] "Updating runtime 
config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 12:03:08.857473 kubelet[3294]: I0421 12:03:08.857017 3294 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 12:03:08.857560 containerd[1737]: time="2026-04-21T12:03:08.856759753Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 12:03:09.870404 systemd[1]: Created slice kubepods-besteffort-pod034b8c23_01d2_41b9_a56e_f6118f64bda5.slice - libcontainer container kubepods-besteffort-pod034b8c23_01d2_41b9_a56e_f6118f64bda5.slice. Apr 21 12:03:09.920294 kubelet[3294]: I0421 12:03:09.920242 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m92r\" (UniqueName: \"kubernetes.io/projected/034b8c23-01d2-41b9-a56e-f6118f64bda5-kube-api-access-4m92r\") pod \"kube-proxy-jv2dk\" (UID: \"034b8c23-01d2-41b9-a56e-f6118f64bda5\") " pod="kube-system/kube-proxy-jv2dk" Apr 21 12:03:09.920294 kubelet[3294]: I0421 12:03:09.920303 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/034b8c23-01d2-41b9-a56e-f6118f64bda5-kube-proxy\") pod \"kube-proxy-jv2dk\" (UID: \"034b8c23-01d2-41b9-a56e-f6118f64bda5\") " pod="kube-system/kube-proxy-jv2dk" Apr 21 12:03:09.920860 kubelet[3294]: I0421 12:03:09.920328 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/034b8c23-01d2-41b9-a56e-f6118f64bda5-xtables-lock\") pod \"kube-proxy-jv2dk\" (UID: \"034b8c23-01d2-41b9-a56e-f6118f64bda5\") " pod="kube-system/kube-proxy-jv2dk" Apr 21 12:03:09.920860 kubelet[3294]: I0421 12:03:09.920344 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/034b8c23-01d2-41b9-a56e-f6118f64bda5-lib-modules\") pod \"kube-proxy-jv2dk\" (UID: \"034b8c23-01d2-41b9-a56e-f6118f64bda5\") " pod="kube-system/kube-proxy-jv2dk" Apr 21 12:03:10.152245 systemd[1]: Created slice kubepods-besteffort-pod6ee2fc87_463d_4584_90c5_40b14ed84a06.slice - libcontainer container kubepods-besteffort-pod6ee2fc87_463d_4584_90c5_40b14ed84a06.slice. Apr 21 12:03:10.190414 containerd[1737]: time="2026-04-21T12:03:10.190366911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jv2dk,Uid:034b8c23-01d2-41b9-a56e-f6118f64bda5,Namespace:kube-system,Attempt:0,}" Apr 21 12:03:10.221987 kubelet[3294]: I0421 12:03:10.221947 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqlqq\" (UniqueName: \"kubernetes.io/projected/6ee2fc87-463d-4584-90c5-40b14ed84a06-kube-api-access-vqlqq\") pod \"tigera-operator-5588576f44-mdbtf\" (UID: \"6ee2fc87-463d-4584-90c5-40b14ed84a06\") " pod="tigera-operator/tigera-operator-5588576f44-mdbtf" Apr 21 12:03:10.221987 kubelet[3294]: I0421 12:03:10.221990 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6ee2fc87-463d-4584-90c5-40b14ed84a06-var-lib-calico\") pod \"tigera-operator-5588576f44-mdbtf\" (UID: \"6ee2fc87-463d-4584-90c5-40b14ed84a06\") " pod="tigera-operator/tigera-operator-5588576f44-mdbtf" Apr 21 12:03:10.469126 containerd[1737]: time="2026-04-21T12:03:10.468305558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-mdbtf,Uid:6ee2fc87-463d-4584-90c5-40b14ed84a06,Namespace:tigera-operator,Attempt:0,}" Apr 21 12:03:10.475465 containerd[1737]: time="2026-04-21T12:03:10.475300380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:03:10.475465 containerd[1737]: time="2026-04-21T12:03:10.475427582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:03:10.475775 containerd[1737]: time="2026-04-21T12:03:10.475475883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:10.475775 containerd[1737]: time="2026-04-21T12:03:10.475576385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:10.501998 systemd[1]: Started cri-containerd-0343c834fb983168e606248e9d9a9e5d7a909ae49388b41db1b7715fc223722d.scope - libcontainer container 0343c834fb983168e606248e9d9a9e5d7a909ae49388b41db1b7715fc223722d. Apr 21 12:03:10.526211 containerd[1737]: time="2026-04-21T12:03:10.526152367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jv2dk,Uid:034b8c23-01d2-41b9-a56e-f6118f64bda5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0343c834fb983168e606248e9d9a9e5d7a909ae49388b41db1b7715fc223722d\"" Apr 21 12:03:10.555247 containerd[1737]: time="2026-04-21T12:03:10.555204673Z" level=info msg="CreateContainer within sandbox \"0343c834fb983168e606248e9d9a9e5d7a909ae49388b41db1b7715fc223722d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 12:03:11.026877 containerd[1737]: time="2026-04-21T12:03:11.026640295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:03:11.026877 containerd[1737]: time="2026-04-21T12:03:11.026695896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:03:11.026877 containerd[1737]: time="2026-04-21T12:03:11.026711196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:11.026877 containerd[1737]: time="2026-04-21T12:03:11.026805898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:11.047008 systemd[1]: run-containerd-runc-k8s.io-0343c834fb983168e606248e9d9a9e5d7a909ae49388b41db1b7715fc223722d-runc.gkt9Fm.mount: Deactivated successfully. Apr 21 12:03:11.060045 systemd[1]: Started cri-containerd-df3455252d3297de4b1e6e611d763fb6fb194a9934f6e3fd5c94473539b81289.scope - libcontainer container df3455252d3297de4b1e6e611d763fb6fb194a9934f6e3fd5c94473539b81289. Apr 21 12:03:11.064173 containerd[1737]: time="2026-04-21T12:03:11.064113949Z" level=info msg="CreateContainer within sandbox \"0343c834fb983168e606248e9d9a9e5d7a909ae49388b41db1b7715fc223722d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fd2a1e1d646d46ea61f95971bdb58e98d1b3e2b38b4b087472fcd105ed08cab5\"" Apr 21 12:03:11.068459 containerd[1737]: time="2026-04-21T12:03:11.067244203Z" level=info msg="StartContainer for \"fd2a1e1d646d46ea61f95971bdb58e98d1b3e2b38b4b087472fcd105ed08cab5\"" Apr 21 12:03:11.122643 systemd[1]: Started cri-containerd-fd2a1e1d646d46ea61f95971bdb58e98d1b3e2b38b4b087472fcd105ed08cab5.scope - libcontainer container fd2a1e1d646d46ea61f95971bdb58e98d1b3e2b38b4b087472fcd105ed08cab5. 
Apr 21 12:03:11.134298 containerd[1737]: time="2026-04-21T12:03:11.133358456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-mdbtf,Uid:6ee2fc87-463d-4584-90c5-40b14ed84a06,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"df3455252d3297de4b1e6e611d763fb6fb194a9934f6e3fd5c94473539b81289\"" Apr 21 12:03:11.138383 containerd[1737]: time="2026-04-21T12:03:11.138280842Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 12:03:11.170375 containerd[1737]: time="2026-04-21T12:03:11.170247900Z" level=info msg="StartContainer for \"fd2a1e1d646d46ea61f95971bdb58e98d1b3e2b38b4b087472fcd105ed08cab5\" returns successfully" Apr 21 12:03:12.054217 kubelet[3294]: I0421 12:03:12.054137 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jv2dk" podStartSLOduration=3.054112214 podStartE2EDuration="3.054112214s" podCreationTimestamp="2026-04-21 12:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 12:03:12.053962511 +0000 UTC m=+8.178895097" watchObservedRunningTime="2026-04-21 12:03:12.054112214 +0000 UTC m=+8.179044700" Apr 21 12:03:19.048102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1652318181.mount: Deactivated successfully. 
Apr 21 12:03:20.387484 containerd[1737]: time="2026-04-21T12:03:20.387425413Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:20.397862 containerd[1737]: time="2026-04-21T12:03:20.397748085Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 21 12:03:20.402605 containerd[1737]: time="2026-04-21T12:03:20.402545864Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:20.408330 containerd[1737]: time="2026-04-21T12:03:20.408289660Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:20.409080 containerd[1737]: time="2026-04-21T12:03:20.409041972Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 9.270655428s" Apr 21 12:03:20.409080 containerd[1737]: time="2026-04-21T12:03:20.409075973Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 21 12:03:20.418864 containerd[1737]: time="2026-04-21T12:03:20.417945420Z" level=info msg="CreateContainer within sandbox \"df3455252d3297de4b1e6e611d763fb6fb194a9934f6e3fd5c94473539b81289\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 12:03:20.463206 containerd[1737]: time="2026-04-21T12:03:20.463160071Z" level=info msg="CreateContainer within sandbox 
\"df3455252d3297de4b1e6e611d763fb6fb194a9934f6e3fd5c94473539b81289\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"63b4dc963c67bf9831efb694f05e2dafaa939a79d0485a04426d0c89f81ce3ee\"" Apr 21 12:03:20.464952 containerd[1737]: time="2026-04-21T12:03:20.463718380Z" level=info msg="StartContainer for \"63b4dc963c67bf9831efb694f05e2dafaa939a79d0485a04426d0c89f81ce3ee\"" Apr 21 12:03:20.503026 systemd[1]: Started cri-containerd-63b4dc963c67bf9831efb694f05e2dafaa939a79d0485a04426d0c89f81ce3ee.scope - libcontainer container 63b4dc963c67bf9831efb694f05e2dafaa939a79d0485a04426d0c89f81ce3ee. Apr 21 12:03:20.533855 containerd[1737]: time="2026-04-21T12:03:20.532986330Z" level=info msg="StartContainer for \"63b4dc963c67bf9831efb694f05e2dafaa939a79d0485a04426d0c89f81ce3ee\" returns successfully" Apr 21 12:03:26.986747 sudo[2256]: pam_unix(sudo:session): session closed for user root Apr 21 12:03:27.004149 sshd[2253]: pam_unix(sshd:session): session closed for user core Apr 21 12:03:27.009526 systemd[1]: sshd@6-10.0.0.17:22-20.229.252.112:54732.service: Deactivated successfully. Apr 21 12:03:27.012256 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 12:03:27.012487 systemd[1]: session-9.scope: Consumed 6.011s CPU time, 155.2M memory peak, 0B memory swap peak. Apr 21 12:03:27.014025 systemd-logind[1709]: Session 9 logged out. Waiting for processes to exit. Apr 21 12:03:27.015674 systemd-logind[1709]: Removed session 9. 
Apr 21 12:03:30.604913 kubelet[3294]: I0421 12:03:30.604766 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-mdbtf" podStartSLOduration=11.331631954 podStartE2EDuration="20.604743925s" podCreationTimestamp="2026-04-21 12:03:10 +0000 UTC" firstStartedPulling="2026-04-21 12:03:11.13702892 +0000 UTC m=+7.261961406" lastFinishedPulling="2026-04-21 12:03:20.410140891 +0000 UTC m=+16.535073377" observedRunningTime="2026-04-21 12:03:21.075269035 +0000 UTC m=+17.200201521" watchObservedRunningTime="2026-04-21 12:03:30.604743925 +0000 UTC m=+26.729676511" Apr 21 12:03:30.626039 systemd[1]: Created slice kubepods-besteffort-podfcc1b7a9_6682_4103_a8ae_287fc34c4266.slice - libcontainer container kubepods-besteffort-podfcc1b7a9_6682_4103_a8ae_287fc34c4266.slice. Apr 21 12:03:30.667042 kubelet[3294]: I0421 12:03:30.666997 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fcc1b7a9-6682-4103-a8ae-287fc34c4266-typha-certs\") pod \"calico-typha-d48689447-4f9sc\" (UID: \"fcc1b7a9-6682-4103-a8ae-287fc34c4266\") " pod="calico-system/calico-typha-d48689447-4f9sc" Apr 21 12:03:30.667299 kubelet[3294]: I0421 12:03:30.667056 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc1b7a9-6682-4103-a8ae-287fc34c4266-tigera-ca-bundle\") pod \"calico-typha-d48689447-4f9sc\" (UID: \"fcc1b7a9-6682-4103-a8ae-287fc34c4266\") " pod="calico-system/calico-typha-d48689447-4f9sc" Apr 21 12:03:30.667299 kubelet[3294]: I0421 12:03:30.667083 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv5g5\" (UniqueName: \"kubernetes.io/projected/fcc1b7a9-6682-4103-a8ae-287fc34c4266-kube-api-access-vv5g5\") pod \"calico-typha-d48689447-4f9sc\" (UID: 
\"fcc1b7a9-6682-4103-a8ae-287fc34c4266\") " pod="calico-system/calico-typha-d48689447-4f9sc" Apr 21 12:03:30.743383 systemd[1]: Created slice kubepods-besteffort-podf2ac8cb7_cde4_411b_ac65_2a9a0afb968e.slice - libcontainer container kubepods-besteffort-podf2ac8cb7_cde4_411b_ac65_2a9a0afb968e.slice. Apr 21 12:03:30.768140 kubelet[3294]: I0421 12:03:30.768092 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/f2ac8cb7-cde4-411b-ac65-2a9a0afb968e-sys-fs\") pod \"calico-node-s6b6h\" (UID: \"f2ac8cb7-cde4-411b-ac65-2a9a0afb968e\") " pod="calico-system/calico-node-s6b6h" Apr 21 12:03:30.768140 kubelet[3294]: I0421 12:03:30.768134 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f2ac8cb7-cde4-411b-ac65-2a9a0afb968e-var-lib-calico\") pod \"calico-node-s6b6h\" (UID: \"f2ac8cb7-cde4-411b-ac65-2a9a0afb968e\") " pod="calico-system/calico-node-s6b6h" Apr 21 12:03:30.769069 kubelet[3294]: I0421 12:03:30.768156 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/f2ac8cb7-cde4-411b-ac65-2a9a0afb968e-nodeproc\") pod \"calico-node-s6b6h\" (UID: \"f2ac8cb7-cde4-411b-ac65-2a9a0afb968e\") " pod="calico-system/calico-node-s6b6h" Apr 21 12:03:30.769069 kubelet[3294]: I0421 12:03:30.768175 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f2ac8cb7-cde4-411b-ac65-2a9a0afb968e-cni-log-dir\") pod \"calico-node-s6b6h\" (UID: \"f2ac8cb7-cde4-411b-ac65-2a9a0afb968e\") " pod="calico-system/calico-node-s6b6h" Apr 21 12:03:30.769069 kubelet[3294]: I0421 12:03:30.768196 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" 
(UniqueName: \"kubernetes.io/host-path/f2ac8cb7-cde4-411b-ac65-2a9a0afb968e-flexvol-driver-host\") pod \"calico-node-s6b6h\" (UID: \"f2ac8cb7-cde4-411b-ac65-2a9a0afb968e\") " pod="calico-system/calico-node-s6b6h" Apr 21 12:03:30.769069 kubelet[3294]: I0421 12:03:30.768213 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f2ac8cb7-cde4-411b-ac65-2a9a0afb968e-policysync\") pod \"calico-node-s6b6h\" (UID: \"f2ac8cb7-cde4-411b-ac65-2a9a0afb968e\") " pod="calico-system/calico-node-s6b6h" Apr 21 12:03:30.769069 kubelet[3294]: I0421 12:03:30.768249 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2ac8cb7-cde4-411b-ac65-2a9a0afb968e-tigera-ca-bundle\") pod \"calico-node-s6b6h\" (UID: \"f2ac8cb7-cde4-411b-ac65-2a9a0afb968e\") " pod="calico-system/calico-node-s6b6h" Apr 21 12:03:30.769432 kubelet[3294]: I0421 12:03:30.768290 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f2ac8cb7-cde4-411b-ac65-2a9a0afb968e-cni-bin-dir\") pod \"calico-node-s6b6h\" (UID: \"f2ac8cb7-cde4-411b-ac65-2a9a0afb968e\") " pod="calico-system/calico-node-s6b6h" Apr 21 12:03:30.769432 kubelet[3294]: I0421 12:03:30.768314 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f2ac8cb7-cde4-411b-ac65-2a9a0afb968e-var-run-calico\") pod \"calico-node-s6b6h\" (UID: \"f2ac8cb7-cde4-411b-ac65-2a9a0afb968e\") " pod="calico-system/calico-node-s6b6h" Apr 21 12:03:30.769432 kubelet[3294]: I0421 12:03:30.768336 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: 
\"kubernetes.io/host-path/f2ac8cb7-cde4-411b-ac65-2a9a0afb968e-bpffs\") pod \"calico-node-s6b6h\" (UID: \"f2ac8cb7-cde4-411b-ac65-2a9a0afb968e\") " pod="calico-system/calico-node-s6b6h" Apr 21 12:03:30.769432 kubelet[3294]: I0421 12:03:30.768366 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f2ac8cb7-cde4-411b-ac65-2a9a0afb968e-cni-net-dir\") pod \"calico-node-s6b6h\" (UID: \"f2ac8cb7-cde4-411b-ac65-2a9a0afb968e\") " pod="calico-system/calico-node-s6b6h" Apr 21 12:03:30.769432 kubelet[3294]: I0421 12:03:30.768390 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2ac8cb7-cde4-411b-ac65-2a9a0afb968e-lib-modules\") pod \"calico-node-s6b6h\" (UID: \"f2ac8cb7-cde4-411b-ac65-2a9a0afb968e\") " pod="calico-system/calico-node-s6b6h" Apr 21 12:03:30.769692 kubelet[3294]: I0421 12:03:30.768411 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwwl8\" (UniqueName: \"kubernetes.io/projected/f2ac8cb7-cde4-411b-ac65-2a9a0afb968e-kube-api-access-nwwl8\") pod \"calico-node-s6b6h\" (UID: \"f2ac8cb7-cde4-411b-ac65-2a9a0afb968e\") " pod="calico-system/calico-node-s6b6h" Apr 21 12:03:30.769692 kubelet[3294]: I0421 12:03:30.768445 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f2ac8cb7-cde4-411b-ac65-2a9a0afb968e-node-certs\") pod \"calico-node-s6b6h\" (UID: \"f2ac8cb7-cde4-411b-ac65-2a9a0afb968e\") " pod="calico-system/calico-node-s6b6h" Apr 21 12:03:30.769692 kubelet[3294]: I0421 12:03:30.768464 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2ac8cb7-cde4-411b-ac65-2a9a0afb968e-xtables-lock\") pod 
\"calico-node-s6b6h\" (UID: \"f2ac8cb7-cde4-411b-ac65-2a9a0afb968e\") " pod="calico-system/calico-node-s6b6h"
Apr 21 12:03:30.837383 kubelet[3294]: E0421 12:03:30.837336 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05"
Apr 21 12:03:30.869435 kubelet[3294]: I0421 12:03:30.869308 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20e52a3b-9a02-45ad-96fe-fb214b6cbb05-kubelet-dir\") pod \"csi-node-driver-mqkhk\" (UID: \"20e52a3b-9a02-45ad-96fe-fb214b6cbb05\") " pod="calico-system/csi-node-driver-mqkhk"
Apr 21 12:03:30.869582 kubelet[3294]: I0421 12:03:30.869438 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/20e52a3b-9a02-45ad-96fe-fb214b6cbb05-varrun\") pod \"csi-node-driver-mqkhk\" (UID: \"20e52a3b-9a02-45ad-96fe-fb214b6cbb05\") " pod="calico-system/csi-node-driver-mqkhk"
Apr 21 12:03:30.869582 kubelet[3294]: I0421 12:03:30.869479 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/20e52a3b-9a02-45ad-96fe-fb214b6cbb05-registration-dir\") pod \"csi-node-driver-mqkhk\" (UID: \"20e52a3b-9a02-45ad-96fe-fb214b6cbb05\") " pod="calico-system/csi-node-driver-mqkhk"
Apr 21 12:03:30.869582 kubelet[3294]: I0421 12:03:30.869540 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/20e52a3b-9a02-45ad-96fe-fb214b6cbb05-socket-dir\") pod \"csi-node-driver-mqkhk\" (UID: \"20e52a3b-9a02-45ad-96fe-fb214b6cbb05\") " pod="calico-system/csi-node-driver-mqkhk"
Apr 21 12:03:30.869582 kubelet[3294]: I0421 12:03:30.869563 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49x2w\" (UniqueName: \"kubernetes.io/projected/20e52a3b-9a02-45ad-96fe-fb214b6cbb05-kube-api-access-49x2w\") pod \"csi-node-driver-mqkhk\" (UID: \"20e52a3b-9a02-45ad-96fe-fb214b6cbb05\") " pod="calico-system/csi-node-driver-mqkhk"
Apr 21 12:03:30.878261 kubelet[3294]: E0421 12:03:30.878231 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 12:03:30.879852 kubelet[3294]: W0421 12:03:30.879431 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 12:03:30.879852 kubelet[3294]: E0421 12:03:30.879471 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 12:03:30.881706 kubelet[3294]: E0421 12:03:30.881463 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 12:03:30.881706 kubelet[3294]: W0421 12:03:30.881480 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 12:03:30.881706 kubelet[3294]: E0421 12:03:30.881497 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.885562 kubelet[3294]: E0421 12:03:30.882629 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.885562 kubelet[3294]: W0421 12:03:30.882653 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.885562 kubelet[3294]: E0421 12:03:30.882668 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.886058 kubelet[3294]: E0421 12:03:30.885954 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.886417 kubelet[3294]: W0421 12:03:30.886233 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.886417 kubelet[3294]: E0421 12:03:30.886256 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.888449 kubelet[3294]: E0421 12:03:30.888151 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.888449 kubelet[3294]: W0421 12:03:30.888168 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.888449 kubelet[3294]: E0421 12:03:30.888186 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.890859 kubelet[3294]: E0421 12:03:30.890606 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.892954 kubelet[3294]: W0421 12:03:30.892937 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.893042 kubelet[3294]: E0421 12:03:30.893030 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.894017 kubelet[3294]: E0421 12:03:30.894001 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.894970 kubelet[3294]: W0421 12:03:30.894950 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.895944 kubelet[3294]: E0421 12:03:30.895925 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.898100 kubelet[3294]: E0421 12:03:30.897927 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.898100 kubelet[3294]: W0421 12:03:30.897942 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.898100 kubelet[3294]: E0421 12:03:30.897956 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.899109 kubelet[3294]: E0421 12:03:30.899084 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.899109 kubelet[3294]: W0421 12:03:30.899103 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.899235 kubelet[3294]: E0421 12:03:30.899118 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.900567 kubelet[3294]: E0421 12:03:30.900541 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.900567 kubelet[3294]: W0421 12:03:30.900561 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.900690 kubelet[3294]: E0421 12:03:30.900577 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.903104 kubelet[3294]: E0421 12:03:30.903076 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.903104 kubelet[3294]: W0421 12:03:30.903095 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.903232 kubelet[3294]: E0421 12:03:30.903109 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.903383 kubelet[3294]: E0421 12:03:30.903369 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.903383 kubelet[3294]: W0421 12:03:30.903383 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.903487 kubelet[3294]: E0421 12:03:30.903396 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.903871 kubelet[3294]: E0421 12:03:30.903632 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.903871 kubelet[3294]: W0421 12:03:30.903646 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.903871 kubelet[3294]: E0421 12:03:30.903658 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.904204 kubelet[3294]: E0421 12:03:30.904117 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.904204 kubelet[3294]: W0421 12:03:30.904132 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.904204 kubelet[3294]: E0421 12:03:30.904146 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.906042 kubelet[3294]: E0421 12:03:30.905811 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.906042 kubelet[3294]: W0421 12:03:30.905998 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.906042 kubelet[3294]: E0421 12:03:30.906018 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.906970 kubelet[3294]: E0421 12:03:30.906672 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.906970 kubelet[3294]: W0421 12:03:30.906690 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.906970 kubelet[3294]: E0421 12:03:30.906710 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.907502 kubelet[3294]: E0421 12:03:30.907486 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.907662 kubelet[3294]: W0421 12:03:30.907594 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.907662 kubelet[3294]: E0421 12:03:30.907614 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.908394 kubelet[3294]: E0421 12:03:30.908303 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.908394 kubelet[3294]: W0421 12:03:30.908318 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.908394 kubelet[3294]: E0421 12:03:30.908332 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.909435 kubelet[3294]: E0421 12:03:30.909216 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.909435 kubelet[3294]: W0421 12:03:30.909231 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.909435 kubelet[3294]: E0421 12:03:30.909246 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.909851 kubelet[3294]: E0421 12:03:30.909781 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.909851 kubelet[3294]: W0421 12:03:30.909794 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.909851 kubelet[3294]: E0421 12:03:30.909810 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.910606 kubelet[3294]: E0421 12:03:30.910224 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.910606 kubelet[3294]: W0421 12:03:30.910238 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.910606 kubelet[3294]: E0421 12:03:30.910251 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.911065 kubelet[3294]: E0421 12:03:30.910874 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.911065 kubelet[3294]: W0421 12:03:30.910889 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.911065 kubelet[3294]: E0421 12:03:30.910903 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.911333 kubelet[3294]: E0421 12:03:30.911291 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.911333 kubelet[3294]: W0421 12:03:30.911306 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.911333 kubelet[3294]: E0421 12:03:30.911320 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.912144 kubelet[3294]: E0421 12:03:30.911935 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.912144 kubelet[3294]: W0421 12:03:30.911952 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.912144 kubelet[3294]: E0421 12:03:30.912025 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.912934 kubelet[3294]: E0421 12:03:30.912748 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.912934 kubelet[3294]: W0421 12:03:30.912764 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.912934 kubelet[3294]: E0421 12:03:30.912778 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.913791 kubelet[3294]: E0421 12:03:30.913610 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.913791 kubelet[3294]: W0421 12:03:30.913627 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.913791 kubelet[3294]: E0421 12:03:30.913641 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.914765 kubelet[3294]: E0421 12:03:30.914405 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.914765 kubelet[3294]: W0421 12:03:30.914426 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.914765 kubelet[3294]: E0421 12:03:30.914482 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.915174 kubelet[3294]: E0421 12:03:30.915108 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.915174 kubelet[3294]: W0421 12:03:30.915123 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.915174 kubelet[3294]: E0421 12:03:30.915138 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.918337 kubelet[3294]: E0421 12:03:30.917976 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.918337 kubelet[3294]: W0421 12:03:30.917993 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.918337 kubelet[3294]: E0421 12:03:30.918009 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.946983 containerd[1737]: time="2026-04-21T12:03:30.946853188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d48689447-4f9sc,Uid:fcc1b7a9-6682-4103-a8ae-287fc34c4266,Namespace:calico-system,Attempt:0,}" Apr 21 12:03:30.970619 kubelet[3294]: E0421 12:03:30.970584 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.970619 kubelet[3294]: W0421 12:03:30.970610 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.971015 kubelet[3294]: E0421 12:03:30.970652 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.971360 kubelet[3294]: E0421 12:03:30.971335 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.971444 kubelet[3294]: W0421 12:03:30.971369 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.971444 kubelet[3294]: E0421 12:03:30.971389 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.971750 kubelet[3294]: E0421 12:03:30.971732 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.971840 kubelet[3294]: W0421 12:03:30.971751 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.971840 kubelet[3294]: E0421 12:03:30.971766 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.972158 kubelet[3294]: E0421 12:03:30.972083 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.972158 kubelet[3294]: W0421 12:03:30.972099 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.972158 kubelet[3294]: E0421 12:03:30.972110 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.972443 kubelet[3294]: E0421 12:03:30.972425 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.972443 kubelet[3294]: W0421 12:03:30.972441 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.972545 kubelet[3294]: E0421 12:03:30.972454 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.972736 kubelet[3294]: E0421 12:03:30.972717 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.972736 kubelet[3294]: W0421 12:03:30.972733 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.972869 kubelet[3294]: E0421 12:03:30.972747 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.973084 kubelet[3294]: E0421 12:03:30.973065 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.973084 kubelet[3294]: W0421 12:03:30.973082 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.973309 kubelet[3294]: E0421 12:03:30.973096 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.973370 kubelet[3294]: E0421 12:03:30.973327 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.973370 kubelet[3294]: W0421 12:03:30.973339 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.973370 kubelet[3294]: E0421 12:03:30.973352 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.973629 kubelet[3294]: E0421 12:03:30.973606 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.973629 kubelet[3294]: W0421 12:03:30.973628 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.973746 kubelet[3294]: E0421 12:03:30.973642 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.973979 kubelet[3294]: E0421 12:03:30.973963 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.973979 kubelet[3294]: W0421 12:03:30.973978 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.974152 kubelet[3294]: E0421 12:03:30.973992 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.974235 kubelet[3294]: E0421 12:03:30.974213 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.974235 kubelet[3294]: W0421 12:03:30.974224 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.974342 kubelet[3294]: E0421 12:03:30.974238 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.974690 kubelet[3294]: E0421 12:03:30.974672 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.974690 kubelet[3294]: W0421 12:03:30.974687 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.975188 kubelet[3294]: E0421 12:03:30.974700 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.975188 kubelet[3294]: E0421 12:03:30.974962 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.975188 kubelet[3294]: W0421 12:03:30.974981 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.975188 kubelet[3294]: E0421 12:03:30.974992 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.975414 kubelet[3294]: E0421 12:03:30.975401 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.975505 kubelet[3294]: W0421 12:03:30.975459 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.975505 kubelet[3294]: E0421 12:03:30.975479 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.975748 kubelet[3294]: E0421 12:03:30.975718 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.975748 kubelet[3294]: W0421 12:03:30.975730 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.975748 kubelet[3294]: E0421 12:03:30.975745 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.976321 kubelet[3294]: E0421 12:03:30.976035 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.976321 kubelet[3294]: W0421 12:03:30.976046 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.976321 kubelet[3294]: E0421 12:03:30.976059 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:30.976592 kubelet[3294]: E0421 12:03:30.976553 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.976592 kubelet[3294]: W0421 12:03:30.976570 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.976592 kubelet[3294]: E0421 12:03:30.976584 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:30.996339 kubelet[3294]: E0421 12:03:30.996301 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:30.996339 kubelet[3294]: W0421 12:03:30.996327 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:30.996493 kubelet[3294]: E0421 12:03:30.996351 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:31.005339 containerd[1737]: time="2026-04-21T12:03:31.005227889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:03:31.005339 containerd[1737]: time="2026-04-21T12:03:31.005291190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:03:31.005599 containerd[1737]: time="2026-04-21T12:03:31.005319290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:31.005599 containerd[1737]: time="2026-04-21T12:03:31.005435292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:31.023001 systemd[1]: Started cri-containerd-ebf362fededf76b766e47345ce2dc7eb0ff7beee076b4b529d166dd30c458ab3.scope - libcontainer container ebf362fededf76b766e47345ce2dc7eb0ff7beee076b4b529d166dd30c458ab3. 
Apr 21 12:03:31.061410 containerd[1737]: time="2026-04-21T12:03:31.061192048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s6b6h,Uid:f2ac8cb7-cde4-411b-ac65-2a9a0afb968e,Namespace:calico-system,Attempt:0,}" Apr 21 12:03:31.071269 containerd[1737]: time="2026-04-21T12:03:31.071207820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d48689447-4f9sc,Uid:fcc1b7a9-6682-4103-a8ae-287fc34c4266,Namespace:calico-system,Attempt:0,} returns sandbox id \"ebf362fededf76b766e47345ce2dc7eb0ff7beee076b4b529d166dd30c458ab3\"" Apr 21 12:03:31.073292 containerd[1737]: time="2026-04-21T12:03:31.073256355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 12:03:31.119276 containerd[1737]: time="2026-04-21T12:03:31.119125841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:03:31.121428 containerd[1737]: time="2026-04-21T12:03:31.119310244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:03:31.121428 containerd[1737]: time="2026-04-21T12:03:31.119338044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:31.121428 containerd[1737]: time="2026-04-21T12:03:31.119432646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:03:31.152114 systemd[1]: Started cri-containerd-d56396b7d8444082a1eb069c2173edac3c32b0eec1692970f6d0084218834191.scope - libcontainer container d56396b7d8444082a1eb069c2173edac3c32b0eec1692970f6d0084218834191. 
Apr 21 12:03:31.176874 containerd[1737]: time="2026-04-21T12:03:31.176595726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s6b6h,Uid:f2ac8cb7-cde4-411b-ac65-2a9a0afb968e,Namespace:calico-system,Attempt:0,} returns sandbox id \"d56396b7d8444082a1eb069c2173edac3c32b0eec1692970f6d0084218834191\"" Apr 21 12:03:32.470516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157308071.mount: Deactivated successfully. Apr 21 12:03:32.990931 kubelet[3294]: E0421 12:03:32.990094 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:03:33.486533 containerd[1737]: time="2026-04-21T12:03:33.486471253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:33.490390 containerd[1737]: time="2026-04-21T12:03:33.490335819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 21 12:03:33.494038 containerd[1737]: time="2026-04-21T12:03:33.493983181Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:33.499493 containerd[1737]: time="2026-04-21T12:03:33.499442174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:33.500709 containerd[1737]: time="2026-04-21T12:03:33.500169286Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.426866131s" Apr 21 12:03:33.500709 containerd[1737]: time="2026-04-21T12:03:33.500211287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 21 12:03:33.502298 containerd[1737]: time="2026-04-21T12:03:33.502083219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 12:03:33.527169 containerd[1737]: time="2026-04-21T12:03:33.527123345Z" level=info msg="CreateContainer within sandbox \"ebf362fededf76b766e47345ce2dc7eb0ff7beee076b4b529d166dd30c458ab3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 12:03:33.576157 containerd[1737]: time="2026-04-21T12:03:33.576104977Z" level=info msg="CreateContainer within sandbox \"ebf362fededf76b766e47345ce2dc7eb0ff7beee076b4b529d166dd30c458ab3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"18b5276359c5bc772ed836f86562592ea45cf2fd54cd01de34f889e1578547f6\"" Apr 21 12:03:33.576875 containerd[1737]: time="2026-04-21T12:03:33.576843190Z" level=info msg="StartContainer for \"18b5276359c5bc772ed836f86562592ea45cf2fd54cd01de34f889e1578547f6\"" Apr 21 12:03:33.609000 systemd[1]: Started cri-containerd-18b5276359c5bc772ed836f86562592ea45cf2fd54cd01de34f889e1578547f6.scope - libcontainer container 18b5276359c5bc772ed836f86562592ea45cf2fd54cd01de34f889e1578547f6. 
Apr 21 12:03:33.660657 containerd[1737]: time="2026-04-21T12:03:33.659882802Z" level=info msg="StartContainer for \"18b5276359c5bc772ed836f86562592ea45cf2fd54cd01de34f889e1578547f6\" returns successfully" Apr 21 12:03:34.169850 kubelet[3294]: E0421 12:03:34.169807 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:34.169850 kubelet[3294]: W0421 12:03:34.169842 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:34.170632 kubelet[3294]: E0421 12:03:34.169871 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:34.170632 kubelet[3294]: E0421 12:03:34.170113 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:34.170632 kubelet[3294]: W0421 12:03:34.170125 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:34.170632 kubelet[3294]: E0421 12:03:34.170140 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:34.204089 kubelet[3294]: E0421 12:03:34.204072 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:34.204089 kubelet[3294]: W0421 12:03:34.204087 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:34.204311 kubelet[3294]: E0421 12:03:34.204100 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:34.990765 kubelet[3294]: E0421 12:03:34.990708 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:03:35.093872 kubelet[3294]: I0421 12:03:35.093818 3294 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 12:03:35.115240 containerd[1737]: time="2026-04-21T12:03:35.115188046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:35.118767 containerd[1737]: time="2026-04-21T12:03:35.118617904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 21 12:03:35.122760 containerd[1737]: time="2026-04-21T12:03:35.122661073Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:35.127365 containerd[1737]: 
time="2026-04-21T12:03:35.127312752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:35.128530 containerd[1737]: time="2026-04-21T12:03:35.128047564Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.625932345s" Apr 21 12:03:35.128530 containerd[1737]: time="2026-04-21T12:03:35.128090765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 21 12:03:35.138244 containerd[1737]: time="2026-04-21T12:03:35.138201437Z" level=info msg="CreateContainer within sandbox \"d56396b7d8444082a1eb069c2173edac3c32b0eec1692970f6d0084218834191\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 12:03:35.179539 kubelet[3294]: E0421 12:03:35.179502 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.179539 kubelet[3294]: W0421 12:03:35.179527 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.180447 kubelet[3294]: E0421 12:03:35.179552 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:35.180447 kubelet[3294]: E0421 12:03:35.180098 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.180447 kubelet[3294]: W0421 12:03:35.180114 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.180447 kubelet[3294]: E0421 12:03:35.180129 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:35.180447 kubelet[3294]: E0421 12:03:35.180373 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.180447 kubelet[3294]: W0421 12:03:35.180385 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.180447 kubelet[3294]: E0421 12:03:35.180398 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:35.181301 kubelet[3294]: E0421 12:03:35.180595 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.181301 kubelet[3294]: W0421 12:03:35.180624 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.181301 kubelet[3294]: E0421 12:03:35.180636 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:35.181301 kubelet[3294]: E0421 12:03:35.180875 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.181301 kubelet[3294]: W0421 12:03:35.180886 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.181301 kubelet[3294]: E0421 12:03:35.180898 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:35.181301 kubelet[3294]: E0421 12:03:35.181096 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.181301 kubelet[3294]: W0421 12:03:35.181108 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.181301 kubelet[3294]: E0421 12:03:35.181121 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:35.182191 kubelet[3294]: E0421 12:03:35.181324 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.182191 kubelet[3294]: W0421 12:03:35.181335 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.182191 kubelet[3294]: E0421 12:03:35.181347 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:35.182191 kubelet[3294]: E0421 12:03:35.181543 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.182191 kubelet[3294]: W0421 12:03:35.181553 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.182191 kubelet[3294]: E0421 12:03:35.181565 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:35.182191 kubelet[3294]: E0421 12:03:35.182049 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.182191 kubelet[3294]: W0421 12:03:35.182062 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.182191 kubelet[3294]: E0421 12:03:35.182075 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:35.182854 kubelet[3294]: E0421 12:03:35.182309 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.182854 kubelet[3294]: W0421 12:03:35.182320 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.182854 kubelet[3294]: E0421 12:03:35.182333 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:35.182854 kubelet[3294]: E0421 12:03:35.182528 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.182854 kubelet[3294]: W0421 12:03:35.182539 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.182854 kubelet[3294]: E0421 12:03:35.182551 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:35.182854 kubelet[3294]: E0421 12:03:35.182775 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.182854 kubelet[3294]: W0421 12:03:35.182786 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.182854 kubelet[3294]: E0421 12:03:35.182797 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:35.183416 kubelet[3294]: E0421 12:03:35.183016 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.183416 kubelet[3294]: W0421 12:03:35.183027 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.183416 kubelet[3294]: E0421 12:03:35.183039 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:35.183416 kubelet[3294]: E0421 12:03:35.183263 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.183416 kubelet[3294]: W0421 12:03:35.183275 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.183416 kubelet[3294]: E0421 12:03:35.183286 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:35.183896 kubelet[3294]: E0421 12:03:35.183476 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.183896 kubelet[3294]: W0421 12:03:35.183486 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.183896 kubelet[3294]: E0421 12:03:35.183497 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:35.193868 containerd[1737]: time="2026-04-21T12:03:35.193813882Z" level=info msg="CreateContainer within sandbox \"d56396b7d8444082a1eb069c2173edac3c32b0eec1692970f6d0084218834191\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"782744feaa20bd0f8063d6e379d301a1008d8766b0a24c4d142c750b819a236a\"" Apr 21 12:03:35.196536 containerd[1737]: time="2026-04-21T12:03:35.196407327Z" level=info msg="StartContainer for \"782744feaa20bd0f8063d6e379d301a1008d8766b0a24c4d142c750b819a236a\"" Apr 21 12:03:35.204675 kubelet[3294]: E0421 12:03:35.204647 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.204675 kubelet[3294]: W0421 12:03:35.204668 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.204882 kubelet[3294]: E0421 12:03:35.204692 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:35.205155 kubelet[3294]: E0421 12:03:35.205138 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.205340 kubelet[3294]: W0421 12:03:35.205260 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.205340 kubelet[3294]: E0421 12:03:35.205282 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:35.205641 kubelet[3294]: E0421 12:03:35.205611 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.205641 kubelet[3294]: W0421 12:03:35.205638 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.206097 kubelet[3294]: E0421 12:03:35.206027 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:35.206416 kubelet[3294]: E0421 12:03:35.206377 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.206416 kubelet[3294]: W0421 12:03:35.206404 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.206685 kubelet[3294]: E0421 12:03:35.206417 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:35.206750 kubelet[3294]: E0421 12:03:35.206689 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.206750 kubelet[3294]: W0421 12:03:35.206701 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.206750 kubelet[3294]: E0421 12:03:35.206714 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:35.207150 kubelet[3294]: E0421 12:03:35.207132 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.207150 kubelet[3294]: W0421 12:03:35.207147 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.207276 kubelet[3294]: E0421 12:03:35.207161 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:35.207849 kubelet[3294]: E0421 12:03:35.207811 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.207849 kubelet[3294]: W0421 12:03:35.207845 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.208185 kubelet[3294]: E0421 12:03:35.207860 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:35.208185 kubelet[3294]: E0421 12:03:35.208123 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.208185 kubelet[3294]: W0421 12:03:35.208135 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.208487 kubelet[3294]: E0421 12:03:35.208148 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:35.209101 kubelet[3294]: E0421 12:03:35.208976 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.209101 kubelet[3294]: W0421 12:03:35.208990 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.209101 kubelet[3294]: E0421 12:03:35.209004 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:35.209590 kubelet[3294]: E0421 12:03:35.209484 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.209590 kubelet[3294]: W0421 12:03:35.209512 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.209590 kubelet[3294]: E0421 12:03:35.209527 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:35.211271 kubelet[3294]: E0421 12:03:35.211119 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.211271 kubelet[3294]: W0421 12:03:35.211135 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.211271 kubelet[3294]: E0421 12:03:35.211149 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:35.213846 kubelet[3294]: E0421 12:03:35.211978 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.213846 kubelet[3294]: W0421 12:03:35.211994 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.213846 kubelet[3294]: E0421 12:03:35.212007 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:35.214268 kubelet[3294]: E0421 12:03:35.214120 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.214268 kubelet[3294]: W0421 12:03:35.214154 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.214268 kubelet[3294]: E0421 12:03:35.214168 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:35.216438 kubelet[3294]: E0421 12:03:35.214684 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.216438 kubelet[3294]: W0421 12:03:35.214704 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.216438 kubelet[3294]: E0421 12:03:35.214722 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:35.216812 kubelet[3294]: E0421 12:03:35.216798 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.218048 kubelet[3294]: W0421 12:03:35.217944 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.218048 kubelet[3294]: E0421 12:03:35.217967 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:35.219657 kubelet[3294]: E0421 12:03:35.219519 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.219657 kubelet[3294]: W0421 12:03:35.219535 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.219657 kubelet[3294]: E0421 12:03:35.219548 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 12:03:35.220379 kubelet[3294]: E0421 12:03:35.219982 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.220379 kubelet[3294]: W0421 12:03:35.219996 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.220379 kubelet[3294]: E0421 12:03:35.220010 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:35.220801 kubelet[3294]: E0421 12:03:35.220784 3294 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 12:03:35.220898 kubelet[3294]: W0421 12:03:35.220886 3294 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 12:03:35.220985 kubelet[3294]: E0421 12:03:35.220974 3294 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 12:03:35.242037 systemd[1]: Started cri-containerd-782744feaa20bd0f8063d6e379d301a1008d8766b0a24c4d142c750b819a236a.scope - libcontainer container 782744feaa20bd0f8063d6e379d301a1008d8766b0a24c4d142c750b819a236a. Apr 21 12:03:35.275954 containerd[1737]: time="2026-04-21T12:03:35.275177166Z" level=info msg="StartContainer for \"782744feaa20bd0f8063d6e379d301a1008d8766b0a24c4d142c750b819a236a\" returns successfully" Apr 21 12:03:35.284494 systemd[1]: cri-containerd-782744feaa20bd0f8063d6e379d301a1008d8766b0a24c4d142c750b819a236a.scope: Deactivated successfully. 
Apr 21 12:03:35.507550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-782744feaa20bd0f8063d6e379d301a1008d8766b0a24c4d142c750b819a236a-rootfs.mount: Deactivated successfully. Apr 21 12:03:36.117210 kubelet[3294]: I0421 12:03:36.115463 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d48689447-4f9sc" podStartSLOduration=3.6871186959999998 podStartE2EDuration="6.115444052s" podCreationTimestamp="2026-04-21 12:03:30 +0000 UTC" firstStartedPulling="2026-04-21 12:03:31.072926349 +0000 UTC m=+27.197858935" lastFinishedPulling="2026-04-21 12:03:33.501251705 +0000 UTC m=+29.626184291" observedRunningTime="2026-04-21 12:03:34.10737391 +0000 UTC m=+30.232306496" watchObservedRunningTime="2026-04-21 12:03:36.115444052 +0000 UTC m=+32.240376538" Apr 21 12:03:36.989988 kubelet[3294]: E0421 12:03:36.989914 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:03:37.225573 containerd[1737]: time="2026-04-21T12:03:37.225490926Z" level=info msg="shim disconnected" id=782744feaa20bd0f8063d6e379d301a1008d8766b0a24c4d142c750b819a236a namespace=k8s.io Apr 21 12:03:37.225573 containerd[1737]: time="2026-04-21T12:03:37.225565227Z" level=warning msg="cleaning up after shim disconnected" id=782744feaa20bd0f8063d6e379d301a1008d8766b0a24c4d142c750b819a236a namespace=k8s.io Apr 21 12:03:37.225573 containerd[1737]: time="2026-04-21T12:03:37.225576827Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 12:03:37.709327 kubelet[3294]: I0421 12:03:37.708943 3294 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 12:03:38.102924 containerd[1737]: time="2026-04-21T12:03:38.102651240Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 12:03:38.990868 kubelet[3294]: E0421 12:03:38.990527 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:03:40.989961 kubelet[3294]: E0421 12:03:40.989897 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:03:42.990521 kubelet[3294]: E0421 12:03:42.990444 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:03:44.991257 kubelet[3294]: E0421 12:03:44.990454 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:03:46.990700 kubelet[3294]: E0421 12:03:46.990613 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:03:47.330088 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount357035963.mount: Deactivated successfully. Apr 21 12:03:47.375395 containerd[1737]: time="2026-04-21T12:03:47.375332071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:47.378224 containerd[1737]: time="2026-04-21T12:03:47.378072517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 21 12:03:47.382472 containerd[1737]: time="2026-04-21T12:03:47.382401890Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:47.387903 containerd[1737]: time="2026-04-21T12:03:47.387276972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:47.388216 containerd[1737]: time="2026-04-21T12:03:47.388169987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 9.285474247s" Apr 21 12:03:47.388347 containerd[1737]: time="2026-04-21T12:03:47.388324189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 21 12:03:47.396119 containerd[1737]: time="2026-04-21T12:03:47.396082420Z" level=info msg="CreateContainer within sandbox \"d56396b7d8444082a1eb069c2173edac3c32b0eec1692970f6d0084218834191\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 
12:03:47.448737 containerd[1737]: time="2026-04-21T12:03:47.448686905Z" level=info msg="CreateContainer within sandbox \"d56396b7d8444082a1eb069c2173edac3c32b0eec1692970f6d0084218834191\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"813433bd94c7204e4fd5aa9fae68ed2a0bee8d2044abef855d598ddb3476d205\"" Apr 21 12:03:47.449282 containerd[1737]: time="2026-04-21T12:03:47.449196014Z" level=info msg="StartContainer for \"813433bd94c7204e4fd5aa9fae68ed2a0bee8d2044abef855d598ddb3476d205\"" Apr 21 12:03:47.486001 systemd[1]: Started cri-containerd-813433bd94c7204e4fd5aa9fae68ed2a0bee8d2044abef855d598ddb3476d205.scope - libcontainer container 813433bd94c7204e4fd5aa9fae68ed2a0bee8d2044abef855d598ddb3476d205. Apr 21 12:03:47.520454 containerd[1737]: time="2026-04-21T12:03:47.520406613Z" level=info msg="StartContainer for \"813433bd94c7204e4fd5aa9fae68ed2a0bee8d2044abef855d598ddb3476d205\" returns successfully" Apr 21 12:03:47.560858 systemd[1]: cri-containerd-813433bd94c7204e4fd5aa9fae68ed2a0bee8d2044abef855d598ddb3476d205.scope: Deactivated successfully. Apr 21 12:03:48.329577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-813433bd94c7204e4fd5aa9fae68ed2a0bee8d2044abef855d598ddb3476d205-rootfs.mount: Deactivated successfully. 
Apr 21 12:03:48.990410 kubelet[3294]: E0421 12:03:48.990367 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:03:50.827491 containerd[1737]: time="2026-04-21T12:03:50.827413070Z" level=info msg="shim disconnected" id=813433bd94c7204e4fd5aa9fae68ed2a0bee8d2044abef855d598ddb3476d205 namespace=k8s.io Apr 21 12:03:50.827491 containerd[1737]: time="2026-04-21T12:03:50.827485671Z" level=warning msg="cleaning up after shim disconnected" id=813433bd94c7204e4fd5aa9fae68ed2a0bee8d2044abef855d598ddb3476d205 namespace=k8s.io Apr 21 12:03:50.827491 containerd[1737]: time="2026-04-21T12:03:50.827499571Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 12:03:50.990160 kubelet[3294]: E0421 12:03:50.990093 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:03:51.136851 containerd[1737]: time="2026-04-21T12:03:51.135971653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 12:03:52.989996 kubelet[3294]: E0421 12:03:52.989880 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:03:54.990225 kubelet[3294]: E0421 12:03:54.990174 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:03:56.990509 kubelet[3294]: E0421 12:03:56.990455 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:03:58.657677 containerd[1737]: time="2026-04-21T12:03:58.657623474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:58.701876 containerd[1737]: time="2026-04-21T12:03:58.701624894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 21 12:03:58.705018 containerd[1737]: time="2026-04-21T12:03:58.704948748Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:58.755253 containerd[1737]: time="2026-04-21T12:03:58.755173470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:03:58.756306 containerd[1737]: time="2026-04-21T12:03:58.756160386Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 7.620139832s" Apr 21 
12:03:58.756306 containerd[1737]: time="2026-04-21T12:03:58.756203487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 21 12:03:58.804562 containerd[1737]: time="2026-04-21T12:03:58.804515977Z" level=info msg="CreateContainer within sandbox \"d56396b7d8444082a1eb069c2173edac3c32b0eec1692970f6d0084218834191\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 12:03:58.990792 kubelet[3294]: E0421 12:03:58.990737 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:03:59.114180 containerd[1737]: time="2026-04-21T12:03:59.114120343Z" level=info msg="CreateContainer within sandbox \"d56396b7d8444082a1eb069c2173edac3c32b0eec1692970f6d0084218834191\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1b8e2250a0e09c2eeb6346c2a9622808884112367ef18b3f088995d9b9763de2\"" Apr 21 12:03:59.114952 containerd[1737]: time="2026-04-21T12:03:59.114799454Z" level=info msg="StartContainer for \"1b8e2250a0e09c2eeb6346c2a9622808884112367ef18b3f088995d9b9763de2\"" Apr 21 12:03:59.161024 systemd[1]: Started cri-containerd-1b8e2250a0e09c2eeb6346c2a9622808884112367ef18b3f088995d9b9763de2.scope - libcontainer container 1b8e2250a0e09c2eeb6346c2a9622808884112367ef18b3f088995d9b9763de2. 
Apr 21 12:03:59.192375 containerd[1737]: time="2026-04-21T12:03:59.192068218Z" level=info msg="StartContainer for \"1b8e2250a0e09c2eeb6346c2a9622808884112367ef18b3f088995d9b9763de2\" returns successfully" Apr 21 12:04:00.990140 kubelet[3294]: E0421 12:04:00.990081 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:04:02.990335 kubelet[3294]: E0421 12:04:02.990265 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:04:04.990451 kubelet[3294]: E0421 12:04:04.990387 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:04:06.990000 kubelet[3294]: E0421 12:04:06.989939 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:04:08.990268 kubelet[3294]: E0421 12:04:08.990200 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
cni plugin not initialized" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:04:10.083673 containerd[1737]: time="2026-04-21T12:04:10.083604843Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 12:04:10.086061 systemd[1]: cri-containerd-1b8e2250a0e09c2eeb6346c2a9622808884112367ef18b3f088995d9b9763de2.scope: Deactivated successfully. Apr 21 12:04:10.110425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b8e2250a0e09c2eeb6346c2a9622808884112367ef18b3f088995d9b9763de2-rootfs.mount: Deactivated successfully. Apr 21 12:04:10.162848 kubelet[3294]: I0421 12:04:10.162806 3294 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 21 12:04:12.426991 systemd[1]: Created slice kubepods-burstable-pod4894f806_fc13_40ed_9637_207c1d2b6fe3.slice - libcontainer container kubepods-burstable-pod4894f806_fc13_40ed_9637_207c1d2b6fe3.slice. Apr 21 12:04:12.429608 containerd[1737]: time="2026-04-21T12:04:12.429367312Z" level=info msg="shim disconnected" id=1b8e2250a0e09c2eeb6346c2a9622808884112367ef18b3f088995d9b9763de2 namespace=k8s.io Apr 21 12:04:12.431901 containerd[1737]: time="2026-04-21T12:04:12.429573216Z" level=warning msg="cleaning up after shim disconnected" id=1b8e2250a0e09c2eeb6346c2a9622808884112367ef18b3f088995d9b9763de2 namespace=k8s.io Apr 21 12:04:12.431901 containerd[1737]: time="2026-04-21T12:04:12.430055924Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 12:04:12.447432 systemd[1]: Created slice kubepods-besteffort-pod20e52a3b_9a02_45ad_96fe_fb214b6cbb05.slice - libcontainer container kubepods-besteffort-pod20e52a3b_9a02_45ad_96fe_fb214b6cbb05.slice. 
Apr 21 12:04:12.451356 containerd[1737]: time="2026-04-21T12:04:12.451283982Z" level=warning msg="cleanup warnings time=\"2026-04-21T12:04:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 21 12:04:12.459198 systemd[1]: Created slice kubepods-burstable-pode240fd29_afe5_4e92_98bd_6ce65bc08a12.slice - libcontainer container kubepods-burstable-pode240fd29_afe5_4e92_98bd_6ce65bc08a12.slice. Apr 21 12:04:12.466849 containerd[1737]: time="2026-04-21T12:04:12.465998030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mqkhk,Uid:20e52a3b-9a02-45ad-96fe-fb214b6cbb05,Namespace:calico-system,Attempt:0,}" Apr 21 12:04:12.475124 systemd[1]: Created slice kubepods-besteffort-pod4c031d9c_a941_4ee9_ab73_567a7398ad1c.slice - libcontainer container kubepods-besteffort-pod4c031d9c_a941_4ee9_ab73_567a7398ad1c.slice. Apr 21 12:04:12.486336 systemd[1]: Created slice kubepods-besteffort-pod9752a3e6_7576_4285_9a83_6bc365a16d48.slice - libcontainer container kubepods-besteffort-pod9752a3e6_7576_4285_9a83_6bc365a16d48.slice. 
Apr 21 12:04:12.491856 kubelet[3294]: I0421 12:04:12.489734 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4894f806-fc13-40ed-9637-207c1d2b6fe3-config-volume\") pod \"coredns-66bc5c9577-4bsq7\" (UID: \"4894f806-fc13-40ed-9637-207c1d2b6fe3\") " pod="kube-system/coredns-66bc5c9577-4bsq7" Apr 21 12:04:12.491856 kubelet[3294]: I0421 12:04:12.489781 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w799d\" (UniqueName: \"kubernetes.io/projected/4894f806-fc13-40ed-9637-207c1d2b6fe3-kube-api-access-w799d\") pod \"coredns-66bc5c9577-4bsq7\" (UID: \"4894f806-fc13-40ed-9637-207c1d2b6fe3\") " pod="kube-system/coredns-66bc5c9577-4bsq7" Apr 21 12:04:12.496315 systemd[1]: Created slice kubepods-besteffort-pod8f4b8722_a01d_4ba7_afd8_a88d111a2e76.slice - libcontainer container kubepods-besteffort-pod8f4b8722_a01d_4ba7_afd8_a88d111a2e76.slice. Apr 21 12:04:12.503601 systemd[1]: Created slice kubepods-besteffort-pod61238341_a297_4aa5_a969_871e84b67daf.slice - libcontainer container kubepods-besteffort-pod61238341_a297_4aa5_a969_871e84b67daf.slice. Apr 21 12:04:12.532671 systemd[1]: Created slice kubepods-besteffort-pod22bf69eb_1b19_496c_9e8c_76911e03643c.slice - libcontainer container kubepods-besteffort-pod22bf69eb_1b19_496c_9e8c_76911e03643c.slice. 
Apr 21 12:04:12.593643 kubelet[3294]: I0421 12:04:12.593356 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njphw\" (UniqueName: \"kubernetes.io/projected/8f4b8722-a01d-4ba7-afd8-a88d111a2e76-kube-api-access-njphw\") pod \"goldmane-cccfbd5cf-bzdfb\" (UID: \"8f4b8722-a01d-4ba7-afd8-a88d111a2e76\") " pod="calico-system/goldmane-cccfbd5cf-bzdfb" Apr 21 12:04:12.594767 kubelet[3294]: I0421 12:04:12.593682 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61238341-a297-4aa5-a969-871e84b67daf-whisker-backend-key-pair\") pod \"whisker-68f656b4d6-v9c6d\" (UID: \"61238341-a297-4aa5-a969-871e84b67daf\") " pod="calico-system/whisker-68f656b4d6-v9c6d" Apr 21 12:04:12.594767 kubelet[3294]: I0421 12:04:12.593714 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sxmf\" (UniqueName: \"kubernetes.io/projected/4c031d9c-a941-4ee9-ab73-567a7398ad1c-kube-api-access-6sxmf\") pod \"calico-apiserver-7c79d9c885-h4f6q\" (UID: \"4c031d9c-a941-4ee9-ab73-567a7398ad1c\") " pod="calico-system/calico-apiserver-7c79d9c885-h4f6q" Apr 21 12:04:12.594767 kubelet[3294]: I0421 12:04:12.593791 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f4b8722-a01d-4ba7-afd8-a88d111a2e76-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-bzdfb\" (UID: \"8f4b8722-a01d-4ba7-afd8-a88d111a2e76\") " pod="calico-system/goldmane-cccfbd5cf-bzdfb" Apr 21 12:04:12.594767 kubelet[3294]: I0421 12:04:12.593816 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8f4b8722-a01d-4ba7-afd8-a88d111a2e76-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-bzdfb\" (UID: 
\"8f4b8722-a01d-4ba7-afd8-a88d111a2e76\") " pod="calico-system/goldmane-cccfbd5cf-bzdfb" Apr 21 12:04:12.594767 kubelet[3294]: I0421 12:04:12.593880 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e240fd29-afe5-4e92-98bd-6ce65bc08a12-config-volume\") pod \"coredns-66bc5c9577-g54k5\" (UID: \"e240fd29-afe5-4e92-98bd-6ce65bc08a12\") " pod="kube-system/coredns-66bc5c9577-g54k5" Apr 21 12:04:12.595026 kubelet[3294]: I0421 12:04:12.594875 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4c031d9c-a941-4ee9-ab73-567a7398ad1c-calico-apiserver-certs\") pod \"calico-apiserver-7c79d9c885-h4f6q\" (UID: \"4c031d9c-a941-4ee9-ab73-567a7398ad1c\") " pod="calico-system/calico-apiserver-7c79d9c885-h4f6q" Apr 21 12:04:12.595026 kubelet[3294]: I0421 12:04:12.594909 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/61238341-a297-4aa5-a969-871e84b67daf-nginx-config\") pod \"whisker-68f656b4d6-v9c6d\" (UID: \"61238341-a297-4aa5-a969-871e84b67daf\") " pod="calico-system/whisker-68f656b4d6-v9c6d" Apr 21 12:04:12.595026 kubelet[3294]: I0421 12:04:12.594932 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvsfd\" (UniqueName: \"kubernetes.io/projected/61238341-a297-4aa5-a969-871e84b67daf-kube-api-access-fvsfd\") pod \"whisker-68f656b4d6-v9c6d\" (UID: \"61238341-a297-4aa5-a969-871e84b67daf\") " pod="calico-system/whisker-68f656b4d6-v9c6d" Apr 21 12:04:12.595026 kubelet[3294]: I0421 12:04:12.594956 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/22bf69eb-1b19-496c-9e8c-76911e03643c-calico-apiserver-certs\") pod \"calico-apiserver-7c79d9c885-tjxlz\" (UID: \"22bf69eb-1b19-496c-9e8c-76911e03643c\") " pod="calico-system/calico-apiserver-7c79d9c885-tjxlz" Apr 21 12:04:12.595026 kubelet[3294]: I0421 12:04:12.595001 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f4b8722-a01d-4ba7-afd8-a88d111a2e76-config\") pod \"goldmane-cccfbd5cf-bzdfb\" (UID: \"8f4b8722-a01d-4ba7-afd8-a88d111a2e76\") " pod="calico-system/goldmane-cccfbd5cf-bzdfb" Apr 21 12:04:12.595237 kubelet[3294]: I0421 12:04:12.595025 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2c49\" (UniqueName: \"kubernetes.io/projected/e240fd29-afe5-4e92-98bd-6ce65bc08a12-kube-api-access-p2c49\") pod \"coredns-66bc5c9577-g54k5\" (UID: \"e240fd29-afe5-4e92-98bd-6ce65bc08a12\") " pod="kube-system/coredns-66bc5c9577-g54k5" Apr 21 12:04:12.595237 kubelet[3294]: I0421 12:04:12.595047 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9752a3e6-7576-4285-9a83-6bc365a16d48-tigera-ca-bundle\") pod \"calico-kube-controllers-76d9fbb898-hf2cc\" (UID: \"9752a3e6-7576-4285-9a83-6bc365a16d48\") " pod="calico-system/calico-kube-controllers-76d9fbb898-hf2cc" Apr 21 12:04:12.595237 kubelet[3294]: I0421 12:04:12.595073 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc9mm\" (UniqueName: \"kubernetes.io/projected/9752a3e6-7576-4285-9a83-6bc365a16d48-kube-api-access-pc9mm\") pod \"calico-kube-controllers-76d9fbb898-hf2cc\" (UID: \"9752a3e6-7576-4285-9a83-6bc365a16d48\") " pod="calico-system/calico-kube-controllers-76d9fbb898-hf2cc" Apr 21 12:04:12.595237 kubelet[3294]: I0421 12:04:12.595103 3294 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5dwq\" (UniqueName: \"kubernetes.io/projected/22bf69eb-1b19-496c-9e8c-76911e03643c-kube-api-access-t5dwq\") pod \"calico-apiserver-7c79d9c885-tjxlz\" (UID: \"22bf69eb-1b19-496c-9e8c-76911e03643c\") " pod="calico-system/calico-apiserver-7c79d9c885-tjxlz" Apr 21 12:04:12.595237 kubelet[3294]: I0421 12:04:12.595128 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61238341-a297-4aa5-a969-871e84b67daf-whisker-ca-bundle\") pod \"whisker-68f656b4d6-v9c6d\" (UID: \"61238341-a297-4aa5-a969-871e84b67daf\") " pod="calico-system/whisker-68f656b4d6-v9c6d" Apr 21 12:04:12.618585 containerd[1737]: time="2026-04-21T12:04:12.618529801Z" level=error msg="Failed to destroy network for sandbox \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:12.621472 containerd[1737]: time="2026-04-21T12:04:12.621295248Z" level=error msg="encountered an error cleaning up failed sandbox \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:12.621472 containerd[1737]: time="2026-04-21T12:04:12.621369749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mqkhk,Uid:20e52a3b-9a02-45ad-96fe-fb214b6cbb05,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:12.621809 kubelet[3294]: E0421 12:04:12.621748 3294 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:12.622082 kubelet[3294]: E0421 12:04:12.621947 3294 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mqkhk" Apr 21 12:04:12.621998 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e-shm.mount: Deactivated successfully. 
Apr 21 12:04:12.623160 kubelet[3294]: E0421 12:04:12.621976 3294 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mqkhk" Apr 21 12:04:12.623377 kubelet[3294]: E0421 12:04:12.623113 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mqkhk_calico-system(20e52a3b-9a02-45ad-96fe-fb214b6cbb05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mqkhk_calico-system(20e52a3b-9a02-45ad-96fe-fb214b6cbb05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:04:12.744636 containerd[1737]: time="2026-04-21T12:04:12.744266021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4bsq7,Uid:4894f806-fc13-40ed-9637-207c1d2b6fe3,Namespace:kube-system,Attempt:0,}" Apr 21 12:04:12.770615 containerd[1737]: time="2026-04-21T12:04:12.770570064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g54k5,Uid:e240fd29-afe5-4e92-98bd-6ce65bc08a12,Namespace:kube-system,Attempt:0,}" Apr 21 12:04:12.794080 containerd[1737]: time="2026-04-21T12:04:12.794034860Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7c79d9c885-h4f6q,Uid:4c031d9c-a941-4ee9-ab73-567a7398ad1c,Namespace:calico-system,Attempt:0,}" Apr 21 12:04:12.801846 containerd[1737]: time="2026-04-21T12:04:12.800908676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76d9fbb898-hf2cc,Uid:9752a3e6-7576-4285-9a83-6bc365a16d48,Namespace:calico-system,Attempt:0,}" Apr 21 12:04:12.806411 containerd[1737]: time="2026-04-21T12:04:12.806376868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-bzdfb,Uid:8f4b8722-a01d-4ba7-afd8-a88d111a2e76,Namespace:calico-system,Attempt:0,}" Apr 21 12:04:12.831433 containerd[1737]: time="2026-04-21T12:04:12.831298588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68f656b4d6-v9c6d,Uid:61238341-a297-4aa5-a969-871e84b67daf,Namespace:calico-system,Attempt:0,}" Apr 21 12:04:12.843525 containerd[1737]: time="2026-04-21T12:04:12.843252590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c79d9c885-tjxlz,Uid:22bf69eb-1b19-496c-9e8c-76911e03643c,Namespace:calico-system,Attempt:0,}" Apr 21 12:04:12.849425 containerd[1737]: time="2026-04-21T12:04:12.849379493Z" level=error msg="Failed to destroy network for sandbox \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:12.849711 containerd[1737]: time="2026-04-21T12:04:12.849679498Z" level=error msg="encountered an error cleaning up failed sandbox \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:12.849792 
containerd[1737]: time="2026-04-21T12:04:12.849744599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4bsq7,Uid:4894f806-fc13-40ed-9637-207c1d2b6fe3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:12.850002 kubelet[3294]: E0421 12:04:12.849972 3294 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:12.850094 kubelet[3294]: E0421 12:04:12.850025 3294 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4bsq7" Apr 21 12:04:12.850094 kubelet[3294]: E0421 12:04:12.850053 3294 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4bsq7" Apr 21 12:04:12.850181 kubelet[3294]: E0421 
12:04:12.850121 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4bsq7_kube-system(4894f806-fc13-40ed-9637-207c1d2b6fe3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4bsq7_kube-system(4894f806-fc13-40ed-9637-207c1d2b6fe3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4bsq7" podUID="4894f806-fc13-40ed-9637-207c1d2b6fe3" Apr 21 12:04:12.943744 containerd[1737]: time="2026-04-21T12:04:12.943688283Z" level=error msg="Failed to destroy network for sandbox \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:12.944082 containerd[1737]: time="2026-04-21T12:04:12.944044089Z" level=error msg="encountered an error cleaning up failed sandbox \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:12.944228 containerd[1737]: time="2026-04-21T12:04:12.944113890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g54k5,Uid:e240fd29-afe5-4e92-98bd-6ce65bc08a12,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:12.944375 kubelet[3294]: E0421 12:04:12.944333 3294 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:12.944477 kubelet[3294]: E0421 12:04:12.944404 3294 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-g54k5" Apr 21 12:04:12.944477 kubelet[3294]: E0421 12:04:12.944434 3294 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-g54k5" Apr 21 12:04:12.944584 kubelet[3294]: E0421 12:04:12.944501 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-g54k5_kube-system(e240fd29-afe5-4e92-98bd-6ce65bc08a12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-g54k5_kube-system(e240fd29-afe5-4e92-98bd-6ce65bc08a12)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-g54k5" podUID="e240fd29-afe5-4e92-98bd-6ce65bc08a12" Apr 21 12:04:12.997879 containerd[1737]: time="2026-04-21T12:04:12.997611092Z" level=error msg="Failed to destroy network for sandbox \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:12.998382 containerd[1737]: time="2026-04-21T12:04:12.998350604Z" level=error msg="encountered an error cleaning up failed sandbox \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:12.998586 containerd[1737]: time="2026-04-21T12:04:12.998529207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c79d9c885-h4f6q,Uid:4c031d9c-a941-4ee9-ab73-567a7398ad1c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:12.999489 kubelet[3294]: E0421 12:04:12.999368 3294 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:12.999489 kubelet[3294]: E0421 12:04:12.999438 3294 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c79d9c885-h4f6q" Apr 21 12:04:12.999489 kubelet[3294]: E0421 12:04:12.999466 3294 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c79d9c885-h4f6q" Apr 21 12:04:13.002886 kubelet[3294]: E0421 12:04:12.999549 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c79d9c885-h4f6q_calico-system(4c031d9c-a941-4ee9-ab73-567a7398ad1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c79d9c885-h4f6q_calico-system(4c031d9c-a941-4ee9-ab73-567a7398ad1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-apiserver-7c79d9c885-h4f6q" podUID="4c031d9c-a941-4ee9-ab73-567a7398ad1c" Apr 21 12:04:13.094018 containerd[1737]: time="2026-04-21T12:04:13.093963416Z" level=error msg="Failed to destroy network for sandbox \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.094599 containerd[1737]: time="2026-04-21T12:04:13.094557726Z" level=error msg="encountered an error cleaning up failed sandbox \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.094802 containerd[1737]: time="2026-04-21T12:04:13.094775830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-bzdfb,Uid:8f4b8722-a01d-4ba7-afd8-a88d111a2e76,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.095237 kubelet[3294]: E0421 12:04:13.095104 3294 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.095237 kubelet[3294]: E0421 12:04:13.095166 3294 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-bzdfb" Apr 21 12:04:13.095237 kubelet[3294]: E0421 12:04:13.095196 3294 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-bzdfb" Apr 21 12:04:13.095634 kubelet[3294]: E0421 12:04:13.095267 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-bzdfb_calico-system(8f4b8722-a01d-4ba7-afd8-a88d111a2e76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-bzdfb_calico-system(8f4b8722-a01d-4ba7-afd8-a88d111a2e76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-bzdfb" podUID="8f4b8722-a01d-4ba7-afd8-a88d111a2e76" Apr 21 12:04:13.127075 containerd[1737]: time="2026-04-21T12:04:13.127030174Z" level=error msg="Failed to destroy network for sandbox \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.128027 containerd[1737]: time="2026-04-21T12:04:13.127523482Z" level=error msg="encountered an error cleaning up failed sandbox \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.128027 containerd[1737]: time="2026-04-21T12:04:13.127612283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76d9fbb898-hf2cc,Uid:9752a3e6-7576-4285-9a83-6bc365a16d48,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.128248 kubelet[3294]: E0421 12:04:13.127923 3294 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.128663 kubelet[3294]: E0421 12:04:13.128434 3294 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76d9fbb898-hf2cc" Apr 21 12:04:13.128663 kubelet[3294]: E0421 12:04:13.128510 3294 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76d9fbb898-hf2cc" Apr 21 12:04:13.128663 kubelet[3294]: E0421 12:04:13.128597 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76d9fbb898-hf2cc_calico-system(9752a3e6-7576-4285-9a83-6bc365a16d48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76d9fbb898-hf2cc_calico-system(9752a3e6-7576-4285-9a83-6bc365a16d48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76d9fbb898-hf2cc" podUID="9752a3e6-7576-4285-9a83-6bc365a16d48" Apr 21 12:04:13.157717 containerd[1737]: time="2026-04-21T12:04:13.157664890Z" level=error msg="Failed to destroy network for sandbox \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.158055 containerd[1737]: time="2026-04-21T12:04:13.158025196Z" level=error msg="encountered an error cleaning up failed sandbox 
\"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.158146 containerd[1737]: time="2026-04-21T12:04:13.158082497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c79d9c885-tjxlz,Uid:22bf69eb-1b19-496c-9e8c-76911e03643c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.158758 kubelet[3294]: E0421 12:04:13.158355 3294 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.158758 kubelet[3294]: E0421 12:04:13.158419 3294 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c79d9c885-tjxlz" Apr 21 12:04:13.158758 kubelet[3294]: E0421 12:04:13.158443 3294 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c79d9c885-tjxlz" Apr 21 12:04:13.158966 kubelet[3294]: E0421 12:04:13.158509 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c79d9c885-tjxlz_calico-system(22bf69eb-1b19-496c-9e8c-76911e03643c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c79d9c885-tjxlz_calico-system(22bf69eb-1b19-496c-9e8c-76911e03643c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7c79d9c885-tjxlz" podUID="22bf69eb-1b19-496c-9e8c-76911e03643c" Apr 21 12:04:13.161364 containerd[1737]: time="2026-04-21T12:04:13.161314451Z" level=error msg="Failed to destroy network for sandbox \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.161700 containerd[1737]: time="2026-04-21T12:04:13.161662457Z" level=error msg="encountered an error cleaning up failed sandbox \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.161802 
containerd[1737]: time="2026-04-21T12:04:13.161712658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68f656b4d6-v9c6d,Uid:61238341-a297-4aa5-a969-871e84b67daf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.161984 kubelet[3294]: E0421 12:04:13.161921 3294 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.161984 kubelet[3294]: E0421 12:04:13.161968 3294 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68f656b4d6-v9c6d" Apr 21 12:04:13.162099 kubelet[3294]: E0421 12:04:13.161991 3294 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68f656b4d6-v9c6d" Apr 21 12:04:13.162099 kubelet[3294]: E0421 
12:04:13.162045 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-68f656b4d6-v9c6d_calico-system(61238341-a297-4aa5-a969-871e84b67daf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-68f656b4d6-v9c6d_calico-system(61238341-a297-4aa5-a969-871e84b67daf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68f656b4d6-v9c6d" podUID="61238341-a297-4aa5-a969-871e84b67daf" Apr 21 12:04:13.185740 kubelet[3294]: I0421 12:04:13.185712 3294 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Apr 21 12:04:13.186846 containerd[1737]: time="2026-04-21T12:04:13.186576477Z" level=info msg="StopPodSandbox for \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\"" Apr 21 12:04:13.187139 containerd[1737]: time="2026-04-21T12:04:13.186821181Z" level=info msg="Ensure that sandbox 5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc in task-service has been cleanup successfully" Apr 21 12:04:13.189503 kubelet[3294]: I0421 12:04:13.188934 3294 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Apr 21 12:04:13.189609 containerd[1737]: time="2026-04-21T12:04:13.189582528Z" level=info msg="StopPodSandbox for \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\"" Apr 21 12:04:13.189985 containerd[1737]: time="2026-04-21T12:04:13.189910234Z" level=info msg="Ensure that sandbox 0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e in task-service has been cleanup 
successfully" Apr 21 12:04:13.207179 kubelet[3294]: I0421 12:04:13.207155 3294 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Apr 21 12:04:13.213881 containerd[1737]: time="2026-04-21T12:04:13.210734885Z" level=info msg="StopPodSandbox for \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\"" Apr 21 12:04:13.218548 containerd[1737]: time="2026-04-21T12:04:13.217931406Z" level=info msg="Ensure that sandbox 1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2 in task-service has been cleanup successfully" Apr 21 12:04:13.223235 kubelet[3294]: I0421 12:04:13.223209 3294 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Apr 21 12:04:13.227691 containerd[1737]: time="2026-04-21T12:04:13.227572468Z" level=info msg="StopPodSandbox for \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\"" Apr 21 12:04:13.228300 containerd[1737]: time="2026-04-21T12:04:13.227936775Z" level=info msg="Ensure that sandbox 8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25 in task-service has been cleanup successfully" Apr 21 12:04:13.230585 containerd[1737]: time="2026-04-21T12:04:13.230441417Z" level=info msg="CreateContainer within sandbox \"d56396b7d8444082a1eb069c2173edac3c32b0eec1692970f6d0084218834191\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 21 12:04:13.230933 kubelet[3294]: I0421 12:04:13.230876 3294 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Apr 21 12:04:13.232536 containerd[1737]: time="2026-04-21T12:04:13.232415650Z" level=info msg="StopPodSandbox for \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\"" Apr 21 12:04:13.232721 containerd[1737]: 
time="2026-04-21T12:04:13.232587053Z" level=info msg="Ensure that sandbox 6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a in task-service has been cleanup successfully" Apr 21 12:04:13.245285 kubelet[3294]: I0421 12:04:13.245180 3294 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Apr 21 12:04:13.247804 containerd[1737]: time="2026-04-21T12:04:13.247422303Z" level=info msg="StopPodSandbox for \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\"" Apr 21 12:04:13.247804 containerd[1737]: time="2026-04-21T12:04:13.247605406Z" level=info msg="Ensure that sandbox 6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6 in task-service has been cleanup successfully" Apr 21 12:04:13.254516 kubelet[3294]: I0421 12:04:13.254419 3294 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" Apr 21 12:04:13.259190 containerd[1737]: time="2026-04-21T12:04:13.259161101Z" level=info msg="StopPodSandbox for \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\"" Apr 21 12:04:13.259691 containerd[1737]: time="2026-04-21T12:04:13.259452906Z" level=info msg="Ensure that sandbox c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a in task-service has been cleanup successfully" Apr 21 12:04:13.268260 kubelet[3294]: I0421 12:04:13.268232 3294 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Apr 21 12:04:13.274647 containerd[1737]: time="2026-04-21T12:04:13.274141654Z" level=info msg="StopPodSandbox for \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\"" Apr 21 12:04:13.274748 containerd[1737]: time="2026-04-21T12:04:13.274654662Z" level=info msg="Ensure that sandbox 
86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3 in task-service has been cleanup successfully" Apr 21 12:04:13.309689 containerd[1737]: time="2026-04-21T12:04:13.309617552Z" level=error msg="StopPodSandbox for \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\" failed" error="failed to destroy network for sandbox \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.312538 kubelet[3294]: E0421 12:04:13.311978 3294 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Apr 21 12:04:13.312538 kubelet[3294]: E0421 12:04:13.312033 3294 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e"} Apr 21 12:04:13.312538 kubelet[3294]: E0421 12:04:13.312093 3294 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"20e52a3b-9a02-45ad-96fe-fb214b6cbb05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 12:04:13.312538 kubelet[3294]: E0421 12:04:13.312129 3294 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"20e52a3b-9a02-45ad-96fe-fb214b6cbb05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mqkhk" podUID="20e52a3b-9a02-45ad-96fe-fb214b6cbb05" Apr 21 12:04:13.323031 containerd[1737]: time="2026-04-21T12:04:13.322978477Z" level=error msg="StopPodSandbox for \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\" failed" error="failed to destroy network for sandbox \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.323585 kubelet[3294]: E0421 12:04:13.323251 3294 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Apr 21 12:04:13.323585 kubelet[3294]: E0421 12:04:13.323299 3294 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc"} Apr 21 12:04:13.323585 kubelet[3294]: E0421 12:04:13.323337 3294 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"9752a3e6-7576-4285-9a83-6bc365a16d48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 12:04:13.323585 kubelet[3294]: E0421 12:04:13.323369 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9752a3e6-7576-4285-9a83-6bc365a16d48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76d9fbb898-hf2cc" podUID="9752a3e6-7576-4285-9a83-6bc365a16d48" Apr 21 12:04:13.326368 containerd[1737]: time="2026-04-21T12:04:13.326325233Z" level=info msg="CreateContainer within sandbox \"d56396b7d8444082a1eb069c2173edac3c32b0eec1692970f6d0084218834191\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"651f3a827af559682689f4fb6930b320384e62d1ab8fed07e95997473345c044\"" Apr 21 12:04:13.327129 containerd[1737]: time="2026-04-21T12:04:13.327102446Z" level=info msg="StartContainer for \"651f3a827af559682689f4fb6930b320384e62d1ab8fed07e95997473345c044\"" Apr 21 12:04:13.353084 containerd[1737]: time="2026-04-21T12:04:13.353023983Z" level=error msg="StopPodSandbox for \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\" failed" error="failed to destroy network for sandbox \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.353910 kubelet[3294]: E0421 12:04:13.353564 3294 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" Apr 21 12:04:13.353910 kubelet[3294]: E0421 12:04:13.353620 3294 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a"} Apr 21 12:04:13.353910 kubelet[3294]: E0421 12:04:13.353663 3294 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e240fd29-afe5-4e92-98bd-6ce65bc08a12\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 12:04:13.353910 kubelet[3294]: E0421 12:04:13.353701 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e240fd29-afe5-4e92-98bd-6ce65bc08a12\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-g54k5" 
podUID="e240fd29-afe5-4e92-98bd-6ce65bc08a12" Apr 21 12:04:13.371028 containerd[1737]: time="2026-04-21T12:04:13.370974186Z" level=error msg="StopPodSandbox for \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\" failed" error="failed to destroy network for sandbox \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.371527 kubelet[3294]: E0421 12:04:13.371244 3294 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Apr 21 12:04:13.371527 kubelet[3294]: E0421 12:04:13.371314 3294 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2"} Apr 21 12:04:13.371527 kubelet[3294]: E0421 12:04:13.371353 3294 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22bf69eb-1b19-496c-9e8c-76911e03643c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 12:04:13.371785 kubelet[3294]: E0421 12:04:13.371747 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" 
for \"22bf69eb-1b19-496c-9e8c-76911e03643c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7c79d9c885-tjxlz" podUID="22bf69eb-1b19-496c-9e8c-76911e03643c" Apr 21 12:04:13.394848 containerd[1737]: time="2026-04-21T12:04:13.394601184Z" level=error msg="StopPodSandbox for \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\" failed" error="failed to destroy network for sandbox \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.395413 kubelet[3294]: E0421 12:04:13.395319 3294 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Apr 21 12:04:13.395413 kubelet[3294]: E0421 12:04:13.395384 3294 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25"} Apr 21 12:04:13.395554 kubelet[3294]: E0421 12:04:13.395419 3294 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"61238341-a297-4aa5-a969-871e84b67daf\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 12:04:13.395554 kubelet[3294]: E0421 12:04:13.395458 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"61238341-a297-4aa5-a969-871e84b67daf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68f656b4d6-v9c6d" podUID="61238341-a297-4aa5-a969-871e84b67daf" Apr 21 12:04:13.407923 containerd[1737]: time="2026-04-21T12:04:13.407645804Z" level=error msg="StopPodSandbox for \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\" failed" error="failed to destroy network for sandbox \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.411856 kubelet[3294]: E0421 12:04:13.411667 3294 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Apr 21 12:04:13.411856 
kubelet[3294]: E0421 12:04:13.411721 3294 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6"} Apr 21 12:04:13.411856 kubelet[3294]: E0421 12:04:13.411758 3294 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f4b8722-a01d-4ba7-afd8-a88d111a2e76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 12:04:13.411856 kubelet[3294]: E0421 12:04:13.411790 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f4b8722-a01d-4ba7-afd8-a88d111a2e76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-bzdfb" podUID="8f4b8722-a01d-4ba7-afd8-a88d111a2e76" Apr 21 12:04:13.412586 containerd[1737]: time="2026-04-21T12:04:13.412375784Z" level=error msg="StopPodSandbox for \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\" failed" error="failed to destroy network for sandbox \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.413061 kubelet[3294]: E0421 12:04:13.412909 3294 log.go:32] "StopPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Apr 21 12:04:13.413061 kubelet[3294]: E0421 12:04:13.412966 3294 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a"} Apr 21 12:04:13.413061 kubelet[3294]: E0421 12:04:13.413001 3294 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c031d9c-a941-4ee9-ab73-567a7398ad1c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 12:04:13.413061 kubelet[3294]: E0421 12:04:13.413030 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c031d9c-a941-4ee9-ab73-567a7398ad1c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7c79d9c885-h4f6q" podUID="4c031d9c-a941-4ee9-ab73-567a7398ad1c" Apr 21 12:04:13.417985 containerd[1737]: time="2026-04-21T12:04:13.417941478Z" level=error msg="StopPodSandbox for 
\"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\" failed" error="failed to destroy network for sandbox \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 12:04:13.418152 kubelet[3294]: E0421 12:04:13.418111 3294 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Apr 21 12:04:13.418152 kubelet[3294]: E0421 12:04:13.418148 3294 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3"} Apr 21 12:04:13.418263 kubelet[3294]: E0421 12:04:13.418180 3294 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4894f806-fc13-40ed-9637-207c1d2b6fe3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 12:04:13.418263 kubelet[3294]: E0421 12:04:13.418208 3294 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4894f806-fc13-40ed-9637-207c1d2b6fe3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4bsq7" podUID="4894f806-fc13-40ed-9637-207c1d2b6fe3" Apr 21 12:04:13.431018 systemd[1]: Started cri-containerd-651f3a827af559682689f4fb6930b320384e62d1ab8fed07e95997473345c044.scope - libcontainer container 651f3a827af559682689f4fb6930b320384e62d1ab8fed07e95997473345c044. Apr 21 12:04:13.466837 containerd[1737]: time="2026-04-21T12:04:13.466794201Z" level=info msg="StartContainer for \"651f3a827af559682689f4fb6930b320384e62d1ab8fed07e95997473345c044\" returns successfully" Apr 21 12:04:14.275446 containerd[1737]: time="2026-04-21T12:04:14.275193329Z" level=info msg="StopPodSandbox for \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\"" Apr 21 12:04:14.322680 systemd[1]: run-containerd-runc-k8s.io-651f3a827af559682689f4fb6930b320384e62d1ab8fed07e95997473345c044-runc.Cfnpat.mount: Deactivated successfully. 
Apr 21 12:04:14.335917 kubelet[3294]: I0421 12:04:14.335857 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-s6b6h" podStartSLOduration=16.758142024 podStartE2EDuration="44.335822351s" podCreationTimestamp="2026-04-21 12:03:30 +0000 UTC" firstStartedPulling="2026-04-21 12:03:31.179428074 +0000 UTC m=+27.304360560" lastFinishedPulling="2026-04-21 12:03:58.757108401 +0000 UTC m=+54.882040887" observedRunningTime="2026-04-21 12:04:14.33512934 +0000 UTC m=+70.460061926" watchObservedRunningTime="2026-04-21 12:04:14.335822351 +0000 UTC m=+70.460754837" Apr 21 12:04:14.431039 containerd[1737]: 2026-04-21 12:04:14.380 [INFO][4581] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Apr 21 12:04:14.431039 containerd[1737]: 2026-04-21 12:04:14.381 [INFO][4581] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" iface="eth0" netns="/var/run/netns/cni-5ebe69ac-fa63-0aef-02cd-d32ce9db8502" Apr 21 12:04:14.431039 containerd[1737]: 2026-04-21 12:04:14.381 [INFO][4581] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" iface="eth0" netns="/var/run/netns/cni-5ebe69ac-fa63-0aef-02cd-d32ce9db8502" Apr 21 12:04:14.431039 containerd[1737]: 2026-04-21 12:04:14.381 [INFO][4581] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" iface="eth0" netns="/var/run/netns/cni-5ebe69ac-fa63-0aef-02cd-d32ce9db8502" Apr 21 12:04:14.431039 containerd[1737]: 2026-04-21 12:04:14.381 [INFO][4581] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Apr 21 12:04:14.431039 containerd[1737]: 2026-04-21 12:04:14.381 [INFO][4581] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Apr 21 12:04:14.431039 containerd[1737]: 2026-04-21 12:04:14.413 [INFO][4611] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" HandleID="k8s-pod-network.8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Workload="ci--4081.3.7--a--a89817d5a7-k8s-whisker--68f656b4d6--v9c6d-eth0" Apr 21 12:04:14.431039 containerd[1737]: 2026-04-21 12:04:14.414 [INFO][4611] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:14.431039 containerd[1737]: 2026-04-21 12:04:14.415 [INFO][4611] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:04:14.431039 containerd[1737]: 2026-04-21 12:04:14.425 [WARNING][4611] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" HandleID="k8s-pod-network.8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Workload="ci--4081.3.7--a--a89817d5a7-k8s-whisker--68f656b4d6--v9c6d-eth0" Apr 21 12:04:14.431039 containerd[1737]: 2026-04-21 12:04:14.425 [INFO][4611] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" HandleID="k8s-pod-network.8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Workload="ci--4081.3.7--a--a89817d5a7-k8s-whisker--68f656b4d6--v9c6d-eth0" Apr 21 12:04:14.431039 containerd[1737]: 2026-04-21 12:04:14.426 [INFO][4611] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:14.431039 containerd[1737]: 2026-04-21 12:04:14.429 [INFO][4581] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Apr 21 12:04:14.433950 containerd[1737]: time="2026-04-21T12:04:14.432950189Z" level=info msg="TearDown network for sandbox \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\" successfully" Apr 21 12:04:14.433950 containerd[1737]: time="2026-04-21T12:04:14.432989689Z" level=info msg="StopPodSandbox for \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\" returns successfully" Apr 21 12:04:14.435982 systemd[1]: run-netns-cni\x2d5ebe69ac\x2dfa63\x2d0aef\x2d02cd\x2dd32ce9db8502.mount: Deactivated successfully. 
Apr 21 12:04:14.612954 kubelet[3294]: I0421 12:04:14.612311 3294 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61238341-a297-4aa5-a969-871e84b67daf-whisker-ca-bundle\") pod \"61238341-a297-4aa5-a969-871e84b67daf\" (UID: \"61238341-a297-4aa5-a969-871e84b67daf\") " Apr 21 12:04:14.612954 kubelet[3294]: I0421 12:04:14.612387 3294 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/61238341-a297-4aa5-a969-871e84b67daf-nginx-config\") pod \"61238341-a297-4aa5-a969-871e84b67daf\" (UID: \"61238341-a297-4aa5-a969-871e84b67daf\") " Apr 21 12:04:14.612954 kubelet[3294]: I0421 12:04:14.612448 3294 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61238341-a297-4aa5-a969-871e84b67daf-whisker-backend-key-pair\") pod \"61238341-a297-4aa5-a969-871e84b67daf\" (UID: \"61238341-a297-4aa5-a969-871e84b67daf\") " Apr 21 12:04:14.612954 kubelet[3294]: I0421 12:04:14.612473 3294 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvsfd\" (UniqueName: \"kubernetes.io/projected/61238341-a297-4aa5-a969-871e84b67daf-kube-api-access-fvsfd\") pod \"61238341-a297-4aa5-a969-871e84b67daf\" (UID: \"61238341-a297-4aa5-a969-871e84b67daf\") " Apr 21 12:04:14.614152 kubelet[3294]: I0421 12:04:14.612851 3294 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61238341-a297-4aa5-a969-871e84b67daf-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "61238341-a297-4aa5-a969-871e84b67daf" (UID: "61238341-a297-4aa5-a969-871e84b67daf"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 12:04:14.615259 kubelet[3294]: I0421 12:04:14.614639 3294 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61238341-a297-4aa5-a969-871e84b67daf-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "61238341-a297-4aa5-a969-871e84b67daf" (UID: "61238341-a297-4aa5-a969-871e84b67daf"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 12:04:14.617439 kubelet[3294]: I0421 12:04:14.617402 3294 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61238341-a297-4aa5-a969-871e84b67daf-kube-api-access-fvsfd" (OuterVolumeSpecName: "kube-api-access-fvsfd") pod "61238341-a297-4aa5-a969-871e84b67daf" (UID: "61238341-a297-4aa5-a969-871e84b67daf"). InnerVolumeSpecName "kube-api-access-fvsfd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 12:04:14.619439 systemd[1]: var-lib-kubelet-pods-61238341\x2da297\x2d4aa5\x2da969\x2d871e84b67daf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfvsfd.mount: Deactivated successfully. Apr 21 12:04:14.622974 kubelet[3294]: I0421 12:04:14.622933 3294 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61238341-a297-4aa5-a969-871e84b67daf-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "61238341-a297-4aa5-a969-871e84b67daf" (UID: "61238341-a297-4aa5-a969-871e84b67daf"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 12:04:14.625320 systemd[1]: var-lib-kubelet-pods-61238341\x2da297\x2d4aa5\x2da969\x2d871e84b67daf-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 21 12:04:14.713665 kubelet[3294]: I0421 12:04:14.713460 3294 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61238341-a297-4aa5-a969-871e84b67daf-whisker-ca-bundle\") on node \"ci-4081.3.7-a-a89817d5a7\" DevicePath \"\"" Apr 21 12:04:14.713665 kubelet[3294]: I0421 12:04:14.713561 3294 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/61238341-a297-4aa5-a969-871e84b67daf-nginx-config\") on node \"ci-4081.3.7-a-a89817d5a7\" DevicePath \"\"" Apr 21 12:04:14.713665 kubelet[3294]: I0421 12:04:14.713593 3294 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61238341-a297-4aa5-a969-871e84b67daf-whisker-backend-key-pair\") on node \"ci-4081.3.7-a-a89817d5a7\" DevicePath \"\"" Apr 21 12:04:14.713665 kubelet[3294]: I0421 12:04:14.713617 3294 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fvsfd\" (UniqueName: \"kubernetes.io/projected/61238341-a297-4aa5-a969-871e84b67daf-kube-api-access-fvsfd\") on node \"ci-4081.3.7-a-a89817d5a7\" DevicePath \"\"" Apr 21 12:04:15.291585 systemd[1]: Removed slice kubepods-besteffort-pod61238341_a297_4aa5_a969_871e84b67daf.slice - libcontainer container kubepods-besteffort-pod61238341_a297_4aa5_a969_871e84b67daf.slice. Apr 21 12:04:15.411031 systemd[1]: Created slice kubepods-besteffort-pod21c566ea_f455_4582_ad55_b62bd197cba6.slice - libcontainer container kubepods-besteffort-pod21c566ea_f455_4582_ad55_b62bd197cba6.slice. 
Apr 21 12:04:15.428944 kernel: calico-node[4710]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 12:04:15.520045 kubelet[3294]: I0421 12:04:15.519999 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/21c566ea-f455-4582-ad55-b62bd197cba6-whisker-backend-key-pair\") pod \"whisker-5ffdd858dc-mvxgr\" (UID: \"21c566ea-f455-4582-ad55-b62bd197cba6\") " pod="calico-system/whisker-5ffdd858dc-mvxgr" Apr 21 12:04:15.520045 kubelet[3294]: I0421 12:04:15.520051 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/21c566ea-f455-4582-ad55-b62bd197cba6-nginx-config\") pod \"whisker-5ffdd858dc-mvxgr\" (UID: \"21c566ea-f455-4582-ad55-b62bd197cba6\") " pod="calico-system/whisker-5ffdd858dc-mvxgr" Apr 21 12:04:15.520564 kubelet[3294]: I0421 12:04:15.520082 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21c566ea-f455-4582-ad55-b62bd197cba6-whisker-ca-bundle\") pod \"whisker-5ffdd858dc-mvxgr\" (UID: \"21c566ea-f455-4582-ad55-b62bd197cba6\") " pod="calico-system/whisker-5ffdd858dc-mvxgr" Apr 21 12:04:15.520564 kubelet[3294]: I0421 12:04:15.520100 3294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49x8g\" (UniqueName: \"kubernetes.io/projected/21c566ea-f455-4582-ad55-b62bd197cba6-kube-api-access-49x8g\") pod \"whisker-5ffdd858dc-mvxgr\" (UID: \"21c566ea-f455-4582-ad55-b62bd197cba6\") " pod="calico-system/whisker-5ffdd858dc-mvxgr" Apr 21 12:04:15.739264 containerd[1737]: time="2026-04-21T12:04:15.739215510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ffdd858dc-mvxgr,Uid:21c566ea-f455-4582-ad55-b62bd197cba6,Namespace:calico-system,Attempt:0,}" Apr 21 
12:04:15.966641 systemd-networkd[1353]: calia492e4a8e3a: Link UP Apr 21 12:04:15.966922 systemd-networkd[1353]: calia492e4a8e3a: Gained carrier Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.827 [INFO][4767] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--a89817d5a7-k8s-whisker--5ffdd858dc--mvxgr-eth0 whisker-5ffdd858dc- calico-system 21c566ea-f455-4582-ad55-b62bd197cba6 969 0 2026-04-21 12:04:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5ffdd858dc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.7-a-a89817d5a7 whisker-5ffdd858dc-mvxgr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia492e4a8e3a [] [] }} ContainerID="0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" Namespace="calico-system" Pod="whisker-5ffdd858dc-mvxgr" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-whisker--5ffdd858dc--mvxgr-" Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.827 [INFO][4767] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" Namespace="calico-system" Pod="whisker-5ffdd858dc-mvxgr" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-whisker--5ffdd858dc--mvxgr-eth0" Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.862 [INFO][4779] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" HandleID="k8s-pod-network.0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" Workload="ci--4081.3.7--a--a89817d5a7-k8s-whisker--5ffdd858dc--mvxgr-eth0" Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.872 [INFO][4779] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" HandleID="k8s-pod-network.0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" Workload="ci--4081.3.7--a--a89817d5a7-k8s-whisker--5ffdd858dc--mvxgr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e700), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-a89817d5a7", "pod":"whisker-5ffdd858dc-mvxgr", "timestamp":"2026-04-21 12:04:15.862261884 +0000 UTC"}, Hostname:"ci-4081.3.7-a-a89817d5a7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001f8420)} Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.872 [INFO][4779] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.872 [INFO][4779] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.872 [INFO][4779] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-a89817d5a7' Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.875 [INFO][4779] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.879 [INFO][4779] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.883 [INFO][4779] ipam/ipam.go 526: Trying affinity for 192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.885 [INFO][4779] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.887 [INFO][4779] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.887 [INFO][4779] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.889 [INFO][4779] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871 Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.894 [INFO][4779] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.905 [INFO][4779] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.37.1/26] block=192.168.37.0/26 handle="k8s-pod-network.0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.905 [INFO][4779] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.1/26] handle="k8s-pod-network.0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.905 [INFO][4779] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:15.999724 containerd[1737]: 2026-04-21 12:04:15.905 [INFO][4779] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.1/26] IPv6=[] ContainerID="0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" HandleID="k8s-pod-network.0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" Workload="ci--4081.3.7--a--a89817d5a7-k8s-whisker--5ffdd858dc--mvxgr-eth0" Apr 21 12:04:16.005109 containerd[1737]: 2026-04-21 12:04:15.907 [INFO][4767] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" Namespace="calico-system" Pod="whisker-5ffdd858dc-mvxgr" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-whisker--5ffdd858dc--mvxgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-whisker--5ffdd858dc--mvxgr-eth0", GenerateName:"whisker-5ffdd858dc-", Namespace:"calico-system", SelfLink:"", UID:"21c566ea-f455-4582-ad55-b62bd197cba6", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5ffdd858dc", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"", Pod:"whisker-5ffdd858dc-mvxgr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.37.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia492e4a8e3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:16.005109 containerd[1737]: 2026-04-21 12:04:15.908 [INFO][4767] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.1/32] ContainerID="0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" Namespace="calico-system" Pod="whisker-5ffdd858dc-mvxgr" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-whisker--5ffdd858dc--mvxgr-eth0" Apr 21 12:04:16.005109 containerd[1737]: 2026-04-21 12:04:15.908 [INFO][4767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia492e4a8e3a ContainerID="0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" Namespace="calico-system" Pod="whisker-5ffdd858dc-mvxgr" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-whisker--5ffdd858dc--mvxgr-eth0" Apr 21 12:04:16.005109 containerd[1737]: 2026-04-21 12:04:15.969 [INFO][4767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" Namespace="calico-system" Pod="whisker-5ffdd858dc-mvxgr" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-whisker--5ffdd858dc--mvxgr-eth0" Apr 21 12:04:16.005109 containerd[1737]: 2026-04-21 12:04:15.971 [INFO][4767] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" Namespace="calico-system" Pod="whisker-5ffdd858dc-mvxgr" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-whisker--5ffdd858dc--mvxgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-whisker--5ffdd858dc--mvxgr-eth0", GenerateName:"whisker-5ffdd858dc-", Namespace:"calico-system", SelfLink:"", UID:"21c566ea-f455-4582-ad55-b62bd197cba6", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5ffdd858dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871", Pod:"whisker-5ffdd858dc-mvxgr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.37.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia492e4a8e3a", MAC:"56:06:40:ab:4f:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:16.005109 containerd[1737]: 2026-04-21 12:04:15.994 [INFO][4767] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871" Namespace="calico-system" 
Pod="whisker-5ffdd858dc-mvxgr" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-whisker--5ffdd858dc--mvxgr-eth0" Apr 21 12:04:16.008662 kubelet[3294]: I0421 12:04:16.008612 3294 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61238341-a297-4aa5-a969-871e84b67daf" path="/var/lib/kubelet/pods/61238341-a297-4aa5-a969-871e84b67daf/volumes" Apr 21 12:04:16.035503 containerd[1737]: time="2026-04-21T12:04:16.035411703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:04:16.036431 containerd[1737]: time="2026-04-21T12:04:16.035482004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:04:16.036431 containerd[1737]: time="2026-04-21T12:04:16.036268118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:04:16.036431 containerd[1737]: time="2026-04-21T12:04:16.036385620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:04:16.074034 systemd[1]: Started cri-containerd-0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871.scope - libcontainer container 0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871. 
Apr 21 12:04:16.127450 containerd[1737]: time="2026-04-21T12:04:16.127405354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5ffdd858dc-mvxgr,Uid:21c566ea-f455-4582-ad55-b62bd197cba6,Namespace:calico-system,Attempt:0,} returns sandbox id \"0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871\"" Apr 21 12:04:16.129473 containerd[1737]: time="2026-04-21T12:04:16.129176584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 21 12:04:16.251815 systemd-networkd[1353]: vxlan.calico: Link UP Apr 21 12:04:16.251856 systemd-networkd[1353]: vxlan.calico: Gained carrier Apr 21 12:04:17.020149 systemd-networkd[1353]: calia492e4a8e3a: Gained IPv6LL Apr 21 12:04:17.647646 containerd[1737]: time="2026-04-21T12:04:17.647589281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:17.650686 containerd[1737]: time="2026-04-21T12:04:17.650619232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 21 12:04:17.655220 containerd[1737]: time="2026-04-21T12:04:17.655160209Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:17.662053 containerd[1737]: time="2026-04-21T12:04:17.662000924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:17.663257 containerd[1737]: time="2026-04-21T12:04:17.663211445Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.533605953s" Apr 21 12:04:17.663352 containerd[1737]: time="2026-04-21T12:04:17.663261845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 21 12:04:17.674582 containerd[1737]: time="2026-04-21T12:04:17.674541036Z" level=info msg="CreateContainer within sandbox \"0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 12:04:17.713228 containerd[1737]: time="2026-04-21T12:04:17.713178887Z" level=info msg="CreateContainer within sandbox \"0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7bd9b539beb9263be603dff26c3b8c1ab2329c05dcaf5fd1dfdc24812a29e602\"" Apr 21 12:04:17.714156 containerd[1737]: time="2026-04-21T12:04:17.714117003Z" level=info msg="StartContainer for \"7bd9b539beb9263be603dff26c3b8c1ab2329c05dcaf5fd1dfdc24812a29e602\"" Apr 21 12:04:17.726563 systemd-networkd[1353]: vxlan.calico: Gained IPv6LL Apr 21 12:04:17.765044 systemd[1]: Started cri-containerd-7bd9b539beb9263be603dff26c3b8c1ab2329c05dcaf5fd1dfdc24812a29e602.scope - libcontainer container 7bd9b539beb9263be603dff26c3b8c1ab2329c05dcaf5fd1dfdc24812a29e602. Apr 21 12:04:17.813440 containerd[1737]: time="2026-04-21T12:04:17.813384276Z" level=info msg="StartContainer for \"7bd9b539beb9263be603dff26c3b8c1ab2329c05dcaf5fd1dfdc24812a29e602\" returns successfully" Apr 21 12:04:17.816198 containerd[1737]: time="2026-04-21T12:04:17.816160623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 21 12:04:19.669021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3904743843.mount: Deactivated successfully. 
Apr 21 12:04:19.739456 containerd[1737]: time="2026-04-21T12:04:19.739399882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:19.743551 containerd[1737]: time="2026-04-21T12:04:19.743370347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 21 12:04:19.748045 containerd[1737]: time="2026-04-21T12:04:19.747266210Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:19.758415 containerd[1737]: time="2026-04-21T12:04:19.758368090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:19.759328 containerd[1737]: time="2026-04-21T12:04:19.759290205Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.943081981s" Apr 21 12:04:19.759461 containerd[1737]: time="2026-04-21T12:04:19.759440908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 21 12:04:19.768568 containerd[1737]: time="2026-04-21T12:04:19.768536856Z" level=info msg="CreateContainer within sandbox \"0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 12:04:19.808946 
containerd[1737]: time="2026-04-21T12:04:19.808898211Z" level=info msg="CreateContainer within sandbox \"0bf59860630bb2c0fd04d2fc84a5fca861445abf2286ddd5fa4c93c93df34871\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d7b0f87572fffe125897c3371172b916d39c01e9fd6a071977e92d9784bfe0b3\"" Apr 21 12:04:19.810398 containerd[1737]: time="2026-04-21T12:04:19.809643723Z" level=info msg="StartContainer for \"d7b0f87572fffe125897c3371172b916d39c01e9fd6a071977e92d9784bfe0b3\"" Apr 21 12:04:19.842994 systemd[1]: Started cri-containerd-d7b0f87572fffe125897c3371172b916d39c01e9fd6a071977e92d9784bfe0b3.scope - libcontainer container d7b0f87572fffe125897c3371172b916d39c01e9fd6a071977e92d9784bfe0b3. Apr 21 12:04:19.890929 containerd[1737]: time="2026-04-21T12:04:19.890798941Z" level=info msg="StartContainer for \"d7b0f87572fffe125897c3371172b916d39c01e9fd6a071977e92d9784bfe0b3\" returns successfully" Apr 21 12:04:23.992790 containerd[1737]: time="2026-04-21T12:04:23.992384459Z" level=info msg="StopPodSandbox for \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\"" Apr 21 12:04:23.993797 containerd[1737]: time="2026-04-21T12:04:23.993491677Z" level=info msg="StopPodSandbox for \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\"" Apr 21 12:04:24.062407 kubelet[3294]: I0421 12:04:24.062308 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5ffdd858dc-mvxgr" podStartSLOduration=5.430301942 podStartE2EDuration="9.061728886s" podCreationTimestamp="2026-04-21 12:04:15 +0000 UTC" firstStartedPulling="2026-04-21 12:04:16.128854878 +0000 UTC m=+72.253787464" lastFinishedPulling="2026-04-21 12:04:19.760281822 +0000 UTC m=+75.885214408" observedRunningTime="2026-04-21 12:04:20.311175669 +0000 UTC m=+76.436108255" watchObservedRunningTime="2026-04-21 12:04:24.061728886 +0000 UTC m=+80.186661472" Apr 21 12:04:24.126225 containerd[1737]: 2026-04-21 12:04:24.074 [INFO][5052] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Apr 21 12:04:24.126225 containerd[1737]: 2026-04-21 12:04:24.074 [INFO][5052] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" iface="eth0" netns="/var/run/netns/cni-015b912e-1e21-3fdf-0825-9c916eb863f8" Apr 21 12:04:24.126225 containerd[1737]: 2026-04-21 12:04:24.075 [INFO][5052] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" iface="eth0" netns="/var/run/netns/cni-015b912e-1e21-3fdf-0825-9c916eb863f8" Apr 21 12:04:24.126225 containerd[1737]: 2026-04-21 12:04:24.076 [INFO][5052] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" iface="eth0" netns="/var/run/netns/cni-015b912e-1e21-3fdf-0825-9c916eb863f8" Apr 21 12:04:24.126225 containerd[1737]: 2026-04-21 12:04:24.076 [INFO][5052] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Apr 21 12:04:24.126225 containerd[1737]: 2026-04-21 12:04:24.076 [INFO][5052] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Apr 21 12:04:24.126225 containerd[1737]: 2026-04-21 12:04:24.107 [INFO][5071] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" HandleID="k8s-pod-network.5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" Apr 21 12:04:24.126225 containerd[1737]: 2026-04-21 12:04:24.107 [INFO][5071] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM 
lock. Apr 21 12:04:24.126225 containerd[1737]: 2026-04-21 12:04:24.107 [INFO][5071] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:04:24.126225 containerd[1737]: 2026-04-21 12:04:24.118 [WARNING][5071] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" HandleID="k8s-pod-network.5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" Apr 21 12:04:24.126225 containerd[1737]: 2026-04-21 12:04:24.118 [INFO][5071] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" HandleID="k8s-pod-network.5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" Apr 21 12:04:24.126225 containerd[1737]: 2026-04-21 12:04:24.119 [INFO][5071] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:24.126225 containerd[1737]: 2026-04-21 12:04:24.123 [INFO][5052] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Apr 21 12:04:24.130561 containerd[1737]: time="2026-04-21T12:04:24.129947594Z" level=info msg="TearDown network for sandbox \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\" successfully" Apr 21 12:04:24.130561 containerd[1737]: time="2026-04-21T12:04:24.129998694Z" level=info msg="StopPodSandbox for \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\" returns successfully" Apr 21 12:04:24.131779 systemd[1]: run-netns-cni\x2d015b912e\x2d1e21\x2d3fdf\x2d0825\x2d9c916eb863f8.mount: Deactivated successfully. 
Apr 21 12:04:24.138649 containerd[1737]: 2026-04-21 12:04:24.061 [INFO][5053] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Apr 21 12:04:24.138649 containerd[1737]: 2026-04-21 12:04:24.063 [INFO][5053] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" iface="eth0" netns="/var/run/netns/cni-b23b7f3f-f477-3cbf-169c-c67074095925" Apr 21 12:04:24.138649 containerd[1737]: 2026-04-21 12:04:24.064 [INFO][5053] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" iface="eth0" netns="/var/run/netns/cni-b23b7f3f-f477-3cbf-169c-c67074095925" Apr 21 12:04:24.138649 containerd[1737]: 2026-04-21 12:04:24.064 [INFO][5053] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" iface="eth0" netns="/var/run/netns/cni-b23b7f3f-f477-3cbf-169c-c67074095925" Apr 21 12:04:24.138649 containerd[1737]: 2026-04-21 12:04:24.064 [INFO][5053] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Apr 21 12:04:24.138649 containerd[1737]: 2026-04-21 12:04:24.064 [INFO][5053] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Apr 21 12:04:24.138649 containerd[1737]: 2026-04-21 12:04:24.114 [INFO][5066] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" HandleID="k8s-pod-network.0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Workload="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" Apr 21 12:04:24.138649 containerd[1737]: 2026-04-21 12:04:24.114 [INFO][5066] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:24.138649 containerd[1737]: 2026-04-21 12:04:24.119 [INFO][5066] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:04:24.138649 containerd[1737]: 2026-04-21 12:04:24.130 [WARNING][5066] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" HandleID="k8s-pod-network.0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Workload="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" Apr 21 12:04:24.138649 containerd[1737]: 2026-04-21 12:04:24.130 [INFO][5066] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" HandleID="k8s-pod-network.0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Workload="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" Apr 21 12:04:24.138649 containerd[1737]: 2026-04-21 12:04:24.134 [INFO][5066] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:24.138649 containerd[1737]: 2026-04-21 12:04:24.137 [INFO][5053] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Apr 21 12:04:24.139261 containerd[1737]: time="2026-04-21T12:04:24.139041141Z" level=info msg="TearDown network for sandbox \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\" successfully" Apr 21 12:04:24.139261 containerd[1737]: time="2026-04-21T12:04:24.139070442Z" level=info msg="StopPodSandbox for \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\" returns successfully" Apr 21 12:04:24.139937 containerd[1737]: time="2026-04-21T12:04:24.139709852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76d9fbb898-hf2cc,Uid:9752a3e6-7576-4285-9a83-6bc365a16d48,Namespace:calico-system,Attempt:1,}" Apr 21 12:04:24.143984 systemd[1]: run-netns-cni\x2db23b7f3f\x2df477\x2d3cbf\x2d169c\x2dc67074095925.mount: Deactivated successfully. Apr 21 12:04:24.148804 containerd[1737]: time="2026-04-21T12:04:24.148769899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mqkhk,Uid:20e52a3b-9a02-45ad-96fe-fb214b6cbb05,Namespace:calico-system,Attempt:1,}" Apr 21 12:04:24.351981 systemd-networkd[1353]: calif7c1f4a4707: Link UP Apr 21 12:04:24.354453 systemd-networkd[1353]: calif7c1f4a4707: Gained carrier Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.270 [INFO][5085] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0 csi-node-driver- calico-system 20e52a3b-9a02-45ad-96fe-fb214b6cbb05 1007 0 2026-04-21 12:03:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.7-a-a89817d5a7 csi-node-driver-mqkhk eth0 csi-node-driver [] [] 
[kns.calico-system ksa.calico-system.csi-node-driver] calif7c1f4a4707 [] [] }} ContainerID="4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" Namespace="calico-system" Pod="csi-node-driver-mqkhk" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-" Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.270 [INFO][5085] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" Namespace="calico-system" Pod="csi-node-driver-mqkhk" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.308 [INFO][5103] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" HandleID="k8s-pod-network.4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" Workload="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.317 [INFO][5103] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" HandleID="k8s-pod-network.4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" Workload="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-a89817d5a7", "pod":"csi-node-driver-mqkhk", "timestamp":"2026-04-21 12:04:24.308219189 +0000 UTC"}, Hostname:"ci-4081.3.7-a-a89817d5a7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002a8f20)} Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.317 
[INFO][5103] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.317 [INFO][5103] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.317 [INFO][5103] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-a89817d5a7' Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.319 [INFO][5103] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.324 [INFO][5103] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.329 [INFO][5103] ipam/ipam.go 526: Trying affinity for 192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.331 [INFO][5103] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.332 [INFO][5103] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.332 [INFO][5103] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.334 [INFO][5103] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.338 [INFO][5103] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.0/26 
handle="k8s-pod-network.4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.344 [INFO][5103] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.37.2/26] block=192.168.37.0/26 handle="k8s-pod-network.4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.344 [INFO][5103] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.2/26] handle="k8s-pod-network.4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.344 [INFO][5103] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:24.382227 containerd[1737]: 2026-04-21 12:04:24.344 [INFO][5103] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.2/26] IPv6=[] ContainerID="4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" HandleID="k8s-pod-network.4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" Workload="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" Apr 21 12:04:24.384239 containerd[1737]: 2026-04-21 12:04:24.346 [INFO][5085] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" Namespace="calico-system" Pod="csi-node-driver-mqkhk" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20e52a3b-9a02-45ad-96fe-fb214b6cbb05", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 30, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"", Pod:"csi-node-driver-mqkhk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7c1f4a4707", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:24.384239 containerd[1737]: 2026-04-21 12:04:24.346 [INFO][5085] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.2/32] ContainerID="4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" Namespace="calico-system" Pod="csi-node-driver-mqkhk" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" Apr 21 12:04:24.384239 containerd[1737]: 2026-04-21 12:04:24.346 [INFO][5085] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7c1f4a4707 ContainerID="4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" Namespace="calico-system" Pod="csi-node-driver-mqkhk" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" Apr 21 12:04:24.384239 containerd[1737]: 2026-04-21 12:04:24.353 [INFO][5085] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" Namespace="calico-system" Pod="csi-node-driver-mqkhk" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" Apr 21 12:04:24.384239 containerd[1737]: 2026-04-21 12:04:24.355 [INFO][5085] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" Namespace="calico-system" Pod="csi-node-driver-mqkhk" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20e52a3b-9a02-45ad-96fe-fb214b6cbb05", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c", Pod:"csi-node-driver-mqkhk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7c1f4a4707", MAC:"ea:e7:b4:83:88:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:24.384239 containerd[1737]: 2026-04-21 12:04:24.379 [INFO][5085] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c" Namespace="calico-system" Pod="csi-node-driver-mqkhk" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" Apr 21 12:04:24.416254 containerd[1737]: time="2026-04-21T12:04:24.415949039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:04:24.416254 containerd[1737]: time="2026-04-21T12:04:24.416015040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:04:24.416254 containerd[1737]: time="2026-04-21T12:04:24.416030640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:04:24.416254 containerd[1737]: time="2026-04-21T12:04:24.416116441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:04:24.449447 systemd[1]: Started cri-containerd-4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c.scope - libcontainer container 4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c. 
Apr 21 12:04:24.482373 systemd-networkd[1353]: cali94b672693f0: Link UP Apr 21 12:04:24.482644 systemd-networkd[1353]: cali94b672693f0: Gained carrier Apr 21 12:04:24.514363 containerd[1737]: time="2026-04-21T12:04:24.513939530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mqkhk,Uid:20e52a3b-9a02-45ad-96fe-fb214b6cbb05,Namespace:calico-system,Attempt:1,} returns sandbox id \"4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c\"" Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.268 [INFO][5080] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0 calico-kube-controllers-76d9fbb898- calico-system 9752a3e6-7576-4285-9a83-6bc365a16d48 1008 0 2026-04-21 12:03:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76d9fbb898 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.7-a-a89817d5a7 calico-kube-controllers-76d9fbb898-hf2cc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali94b672693f0 [] [] }} ContainerID="7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" Namespace="calico-system" Pod="calico-kube-controllers-76d9fbb898-hf2cc" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-" Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.269 [INFO][5080] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" Namespace="calico-system" Pod="calico-kube-controllers-76d9fbb898-hf2cc" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" Apr 21 12:04:24.514976 
containerd[1737]: 2026-04-21 12:04:24.311 [INFO][5108] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" HandleID="k8s-pod-network.7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.322 [INFO][5108] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" HandleID="k8s-pod-network.7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-a89817d5a7", "pod":"calico-kube-controllers-76d9fbb898-hf2cc", "timestamp":"2026-04-21 12:04:24.311278339 +0000 UTC"}, Hostname:"ci-4081.3.7-a-a89817d5a7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000f62c0)} Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.322 [INFO][5108] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.344 [INFO][5108] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.344 [INFO][5108] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-a89817d5a7' Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.426 [INFO][5108] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.436 [INFO][5108] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.443 [INFO][5108] ipam/ipam.go 526: Trying affinity for 192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.445 [INFO][5108] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.448 [INFO][5108] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.448 [INFO][5108] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.450 [INFO][5108] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4 Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.460 [INFO][5108] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.468 [INFO][5108] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.37.3/26] block=192.168.37.0/26 handle="k8s-pod-network.7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.469 [INFO][5108] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.3/26] handle="k8s-pod-network.7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.469 [INFO][5108] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:24.514976 containerd[1737]: 2026-04-21 12:04:24.469 [INFO][5108] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.3/26] IPv6=[] ContainerID="7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" HandleID="k8s-pod-network.7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" Apr 21 12:04:24.517552 containerd[1737]: 2026-04-21 12:04:24.474 [INFO][5080] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" Namespace="calico-system" Pod="calico-kube-controllers-76d9fbb898-hf2cc" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0", GenerateName:"calico-kube-controllers-76d9fbb898-", Namespace:"calico-system", SelfLink:"", UID:"9752a3e6-7576-4285-9a83-6bc365a16d48", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"76d9fbb898", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"", Pod:"calico-kube-controllers-76d9fbb898-hf2cc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali94b672693f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:24.517552 containerd[1737]: 2026-04-21 12:04:24.474 [INFO][5080] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.3/32] ContainerID="7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" Namespace="calico-system" Pod="calico-kube-controllers-76d9fbb898-hf2cc" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" Apr 21 12:04:24.517552 containerd[1737]: 2026-04-21 12:04:24.474 [INFO][5080] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94b672693f0 ContainerID="7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" Namespace="calico-system" Pod="calico-kube-controllers-76d9fbb898-hf2cc" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" Apr 21 12:04:24.517552 containerd[1737]: 2026-04-21 12:04:24.483 [INFO][5080] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" Namespace="calico-system" 
Pod="calico-kube-controllers-76d9fbb898-hf2cc" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" Apr 21 12:04:24.517552 containerd[1737]: 2026-04-21 12:04:24.486 [INFO][5080] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" Namespace="calico-system" Pod="calico-kube-controllers-76d9fbb898-hf2cc" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0", GenerateName:"calico-kube-controllers-76d9fbb898-", Namespace:"calico-system", SelfLink:"", UID:"9752a3e6-7576-4285-9a83-6bc365a16d48", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76d9fbb898", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4", Pod:"calico-kube-controllers-76d9fbb898-hf2cc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali94b672693f0", MAC:"ca:a1:07:c2:d0:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:24.517552 containerd[1737]: 2026-04-21 12:04:24.504 [INFO][5080] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4" Namespace="calico-system" Pod="calico-kube-controllers-76d9fbb898-hf2cc" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" Apr 21 12:04:24.519593 containerd[1737]: time="2026-04-21T12:04:24.519143915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 12:04:24.558864 containerd[1737]: time="2026-04-21T12:04:24.558562455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:04:24.558864 containerd[1737]: time="2026-04-21T12:04:24.558624656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:04:24.558864 containerd[1737]: time="2026-04-21T12:04:24.558645756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:04:24.558864 containerd[1737]: time="2026-04-21T12:04:24.558738858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:04:24.582019 systemd[1]: Started cri-containerd-7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4.scope - libcontainer container 7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4. 
Apr 21 12:04:24.639123 containerd[1737]: time="2026-04-21T12:04:24.639008962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76d9fbb898-hf2cc,Uid:9752a3e6-7576-4285-9a83-6bc365a16d48,Namespace:calico-system,Attempt:1,} returns sandbox id \"7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4\"" Apr 21 12:04:24.991611 containerd[1737]: time="2026-04-21T12:04:24.991265183Z" level=info msg="StopPodSandbox for \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\"" Apr 21 12:04:25.086345 containerd[1737]: 2026-04-21 12:04:25.042 [INFO][5245] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Apr 21 12:04:25.086345 containerd[1737]: 2026-04-21 12:04:25.042 [INFO][5245] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" iface="eth0" netns="/var/run/netns/cni-f70b9640-e4cd-d324-b299-16e6577e4e12" Apr 21 12:04:25.086345 containerd[1737]: 2026-04-21 12:04:25.043 [INFO][5245] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" iface="eth0" netns="/var/run/netns/cni-f70b9640-e4cd-d324-b299-16e6577e4e12" Apr 21 12:04:25.086345 containerd[1737]: 2026-04-21 12:04:25.044 [INFO][5245] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" iface="eth0" netns="/var/run/netns/cni-f70b9640-e4cd-d324-b299-16e6577e4e12" Apr 21 12:04:25.086345 containerd[1737]: 2026-04-21 12:04:25.044 [INFO][5245] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Apr 21 12:04:25.086345 containerd[1737]: 2026-04-21 12:04:25.044 [INFO][5245] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Apr 21 12:04:25.086345 containerd[1737]: 2026-04-21 12:04:25.075 [INFO][5252] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" HandleID="k8s-pod-network.1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" Apr 21 12:04:25.086345 containerd[1737]: 2026-04-21 12:04:25.076 [INFO][5252] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:25.086345 containerd[1737]: 2026-04-21 12:04:25.076 [INFO][5252] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:04:25.086345 containerd[1737]: 2026-04-21 12:04:25.082 [WARNING][5252] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" HandleID="k8s-pod-network.1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" Apr 21 12:04:25.086345 containerd[1737]: 2026-04-21 12:04:25.082 [INFO][5252] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" HandleID="k8s-pod-network.1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" Apr 21 12:04:25.086345 containerd[1737]: 2026-04-21 12:04:25.083 [INFO][5252] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:25.086345 containerd[1737]: 2026-04-21 12:04:25.085 [INFO][5245] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Apr 21 12:04:25.088253 containerd[1737]: time="2026-04-21T12:04:25.086461429Z" level=info msg="TearDown network for sandbox \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\" successfully" Apr 21 12:04:25.088253 containerd[1737]: time="2026-04-21T12:04:25.086497530Z" level=info msg="StopPodSandbox for \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\" returns successfully" Apr 21 12:04:25.097322 containerd[1737]: time="2026-04-21T12:04:25.097280605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c79d9c885-tjxlz,Uid:22bf69eb-1b19-496c-9e8c-76911e03643c,Namespace:calico-system,Attempt:1,}" Apr 21 12:04:25.139505 systemd[1]: run-netns-cni\x2df70b9640\x2de4cd\x2dd324\x2db299\x2d16e6577e4e12.mount: Deactivated successfully. 
Apr 21 12:04:25.265809 systemd-networkd[1353]: cali22de7d230cd: Link UP Apr 21 12:04:25.267814 systemd-networkd[1353]: cali22de7d230cd: Gained carrier Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.195 [INFO][5258] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0 calico-apiserver-7c79d9c885- calico-system 22bf69eb-1b19-496c-9e8c-76911e03643c 1022 0 2026-04-21 12:03:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c79d9c885 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.7-a-a89817d5a7 calico-apiserver-7c79d9c885-tjxlz eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali22de7d230cd [] [] }} ContainerID="591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" Namespace="calico-system" Pod="calico-apiserver-7c79d9c885-tjxlz" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-" Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.196 [INFO][5258] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" Namespace="calico-system" Pod="calico-apiserver-7c79d9c885-tjxlz" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.220 [INFO][5271] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" HandleID="k8s-pod-network.591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" Apr 21 12:04:25.287666 
containerd[1737]: 2026-04-21 12:04:25.228 [INFO][5271] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" HandleID="k8s-pod-network.591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277dd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-a89817d5a7", "pod":"calico-apiserver-7c79d9c885-tjxlz", "timestamp":"2026-04-21 12:04:25.220514606 +0000 UTC"}, Hostname:"ci-4081.3.7-a-a89817d5a7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003ad600)} Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.228 [INFO][5271] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.228 [INFO][5271] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.228 [INFO][5271] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-a89817d5a7' Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.230 [INFO][5271] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.233 [INFO][5271] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.237 [INFO][5271] ipam/ipam.go 526: Trying affinity for 192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.239 [INFO][5271] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.240 [INFO][5271] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.241 [INFO][5271] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.242 [INFO][5271] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7 Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.250 [INFO][5271] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.259 [INFO][5271] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.37.4/26] block=192.168.37.0/26 handle="k8s-pod-network.591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.259 [INFO][5271] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.4/26] handle="k8s-pod-network.591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.259 [INFO][5271] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:25.287666 containerd[1737]: 2026-04-21 12:04:25.259 [INFO][5271] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.4/26] IPv6=[] ContainerID="591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" HandleID="k8s-pod-network.591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" Apr 21 12:04:25.288561 containerd[1737]: 2026-04-21 12:04:25.261 [INFO][5258] cni-plugin/k8s.go 418: Populated endpoint ContainerID="591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" Namespace="calico-system" Pod="calico-apiserver-7c79d9c885-tjxlz" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0", GenerateName:"calico-apiserver-7c79d9c885-", Namespace:"calico-system", SelfLink:"", UID:"22bf69eb-1b19-496c-9e8c-76911e03643c", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7c79d9c885", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"", Pod:"calico-apiserver-7c79d9c885-tjxlz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali22de7d230cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:25.288561 containerd[1737]: 2026-04-21 12:04:25.261 [INFO][5258] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.4/32] ContainerID="591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" Namespace="calico-system" Pod="calico-apiserver-7c79d9c885-tjxlz" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" Apr 21 12:04:25.288561 containerd[1737]: 2026-04-21 12:04:25.261 [INFO][5258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22de7d230cd ContainerID="591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" Namespace="calico-system" Pod="calico-apiserver-7c79d9c885-tjxlz" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" Apr 21 12:04:25.288561 containerd[1737]: 2026-04-21 12:04:25.264 [INFO][5258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" Namespace="calico-system" Pod="calico-apiserver-7c79d9c885-tjxlz" 
WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" Apr 21 12:04:25.288561 containerd[1737]: 2026-04-21 12:04:25.264 [INFO][5258] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" Namespace="calico-system" Pod="calico-apiserver-7c79d9c885-tjxlz" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0", GenerateName:"calico-apiserver-7c79d9c885-", Namespace:"calico-system", SelfLink:"", UID:"22bf69eb-1b19-496c-9e8c-76911e03643c", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c79d9c885", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7", Pod:"calico-apiserver-7c79d9c885-tjxlz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali22de7d230cd", MAC:"1a:a8:97:e9:0b:6e", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:25.288561 containerd[1737]: 2026-04-21 12:04:25.283 [INFO][5258] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7" Namespace="calico-system" Pod="calico-apiserver-7c79d9c885-tjxlz" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" Apr 21 12:04:25.339350 containerd[1737]: time="2026-04-21T12:04:25.339056832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:04:25.339350 containerd[1737]: time="2026-04-21T12:04:25.339120333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:04:25.339350 containerd[1737]: time="2026-04-21T12:04:25.339161734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:04:25.339350 containerd[1737]: time="2026-04-21T12:04:25.339270035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:04:25.384033 systemd[1]: Started cri-containerd-591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7.scope - libcontainer container 591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7. 
Apr 21 12:04:25.431871 containerd[1737]: time="2026-04-21T12:04:25.431798038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c79d9c885-tjxlz,Uid:22bf69eb-1b19-496c-9e8c-76911e03643c,Namespace:calico-system,Attempt:1,} returns sandbox id \"591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7\"" Apr 21 12:04:25.724994 systemd-networkd[1353]: calif7c1f4a4707: Gained IPv6LL Apr 21 12:04:25.787990 systemd-networkd[1353]: cali94b672693f0: Gained IPv6LL Apr 21 12:04:25.992849 containerd[1737]: time="2026-04-21T12:04:25.992336342Z" level=info msg="StopPodSandbox for \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\"" Apr 21 12:04:26.009535 containerd[1737]: time="2026-04-21T12:04:26.009471321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 21 12:04:26.010456 containerd[1737]: time="2026-04-21T12:04:26.010429936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:26.012663 containerd[1737]: time="2026-04-21T12:04:26.012000162Z" level=info msg="StopPodSandbox for \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\"" Apr 21 12:04:26.012663 containerd[1737]: time="2026-04-21T12:04:26.012210065Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:26.018387 containerd[1737]: time="2026-04-21T12:04:26.018350765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:26.019770 containerd[1737]: time="2026-04-21T12:04:26.019650086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id 
\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.50045117s" Apr 21 12:04:26.019770 containerd[1737]: time="2026-04-21T12:04:26.019692087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 21 12:04:26.022846 containerd[1737]: time="2026-04-21T12:04:26.021134110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 12:04:26.034966 containerd[1737]: time="2026-04-21T12:04:26.034930434Z" level=info msg="CreateContainer within sandbox \"4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 12:04:26.101764 containerd[1737]: time="2026-04-21T12:04:26.101552816Z" level=info msg="CreateContainer within sandbox \"4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"56d5743855a0c4c1340b213434d32e64179cc7f4faffa2a1887116040d33ad6d\"" Apr 21 12:04:26.103315 containerd[1737]: time="2026-04-21T12:04:26.103279344Z" level=info msg="StartContainer for \"56d5743855a0c4c1340b213434d32e64179cc7f4faffa2a1887116040d33ad6d\"" Apr 21 12:04:26.133293 systemd[1]: run-containerd-runc-k8s.io-591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7-runc.DKsJg1.mount: Deactivated successfully. Apr 21 12:04:26.188588 systemd[1]: Started cri-containerd-56d5743855a0c4c1340b213434d32e64179cc7f4faffa2a1887116040d33ad6d.scope - libcontainer container 56d5743855a0c4c1340b213434d32e64179cc7f4faffa2a1887116040d33ad6d. 
Apr 21 12:04:26.192300 containerd[1737]: 2026-04-21 12:04:26.107 [INFO][5371] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" Apr 21 12:04:26.192300 containerd[1737]: 2026-04-21 12:04:26.107 [INFO][5371] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" iface="eth0" netns="/var/run/netns/cni-a785fb2c-109b-99d9-a424-65b27731f20c" Apr 21 12:04:26.192300 containerd[1737]: 2026-04-21 12:04:26.109 [INFO][5371] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" iface="eth0" netns="/var/run/netns/cni-a785fb2c-109b-99d9-a424-65b27731f20c" Apr 21 12:04:26.192300 containerd[1737]: 2026-04-21 12:04:26.110 [INFO][5371] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" iface="eth0" netns="/var/run/netns/cni-a785fb2c-109b-99d9-a424-65b27731f20c" Apr 21 12:04:26.192300 containerd[1737]: 2026-04-21 12:04:26.110 [INFO][5371] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" Apr 21 12:04:26.192300 containerd[1737]: 2026-04-21 12:04:26.111 [INFO][5371] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" Apr 21 12:04:26.192300 containerd[1737]: 2026-04-21 12:04:26.167 [INFO][5386] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" HandleID="k8s-pod-network.c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0" Apr 21 12:04:26.192300 containerd[1737]: 2026-04-21 12:04:26.167 [INFO][5386] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:26.192300 containerd[1737]: 2026-04-21 12:04:26.168 [INFO][5386] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:04:26.192300 containerd[1737]: 2026-04-21 12:04:26.182 [WARNING][5386] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" HandleID="k8s-pod-network.c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0" Apr 21 12:04:26.192300 containerd[1737]: 2026-04-21 12:04:26.182 [INFO][5386] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" HandleID="k8s-pod-network.c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0" Apr 21 12:04:26.192300 containerd[1737]: 2026-04-21 12:04:26.185 [INFO][5386] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:26.192300 containerd[1737]: 2026-04-21 12:04:26.190 [INFO][5371] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" Apr 21 12:04:26.194850 containerd[1737]: time="2026-04-21T12:04:26.192915500Z" level=info msg="TearDown network for sandbox \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\" successfully" Apr 21 12:04:26.194850 containerd[1737]: time="2026-04-21T12:04:26.192975401Z" level=info msg="StopPodSandbox for \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\" returns successfully" Apr 21 12:04:26.198927 systemd[1]: run-netns-cni\x2da785fb2c\x2d109b\x2d99d9\x2da424\x2d65b27731f20c.mount: Deactivated successfully. 
Apr 21 12:04:26.203712 containerd[1737]: time="2026-04-21T12:04:26.203678075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g54k5,Uid:e240fd29-afe5-4e92-98bd-6ce65bc08a12,Namespace:kube-system,Attempt:1,}" Apr 21 12:04:26.211812 containerd[1737]: 2026-04-21 12:04:26.106 [INFO][5370] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Apr 21 12:04:26.211812 containerd[1737]: 2026-04-21 12:04:26.106 [INFO][5370] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" iface="eth0" netns="/var/run/netns/cni-ff9d9bbd-2896-f9ac-06e5-f4d9929dd2eb" Apr 21 12:04:26.211812 containerd[1737]: 2026-04-21 12:04:26.106 [INFO][5370] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" iface="eth0" netns="/var/run/netns/cni-ff9d9bbd-2896-f9ac-06e5-f4d9929dd2eb" Apr 21 12:04:26.211812 containerd[1737]: 2026-04-21 12:04:26.106 [INFO][5370] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" iface="eth0" netns="/var/run/netns/cni-ff9d9bbd-2896-f9ac-06e5-f4d9929dd2eb" Apr 21 12:04:26.211812 containerd[1737]: 2026-04-21 12:04:26.106 [INFO][5370] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Apr 21 12:04:26.211812 containerd[1737]: 2026-04-21 12:04:26.106 [INFO][5370] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Apr 21 12:04:26.211812 containerd[1737]: 2026-04-21 12:04:26.191 [INFO][5384] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" HandleID="k8s-pod-network.6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" Apr 21 12:04:26.211812 containerd[1737]: 2026-04-21 12:04:26.191 [INFO][5384] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:26.211812 containerd[1737]: 2026-04-21 12:04:26.191 [INFO][5384] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:04:26.211812 containerd[1737]: 2026-04-21 12:04:26.206 [WARNING][5384] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" HandleID="k8s-pod-network.6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" Apr 21 12:04:26.211812 containerd[1737]: 2026-04-21 12:04:26.206 [INFO][5384] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" HandleID="k8s-pod-network.6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" Apr 21 12:04:26.211812 containerd[1737]: 2026-04-21 12:04:26.208 [INFO][5384] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:26.211812 containerd[1737]: 2026-04-21 12:04:26.209 [INFO][5370] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Apr 21 12:04:26.214083 containerd[1737]: time="2026-04-21T12:04:26.212797523Z" level=info msg="TearDown network for sandbox \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\" successfully" Apr 21 12:04:26.214083 containerd[1737]: time="2026-04-21T12:04:26.212872524Z" level=info msg="StopPodSandbox for \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\" returns successfully" Apr 21 12:04:26.217610 systemd[1]: run-netns-cni\x2dff9d9bbd\x2d2896\x2df9ac\x2d06e5\x2df4d9929dd2eb.mount: Deactivated successfully. 
Apr 21 12:04:26.222151 containerd[1737]: time="2026-04-21T12:04:26.222049773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c79d9c885-h4f6q,Uid:4c031d9c-a941-4ee9-ab73-567a7398ad1c,Namespace:calico-system,Attempt:1,}" Apr 21 12:04:26.257986 containerd[1737]: time="2026-04-21T12:04:26.256879539Z" level=info msg="StartContainer for \"56d5743855a0c4c1340b213434d32e64179cc7f4faffa2a1887116040d33ad6d\" returns successfully" Apr 21 12:04:26.419520 systemd-networkd[1353]: cali22f6e0c37cd: Link UP Apr 21 12:04:26.421045 systemd-networkd[1353]: cali22f6e0c37cd: Gained carrier Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.322 [INFO][5433] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0 coredns-66bc5c9577- kube-system e240fd29-afe5-4e92-98bd-6ce65bc08a12 1035 0 2026-04-21 12:03:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.7-a-a89817d5a7 coredns-66bc5c9577-g54k5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali22f6e0c37cd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" Namespace="kube-system" Pod="coredns-66bc5c9577-g54k5" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-" Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.323 [INFO][5433] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" Namespace="kube-system" Pod="coredns-66bc5c9577-g54k5" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0" Apr 21 12:04:26.446157 
containerd[1737]: 2026-04-21 12:04:26.360 [INFO][5454] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" HandleID="k8s-pod-network.4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0" Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.370 [INFO][5454] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" HandleID="k8s-pod-network.4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.7-a-a89817d5a7", "pod":"coredns-66bc5c9577-g54k5", "timestamp":"2026-04-21 12:04:26.360208717 +0000 UTC"}, Hostname:"ci-4081.3.7-a-a89817d5a7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00033b600)} Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.370 [INFO][5454] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.370 [INFO][5454] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.370 [INFO][5454] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-a89817d5a7' Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.373 [INFO][5454] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.379 [INFO][5454] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.385 [INFO][5454] ipam/ipam.go 526: Trying affinity for 192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.388 [INFO][5454] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.391 [INFO][5454] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.391 [INFO][5454] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.393 [INFO][5454] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43 Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.402 [INFO][5454] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.409 [INFO][5454] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.37.5/26] block=192.168.37.0/26 handle="k8s-pod-network.4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.409 [INFO][5454] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.5/26] handle="k8s-pod-network.4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.409 [INFO][5454] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:26.446157 containerd[1737]: 2026-04-21 12:04:26.409 [INFO][5454] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.5/26] IPv6=[] ContainerID="4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" HandleID="k8s-pod-network.4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0" Apr 21 12:04:26.448993 containerd[1737]: 2026-04-21 12:04:26.414 [INFO][5433] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" Namespace="kube-system" Pod="coredns-66bc5c9577-g54k5" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e240fd29-afe5-4e92-98bd-6ce65bc08a12", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"", Pod:"coredns-66bc5c9577-g54k5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22f6e0c37cd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:26.448993 containerd[1737]: 2026-04-21 12:04:26.414 [INFO][5433] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.5/32] ContainerID="4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" Namespace="kube-system" Pod="coredns-66bc5c9577-g54k5" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0" Apr 21 12:04:26.448993 containerd[1737]: 2026-04-21 12:04:26.414 [INFO][5433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22f6e0c37cd 
ContainerID="4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" Namespace="kube-system" Pod="coredns-66bc5c9577-g54k5" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0" Apr 21 12:04:26.448993 containerd[1737]: 2026-04-21 12:04:26.422 [INFO][5433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" Namespace="kube-system" Pod="coredns-66bc5c9577-g54k5" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0" Apr 21 12:04:26.448993 containerd[1737]: 2026-04-21 12:04:26.423 [INFO][5433] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" Namespace="kube-system" Pod="coredns-66bc5c9577-g54k5" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e240fd29-afe5-4e92-98bd-6ce65bc08a12", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43", 
Pod:"coredns-66bc5c9577-g54k5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22f6e0c37cd", MAC:"12:c6:be:69:50:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:26.449365 containerd[1737]: 2026-04-21 12:04:26.443 [INFO][5433] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43" Namespace="kube-system" Pod="coredns-66bc5c9577-g54k5" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0" Apr 21 12:04:26.501419 containerd[1737]: time="2026-04-21T12:04:26.496772935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:04:26.501419 containerd[1737]: time="2026-04-21T12:04:26.496898237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:04:26.501419 containerd[1737]: time="2026-04-21T12:04:26.496917838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:04:26.501419 containerd[1737]: time="2026-04-21T12:04:26.497090541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:04:26.529040 systemd[1]: Started cri-containerd-4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43.scope - libcontainer container 4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43. Apr 21 12:04:26.545754 systemd-networkd[1353]: cali35bf80a7a8a: Link UP Apr 21 12:04:26.548106 systemd-networkd[1353]: cali35bf80a7a8a: Gained carrier Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.373 [INFO][5445] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0 calico-apiserver-7c79d9c885- calico-system 4c031d9c-a941-4ee9-ab73-567a7398ad1c 1034 0 2026-04-21 12:03:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c79d9c885 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.7-a-a89817d5a7 calico-apiserver-7c79d9c885-h4f6q eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali35bf80a7a8a [] [] }} ContainerID="1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" Namespace="calico-system" Pod="calico-apiserver-7c79d9c885-h4f6q" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-" Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.373 [INFO][5445] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" Namespace="calico-system" Pod="calico-apiserver-7c79d9c885-h4f6q" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.413 [INFO][5465] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" HandleID="k8s-pod-network.1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.427 [INFO][5465] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" HandleID="k8s-pod-network.1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036f870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-a89817d5a7", "pod":"calico-apiserver-7c79d9c885-h4f6q", "timestamp":"2026-04-21 12:04:26.413638385 +0000 UTC"}, Hostname:"ci-4081.3.7-a-a89817d5a7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003b6f20)} Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.427 [INFO][5465] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.428 [INFO][5465] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.428 [INFO][5465] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-a89817d5a7' Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.475 [INFO][5465] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.485 [INFO][5465] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.494 [INFO][5465] ipam/ipam.go 526: Trying affinity for 192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.499 [INFO][5465] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.504 [INFO][5465] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.504 [INFO][5465] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.505 [INFO][5465] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.511 [INFO][5465] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.526 [INFO][5465] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.37.6/26] block=192.168.37.0/26 handle="k8s-pod-network.1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.526 [INFO][5465] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.6/26] handle="k8s-pod-network.1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.527 [INFO][5465] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:26.582053 containerd[1737]: 2026-04-21 12:04:26.527 [INFO][5465] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.6/26] IPv6=[] ContainerID="1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" HandleID="k8s-pod-network.1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" Apr 21 12:04:26.583108 containerd[1737]: 2026-04-21 12:04:26.536 [INFO][5445] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" Namespace="calico-system" Pod="calico-apiserver-7c79d9c885-h4f6q" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0", GenerateName:"calico-apiserver-7c79d9c885-", Namespace:"calico-system", SelfLink:"", UID:"4c031d9c-a941-4ee9-ab73-567a7398ad1c", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7c79d9c885", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"", Pod:"calico-apiserver-7c79d9c885-h4f6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali35bf80a7a8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:26.583108 containerd[1737]: 2026-04-21 12:04:26.537 [INFO][5445] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.6/32] ContainerID="1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" Namespace="calico-system" Pod="calico-apiserver-7c79d9c885-h4f6q" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" Apr 21 12:04:26.583108 containerd[1737]: 2026-04-21 12:04:26.538 [INFO][5445] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35bf80a7a8a ContainerID="1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" Namespace="calico-system" Pod="calico-apiserver-7c79d9c885-h4f6q" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" Apr 21 12:04:26.583108 containerd[1737]: 2026-04-21 12:04:26.548 [INFO][5445] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" Namespace="calico-system" Pod="calico-apiserver-7c79d9c885-h4f6q" 
WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" Apr 21 12:04:26.583108 containerd[1737]: 2026-04-21 12:04:26.548 [INFO][5445] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" Namespace="calico-system" Pod="calico-apiserver-7c79d9c885-h4f6q" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0", GenerateName:"calico-apiserver-7c79d9c885-", Namespace:"calico-system", SelfLink:"", UID:"4c031d9c-a941-4ee9-ab73-567a7398ad1c", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c79d9c885", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae", Pod:"calico-apiserver-7c79d9c885-h4f6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali35bf80a7a8a", MAC:"7e:6f:66:b7:45:1a", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:26.583108 containerd[1737]: 2026-04-21 12:04:26.578 [INFO][5445] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae" Namespace="calico-system" Pod="calico-apiserver-7c79d9c885-h4f6q" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" Apr 21 12:04:26.628050 containerd[1737]: time="2026-04-21T12:04:26.627731462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g54k5,Uid:e240fd29-afe5-4e92-98bd-6ce65bc08a12,Namespace:kube-system,Attempt:1,} returns sandbox id \"4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43\"" Apr 21 12:04:26.635536 containerd[1737]: time="2026-04-21T12:04:26.635438988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:04:26.636544 containerd[1737]: time="2026-04-21T12:04:26.636306102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:04:26.636544 containerd[1737]: time="2026-04-21T12:04:26.636332402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:04:26.636544 containerd[1737]: time="2026-04-21T12:04:26.636418004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:04:26.644869 containerd[1737]: time="2026-04-21T12:04:26.641627488Z" level=info msg="CreateContainer within sandbox \"4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 12:04:26.669327 systemd[1]: Started cri-containerd-1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae.scope - libcontainer container 1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae. Apr 21 12:04:26.705490 containerd[1737]: time="2026-04-21T12:04:26.705445025Z" level=info msg="CreateContainer within sandbox \"4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cbb6db196c034e554206606a9022b5bd459f7f1ea9cb412a8d7236d4e318dcbd\"" Apr 21 12:04:26.706930 containerd[1737]: time="2026-04-21T12:04:26.706725345Z" level=info msg="StartContainer for \"cbb6db196c034e554206606a9022b5bd459f7f1ea9cb412a8d7236d4e318dcbd\"" Apr 21 12:04:26.775250 systemd[1]: Started cri-containerd-cbb6db196c034e554206606a9022b5bd459f7f1ea9cb412a8d7236d4e318dcbd.scope - libcontainer container cbb6db196c034e554206606a9022b5bd459f7f1ea9cb412a8d7236d4e318dcbd. 
Apr 21 12:04:26.783928 containerd[1737]: time="2026-04-21T12:04:26.782856382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c79d9c885-h4f6q,Uid:4c031d9c-a941-4ee9-ab73-567a7398ad1c,Namespace:calico-system,Attempt:1,} returns sandbox id \"1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae\"" Apr 21 12:04:26.817942 containerd[1737]: time="2026-04-21T12:04:26.817791349Z" level=info msg="StartContainer for \"cbb6db196c034e554206606a9022b5bd459f7f1ea9cb412a8d7236d4e318dcbd\" returns successfully" Apr 21 12:04:27.004108 systemd-networkd[1353]: cali22de7d230cd: Gained IPv6LL Apr 21 12:04:27.348848 kubelet[3294]: I0421 12:04:27.346836 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-g54k5" podStartSLOduration=78.346803342 podStartE2EDuration="1m18.346803342s" podCreationTimestamp="2026-04-21 12:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 12:04:27.346522437 +0000 UTC m=+83.471454923" watchObservedRunningTime="2026-04-21 12:04:27.346803342 +0000 UTC m=+83.471735828" Apr 21 12:04:27.708949 systemd-networkd[1353]: cali35bf80a7a8a: Gained IPv6LL Apr 21 12:04:27.992445 containerd[1737]: time="2026-04-21T12:04:27.991263181Z" level=info msg="StopPodSandbox for \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\"" Apr 21 12:04:28.029561 systemd-networkd[1353]: cali22f6e0c37cd: Gained IPv6LL Apr 21 12:04:28.092515 containerd[1737]: 2026-04-21 12:04:28.055 [INFO][5654] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Apr 21 12:04:28.092515 containerd[1737]: 2026-04-21 12:04:28.055 [INFO][5654] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" iface="eth0" netns="/var/run/netns/cni-49a97046-dd64-1738-9b6f-2b1511a1ace7" Apr 21 12:04:28.092515 containerd[1737]: 2026-04-21 12:04:28.056 [INFO][5654] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" iface="eth0" netns="/var/run/netns/cni-49a97046-dd64-1738-9b6f-2b1511a1ace7" Apr 21 12:04:28.092515 containerd[1737]: 2026-04-21 12:04:28.056 [INFO][5654] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" iface="eth0" netns="/var/run/netns/cni-49a97046-dd64-1738-9b6f-2b1511a1ace7" Apr 21 12:04:28.092515 containerd[1737]: 2026-04-21 12:04:28.056 [INFO][5654] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Apr 21 12:04:28.092515 containerd[1737]: 2026-04-21 12:04:28.056 [INFO][5654] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Apr 21 12:04:28.092515 containerd[1737]: 2026-04-21 12:04:28.080 [INFO][5665] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" HandleID="k8s-pod-network.6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Workload="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0" Apr 21 12:04:28.092515 containerd[1737]: 2026-04-21 12:04:28.080 [INFO][5665] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:28.092515 containerd[1737]: 2026-04-21 12:04:28.080 [INFO][5665] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 12:04:28.092515 containerd[1737]: 2026-04-21 12:04:28.087 [WARNING][5665] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" HandleID="k8s-pod-network.6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Workload="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0" Apr 21 12:04:28.092515 containerd[1737]: 2026-04-21 12:04:28.087 [INFO][5665] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" HandleID="k8s-pod-network.6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Workload="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0" Apr 21 12:04:28.092515 containerd[1737]: 2026-04-21 12:04:28.088 [INFO][5665] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:28.092515 containerd[1737]: 2026-04-21 12:04:28.090 [INFO][5654] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Apr 21 12:04:28.095121 containerd[1737]: time="2026-04-21T12:04:28.094913587Z" level=info msg="TearDown network for sandbox \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\" successfully" Apr 21 12:04:28.095121 containerd[1737]: time="2026-04-21T12:04:28.094992188Z" level=info msg="StopPodSandbox for \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\" returns successfully" Apr 21 12:04:28.098285 systemd[1]: run-netns-cni\x2d49a97046\x2ddd64\x2d1738\x2d9b6f\x2d2b1511a1ace7.mount: Deactivated successfully. 
Apr 21 12:04:28.422712 containerd[1737]: time="2026-04-21T12:04:28.422597878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-bzdfb,Uid:8f4b8722-a01d-4ba7-afd8-a88d111a2e76,Namespace:calico-system,Attempt:1,}" Apr 21 12:04:28.661412 systemd-networkd[1353]: calid827d556f7c: Link UP Apr 21 12:04:28.664477 systemd-networkd[1353]: calid827d556f7c: Gained carrier Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.553 [INFO][5678] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0 goldmane-cccfbd5cf- calico-system 8f4b8722-a01d-4ba7-afd8-a88d111a2e76 1060 0 2026-04-21 12:03:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.7-a-a89817d5a7 goldmane-cccfbd5cf-bzdfb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid827d556f7c [] [] }} ContainerID="43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" Namespace="calico-system" Pod="goldmane-cccfbd5cf-bzdfb" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-" Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.553 [INFO][5678] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" Namespace="calico-system" Pod="goldmane-cccfbd5cf-bzdfb" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0" Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.599 [INFO][5691] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" HandleID="k8s-pod-network.43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" 
Workload="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0" Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.608 [INFO][5691] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" HandleID="k8s-pod-network.43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" Workload="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e1e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-a89817d5a7", "pod":"goldmane-cccfbd5cf-bzdfb", "timestamp":"2026-04-21 12:04:28.599780193 +0000 UTC"}, Hostname:"ci-4081.3.7-a-a89817d5a7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001886e0)} Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.608 [INFO][5691] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.608 [INFO][5691] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.609 [INFO][5691] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-a89817d5a7' Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.612 [INFO][5691] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.618 [INFO][5691] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.625 [INFO][5691] ipam/ipam.go 526: Trying affinity for 192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.627 [INFO][5691] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.630 [INFO][5691] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.630 [INFO][5691] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.632 [INFO][5691] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3 Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.639 [INFO][5691] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.652 [INFO][5691] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.37.7/26] block=192.168.37.0/26 handle="k8s-pod-network.43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.652 [INFO][5691] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.7/26] handle="k8s-pod-network.43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.652 [INFO][5691] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:28.699476 containerd[1737]: 2026-04-21 12:04:28.652 [INFO][5691] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.7/26] IPv6=[] ContainerID="43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" HandleID="k8s-pod-network.43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" Workload="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0" Apr 21 12:04:28.700427 containerd[1737]: 2026-04-21 12:04:28.655 [INFO][5678] cni-plugin/k8s.go 418: Populated endpoint ContainerID="43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" Namespace="calico-system" Pod="goldmane-cccfbd5cf-bzdfb" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8f4b8722-a01d-4ba7-afd8-a88d111a2e76", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"", Pod:"goldmane-cccfbd5cf-bzdfb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid827d556f7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:28.700427 containerd[1737]: 2026-04-21 12:04:28.655 [INFO][5678] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.7/32] ContainerID="43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" Namespace="calico-system" Pod="goldmane-cccfbd5cf-bzdfb" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0" Apr 21 12:04:28.700427 containerd[1737]: 2026-04-21 12:04:28.655 [INFO][5678] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid827d556f7c ContainerID="43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" Namespace="calico-system" Pod="goldmane-cccfbd5cf-bzdfb" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0" Apr 21 12:04:28.700427 containerd[1737]: 2026-04-21 12:04:28.667 [INFO][5678] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" Namespace="calico-system" Pod="goldmane-cccfbd5cf-bzdfb" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0" Apr 21 12:04:28.700427 containerd[1737]: 2026-04-21 12:04:28.668 [INFO][5678] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" Namespace="calico-system" Pod="goldmane-cccfbd5cf-bzdfb" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8f4b8722-a01d-4ba7-afd8-a88d111a2e76", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3", Pod:"goldmane-cccfbd5cf-bzdfb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid827d556f7c", MAC:"3a:22:d7:0b:c9:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:28.700427 containerd[1737]: 2026-04-21 12:04:28.692 [INFO][5678] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3" Namespace="calico-system" 
Pod="goldmane-cccfbd5cf-bzdfb" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0" Apr 21 12:04:28.762162 containerd[1737]: time="2026-04-21T12:04:28.761051747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:04:28.762162 containerd[1737]: time="2026-04-21T12:04:28.761125548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:04:28.762162 containerd[1737]: time="2026-04-21T12:04:28.761162648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:04:28.762162 containerd[1737]: time="2026-04-21T12:04:28.761415853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:04:28.806166 systemd[1]: Started cri-containerd-43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3.scope - libcontainer container 43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3. 
Apr 21 12:04:28.895794 containerd[1737]: time="2026-04-21T12:04:28.895718362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-bzdfb,Uid:8f4b8722-a01d-4ba7-afd8-a88d111a2e76,Namespace:calico-system,Attempt:1,} returns sandbox id \"43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3\"" Apr 21 12:04:28.990968 containerd[1737]: time="2026-04-21T12:04:28.990896428Z" level=info msg="StopPodSandbox for \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\"" Apr 21 12:04:29.147330 containerd[1737]: 2026-04-21 12:04:29.075 [INFO][5766] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Apr 21 12:04:29.147330 containerd[1737]: 2026-04-21 12:04:29.075 [INFO][5766] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" iface="eth0" netns="/var/run/netns/cni-1d5a112d-bcdf-e28b-10a0-01d3efd56ec0" Apr 21 12:04:29.147330 containerd[1737]: 2026-04-21 12:04:29.075 [INFO][5766] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" iface="eth0" netns="/var/run/netns/cni-1d5a112d-bcdf-e28b-10a0-01d3efd56ec0" Apr 21 12:04:29.147330 containerd[1737]: 2026-04-21 12:04:29.076 [INFO][5766] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" iface="eth0" netns="/var/run/netns/cni-1d5a112d-bcdf-e28b-10a0-01d3efd56ec0" Apr 21 12:04:29.147330 containerd[1737]: 2026-04-21 12:04:29.076 [INFO][5766] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Apr 21 12:04:29.147330 containerd[1737]: 2026-04-21 12:04:29.076 [INFO][5766] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Apr 21 12:04:29.147330 containerd[1737]: 2026-04-21 12:04:29.128 [INFO][5774] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" HandleID="k8s-pod-network.86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" Apr 21 12:04:29.147330 containerd[1737]: 2026-04-21 12:04:29.129 [INFO][5774] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:29.147330 containerd[1737]: 2026-04-21 12:04:29.129 [INFO][5774] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:04:29.147330 containerd[1737]: 2026-04-21 12:04:29.139 [WARNING][5774] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" HandleID="k8s-pod-network.86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" Apr 21 12:04:29.147330 containerd[1737]: 2026-04-21 12:04:29.140 [INFO][5774] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" HandleID="k8s-pod-network.86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" Apr 21 12:04:29.147330 containerd[1737]: 2026-04-21 12:04:29.142 [INFO][5774] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:29.147330 containerd[1737]: 2026-04-21 12:04:29.144 [INFO][5766] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Apr 21 12:04:29.153999 containerd[1737]: time="2026-04-21T12:04:29.151193366Z" level=info msg="TearDown network for sandbox \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\" successfully" Apr 21 12:04:29.153999 containerd[1737]: time="2026-04-21T12:04:29.151230266Z" level=info msg="StopPodSandbox for \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\" returns successfully" Apr 21 12:04:29.155234 systemd[1]: run-netns-cni\x2d1d5a112d\x2dbcdf\x2de28b\x2d10a0\x2d01d3efd56ec0.mount: Deactivated successfully. 
Apr 21 12:04:29.161001 containerd[1737]: time="2026-04-21T12:04:29.160964626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4bsq7,Uid:4894f806-fc13-40ed-9637-207c1d2b6fe3,Namespace:kube-system,Attempt:1,}" Apr 21 12:04:29.381684 systemd-networkd[1353]: calic91903a191e: Link UP Apr 21 12:04:29.382100 systemd-networkd[1353]: calic91903a191e: Gained carrier Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.272 [INFO][5781] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0 coredns-66bc5c9577- kube-system 4894f806-fc13-40ed-9637-207c1d2b6fe3 1067 0 2026-04-21 12:03:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.7-a-a89817d5a7 coredns-66bc5c9577-4bsq7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic91903a191e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" Namespace="kube-system" Pod="coredns-66bc5c9577-4bsq7" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-" Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.272 [INFO][5781] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" Namespace="kube-system" Pod="coredns-66bc5c9577-4bsq7" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.316 [INFO][5793] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" 
HandleID="k8s-pod-network.60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.325 [INFO][5793] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" HandleID="k8s-pod-network.60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003657b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.7-a-a89817d5a7", "pod":"coredns-66bc5c9577-4bsq7", "timestamp":"2026-04-21 12:04:29.316292882 +0000 UTC"}, Hostname:"ci-4081.3.7-a-a89817d5a7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000264f20)} Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.326 [INFO][5793] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.326 [INFO][5793] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.326 [INFO][5793] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-a89817d5a7' Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.328 [INFO][5793] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.333 [INFO][5793] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.339 [INFO][5793] ipam/ipam.go 526: Trying affinity for 192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.342 [INFO][5793] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.344 [INFO][5793] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.344 [INFO][5793] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.346 [INFO][5793] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89 Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.358 [INFO][5793] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.372 [INFO][5793] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.37.8/26] block=192.168.37.0/26 handle="k8s-pod-network.60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.372 [INFO][5793] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.8/26] handle="k8s-pod-network.60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" host="ci-4081.3.7-a-a89817d5a7" Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.372 [INFO][5793] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:04:29.415375 containerd[1737]: 2026-04-21 12:04:29.372 [INFO][5793] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.8/26] IPv6=[] ContainerID="60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" HandleID="k8s-pod-network.60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" Apr 21 12:04:29.416346 containerd[1737]: 2026-04-21 12:04:29.376 [INFO][5781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" Namespace="kube-system" Pod="coredns-66bc5c9577-4bsq7" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4894f806-fc13-40ed-9637-207c1d2b6fe3", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"", Pod:"coredns-66bc5c9577-4bsq7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic91903a191e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:29.416346 containerd[1737]: 2026-04-21 12:04:29.376 [INFO][5781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.8/32] ContainerID="60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" Namespace="kube-system" Pod="coredns-66bc5c9577-4bsq7" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" Apr 21 12:04:29.416346 containerd[1737]: 2026-04-21 12:04:29.376 [INFO][5781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic91903a191e 
ContainerID="60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" Namespace="kube-system" Pod="coredns-66bc5c9577-4bsq7" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" Apr 21 12:04:29.416346 containerd[1737]: 2026-04-21 12:04:29.386 [INFO][5781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" Namespace="kube-system" Pod="coredns-66bc5c9577-4bsq7" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" Apr 21 12:04:29.416346 containerd[1737]: 2026-04-21 12:04:29.387 [INFO][5781] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" Namespace="kube-system" Pod="coredns-66bc5c9577-4bsq7" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4894f806-fc13-40ed-9637-207c1d2b6fe3", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89", 
Pod:"coredns-66bc5c9577-4bsq7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic91903a191e", MAC:"12:71:79:19:9d:4c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:04:29.416633 containerd[1737]: 2026-04-21 12:04:29.409 [INFO][5781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89" Namespace="kube-system" Pod="coredns-66bc5c9577-4bsq7" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" Apr 21 12:04:29.492520 containerd[1737]: time="2026-04-21T12:04:29.488425014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 12:04:29.492520 containerd[1737]: time="2026-04-21T12:04:29.488503715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 12:04:29.492520 containerd[1737]: time="2026-04-21T12:04:29.488524816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:04:29.492520 containerd[1737]: time="2026-04-21T12:04:29.488613417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 12:04:29.535038 systemd[1]: Started cri-containerd-60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89.scope - libcontainer container 60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89. Apr 21 12:04:29.619069 containerd[1737]: time="2026-04-21T12:04:29.619013563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4bsq7,Uid:4894f806-fc13-40ed-9637-207c1d2b6fe3,Namespace:kube-system,Attempt:1,} returns sandbox id \"60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89\"" Apr 21 12:04:29.633990 containerd[1737]: time="2026-04-21T12:04:29.633867707Z" level=info msg="CreateContainer within sandbox \"60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 12:04:29.683174 containerd[1737]: time="2026-04-21T12:04:29.683132518Z" level=info msg="CreateContainer within sandbox \"60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b64bf01b670887b41f1b737be3ac09e933891c67efe5d76ea000c6391618ebd6\"" Apr 21 12:04:29.684847 containerd[1737]: time="2026-04-21T12:04:29.684133334Z" level=info msg="StartContainer for \"b64bf01b670887b41f1b737be3ac09e933891c67efe5d76ea000c6391618ebd6\"" Apr 21 12:04:29.720008 systemd[1]: Started cri-containerd-b64bf01b670887b41f1b737be3ac09e933891c67efe5d76ea000c6391618ebd6.scope - libcontainer container 
b64bf01b670887b41f1b737be3ac09e933891c67efe5d76ea000c6391618ebd6. Apr 21 12:04:29.761921 containerd[1737]: time="2026-04-21T12:04:29.761881413Z" level=info msg="StartContainer for \"b64bf01b670887b41f1b737be3ac09e933891c67efe5d76ea000c6391618ebd6\" returns successfully" Apr 21 12:04:30.053746 containerd[1737]: time="2026-04-21T12:04:30.053691714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:30.061318 containerd[1737]: time="2026-04-21T12:04:30.061248339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 12:04:30.067484 containerd[1737]: time="2026-04-21T12:04:30.067398240Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:30.072853 containerd[1737]: time="2026-04-21T12:04:30.072435623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:30.073301 containerd[1737]: time="2026-04-21T12:04:30.073148034Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.051975123s" Apr 21 12:04:30.073301 containerd[1737]: time="2026-04-21T12:04:30.073188535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 
12:04:30.075873 containerd[1737]: time="2026-04-21T12:04:30.074700660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 12:04:30.094501 containerd[1737]: time="2026-04-21T12:04:30.094441985Z" level=info msg="CreateContainer within sandbox \"7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 12:04:30.131789 containerd[1737]: time="2026-04-21T12:04:30.131745999Z" level=info msg="CreateContainer within sandbox \"7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2f89cc909660576faf17e166f727e41c52e377ed8041fff9e87c9121df0f2980\"" Apr 21 12:04:30.133594 containerd[1737]: time="2026-04-21T12:04:30.132315808Z" level=info msg="StartContainer for \"2f89cc909660576faf17e166f727e41c52e377ed8041fff9e87c9121df0f2980\"" Apr 21 12:04:30.160994 systemd[1]: Started cri-containerd-2f89cc909660576faf17e166f727e41c52e377ed8041fff9e87c9121df0f2980.scope - libcontainer container 2f89cc909660576faf17e166f727e41c52e377ed8041fff9e87c9121df0f2980. 
Apr 21 12:04:30.207620 containerd[1737]: time="2026-04-21T12:04:30.207580646Z" level=info msg="StartContainer for \"2f89cc909660576faf17e166f727e41c52e377ed8041fff9e87c9121df0f2980\" returns successfully" Apr 21 12:04:30.386326 kubelet[3294]: I0421 12:04:30.385688 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76d9fbb898-hf2cc" podStartSLOduration=54.952116013 podStartE2EDuration="1m0.385667976s" podCreationTimestamp="2026-04-21 12:03:30 +0000 UTC" firstStartedPulling="2026-04-21 12:04:24.640916593 +0000 UTC m=+80.765849079" lastFinishedPulling="2026-04-21 12:04:30.074468456 +0000 UTC m=+86.199401042" observedRunningTime="2026-04-21 12:04:30.364235324 +0000 UTC m=+86.489167910" watchObservedRunningTime="2026-04-21 12:04:30.385667976 +0000 UTC m=+86.510600762" Apr 21 12:04:30.442729 kubelet[3294]: I0421 12:04:30.442649 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4bsq7" podStartSLOduration=81.442619213 podStartE2EDuration="1m21.442619213s" podCreationTimestamp="2026-04-21 12:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 12:04:30.385374171 +0000 UTC m=+86.510306757" watchObservedRunningTime="2026-04-21 12:04:30.442619213 +0000 UTC m=+86.567551799" Apr 21 12:04:30.494645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount954762746.mount: Deactivated successfully. 
Apr 21 12:04:30.588141 systemd-networkd[1353]: calid827d556f7c: Gained IPv6LL Apr 21 12:04:30.588898 systemd-networkd[1353]: calic91903a191e: Gained IPv6LL Apr 21 12:04:33.377468 containerd[1737]: time="2026-04-21T12:04:33.377415399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:33.381041 containerd[1737]: time="2026-04-21T12:04:33.380976758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 21 12:04:33.385033 containerd[1737]: time="2026-04-21T12:04:33.384979024Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:33.390197 containerd[1737]: time="2026-04-21T12:04:33.390139309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:33.391095 containerd[1737]: time="2026-04-21T12:04:33.390957422Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.316217562s" Apr 21 12:04:33.391095 containerd[1737]: time="2026-04-21T12:04:33.390995623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 12:04:33.393888 containerd[1737]: time="2026-04-21T12:04:33.392983855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 
12:04:33.398964 containerd[1737]: time="2026-04-21T12:04:33.398936353Z" level=info msg="CreateContainer within sandbox \"591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 12:04:33.454162 containerd[1737]: time="2026-04-21T12:04:33.453993459Z" level=info msg="CreateContainer within sandbox \"591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ff8df16547b9495037917f51b101c31dd990d14eaabb0bbc5f4b6f18ec297915\"" Apr 21 12:04:33.454811 containerd[1737]: time="2026-04-21T12:04:33.454772072Z" level=info msg="StartContainer for \"ff8df16547b9495037917f51b101c31dd990d14eaabb0bbc5f4b6f18ec297915\"" Apr 21 12:04:33.501990 systemd[1]: Started cri-containerd-ff8df16547b9495037917f51b101c31dd990d14eaabb0bbc5f4b6f18ec297915.scope - libcontainer container ff8df16547b9495037917f51b101c31dd990d14eaabb0bbc5f4b6f18ec297915. 
Apr 21 12:04:33.547558 containerd[1737]: time="2026-04-21T12:04:33.547509498Z" level=info msg="StartContainer for \"ff8df16547b9495037917f51b101c31dd990d14eaabb0bbc5f4b6f18ec297915\" returns successfully" Apr 21 12:04:34.626443 kubelet[3294]: I0421 12:04:34.626343 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7c79d9c885-tjxlz" podStartSLOduration=57.667466568 podStartE2EDuration="1m5.626325147s" podCreationTimestamp="2026-04-21 12:03:29 +0000 UTC" firstStartedPulling="2026-04-21 12:04:25.433372864 +0000 UTC m=+81.558305350" lastFinishedPulling="2026-04-21 12:04:33.392231443 +0000 UTC m=+89.517163929" observedRunningTime="2026-04-21 12:04:34.373975095 +0000 UTC m=+90.498907581" watchObservedRunningTime="2026-04-21 12:04:34.626325147 +0000 UTC m=+90.751257633" Apr 21 12:04:35.232393 containerd[1737]: time="2026-04-21T12:04:35.232341218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:35.242156 containerd[1737]: time="2026-04-21T12:04:35.241234264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 21 12:04:35.242156 containerd[1737]: time="2026-04-21T12:04:35.242020277Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:35.248918 containerd[1737]: time="2026-04-21T12:04:35.248859590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:35.250027 containerd[1737]: time="2026-04-21T12:04:35.249534401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with 
image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.856507045s" Apr 21 12:04:35.250027 containerd[1737]: time="2026-04-21T12:04:35.249577702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 21 12:04:35.251553 containerd[1737]: time="2026-04-21T12:04:35.251292930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 12:04:35.257994 containerd[1737]: time="2026-04-21T12:04:35.257964540Z" level=info msg="CreateContainer within sandbox \"4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 12:04:35.297905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1223426248.mount: Deactivated successfully. Apr 21 12:04:35.304748 containerd[1737]: time="2026-04-21T12:04:35.304677308Z" level=info msg="CreateContainer within sandbox \"4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cb5dd52a5954062ce34fd8a59a8066d9b2a81c59059158c95301239b5cc946ec\"" Apr 21 12:04:35.305682 containerd[1737]: time="2026-04-21T12:04:35.305560923Z" level=info msg="StartContainer for \"cb5dd52a5954062ce34fd8a59a8066d9b2a81c59059158c95301239b5cc946ec\"" Apr 21 12:04:35.356005 systemd[1]: Started cri-containerd-cb5dd52a5954062ce34fd8a59a8066d9b2a81c59059158c95301239b5cc946ec.scope - libcontainer container cb5dd52a5954062ce34fd8a59a8066d9b2a81c59059158c95301239b5cc946ec. 
Apr 21 12:04:35.392424 containerd[1737]: time="2026-04-21T12:04:35.392376851Z" level=info msg="StartContainer for \"cb5dd52a5954062ce34fd8a59a8066d9b2a81c59059158c95301239b5cc946ec\" returns successfully" Apr 21 12:04:35.617917 containerd[1737]: time="2026-04-21T12:04:35.617767359Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:35.620911 containerd[1737]: time="2026-04-21T12:04:35.620430103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 21 12:04:35.622554 containerd[1737]: time="2026-04-21T12:04:35.622515538Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 371.191607ms" Apr 21 12:04:35.622554 containerd[1737]: time="2026-04-21T12:04:35.622555838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 12:04:35.623930 containerd[1737]: time="2026-04-21T12:04:35.623529554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 12:04:35.635436 containerd[1737]: time="2026-04-21T12:04:35.635402650Z" level=info msg="CreateContainer within sandbox \"1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 12:04:35.680150 containerd[1737]: time="2026-04-21T12:04:35.680101519Z" level=info msg="CreateContainer within sandbox \"1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"d9b619077f59ecfbf8ea72640e7bde428737310c5e8a705534fe24492dec3db8\"" Apr 21 12:04:35.681128 containerd[1737]: time="2026-04-21T12:04:35.681073332Z" level=info msg="StartContainer for \"d9b619077f59ecfbf8ea72640e7bde428737310c5e8a705534fe24492dec3db8\"" Apr 21 12:04:35.709006 systemd[1]: Started cri-containerd-d9b619077f59ecfbf8ea72640e7bde428737310c5e8a705534fe24492dec3db8.scope - libcontainer container d9b619077f59ecfbf8ea72640e7bde428737310c5e8a705534fe24492dec3db8. Apr 21 12:04:35.761089 containerd[1737]: time="2026-04-21T12:04:35.760958410Z" level=info msg="StartContainer for \"d9b619077f59ecfbf8ea72640e7bde428737310c5e8a705534fe24492dec3db8\" returns successfully" Apr 21 12:04:36.109681 kubelet[3294]: I0421 12:04:36.109644 3294 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 12:04:36.109681 kubelet[3294]: I0421 12:04:36.109695 3294 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 12:04:36.394856 kubelet[3294]: I0421 12:04:36.393750 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mqkhk" podStartSLOduration=55.660355313 podStartE2EDuration="1m6.393730948s" podCreationTimestamp="2026-04-21 12:03:30 +0000 UTC" firstStartedPulling="2026-04-21 12:04:24.517187583 +0000 UTC m=+80.642120069" lastFinishedPulling="2026-04-21 12:04:35.250563218 +0000 UTC m=+91.375495704" observedRunningTime="2026-04-21 12:04:36.392532432 +0000 UTC m=+92.517464918" watchObservedRunningTime="2026-04-21 12:04:36.393730948 +0000 UTC m=+92.518663534" Apr 21 12:04:36.827690 kubelet[3294]: I0421 12:04:36.827524 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7c79d9c885-h4f6q" podStartSLOduration=58.98875596 
podStartE2EDuration="1m7.827503001s" podCreationTimestamp="2026-04-21 12:03:29 +0000 UTC" firstStartedPulling="2026-04-21 12:04:26.78460951 +0000 UTC m=+82.909541996" lastFinishedPulling="2026-04-21 12:04:35.623356551 +0000 UTC m=+91.748289037" observedRunningTime="2026-04-21 12:04:36.414415527 +0000 UTC m=+92.539348013" watchObservedRunningTime="2026-04-21 12:04:36.827503001 +0000 UTC m=+92.952435487" Apr 21 12:04:38.392819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount568464793.mount: Deactivated successfully. Apr 21 12:04:38.954330 containerd[1737]: time="2026-04-21T12:04:38.954281898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:38.957637 containerd[1737]: time="2026-04-21T12:04:38.957572342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 21 12:04:38.960945 containerd[1737]: time="2026-04-21T12:04:38.960871187Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:38.966184 containerd[1737]: time="2026-04-21T12:04:38.966131158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 12:04:38.967042 containerd[1737]: time="2026-04-21T12:04:38.966912068Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.343348013s" Apr 21 12:04:38.967042 containerd[1737]: 
time="2026-04-21T12:04:38.966952269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 21 12:04:38.975892 containerd[1737]: time="2026-04-21T12:04:38.975856489Z" level=info msg="CreateContainer within sandbox \"43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 12:04:39.019125 containerd[1737]: time="2026-04-21T12:04:39.019078472Z" level=info msg="CreateContainer within sandbox \"43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"04e8035884dba17c8007cffb4ffcd20ea8935bee26ceadffb80734484f235aa6\"" Apr 21 12:04:39.019981 containerd[1737]: time="2026-04-21T12:04:39.019948184Z" level=info msg="StartContainer for \"04e8035884dba17c8007cffb4ffcd20ea8935bee26ceadffb80734484f235aa6\"" Apr 21 12:04:39.059994 systemd[1]: Started cri-containerd-04e8035884dba17c8007cffb4ffcd20ea8935bee26ceadffb80734484f235aa6.scope - libcontainer container 04e8035884dba17c8007cffb4ffcd20ea8935bee26ceadffb80734484f235aa6. 
Apr 21 12:04:39.108425 containerd[1737]: time="2026-04-21T12:04:39.108381577Z" level=info msg="StartContainer for \"04e8035884dba17c8007cffb4ffcd20ea8935bee26ceadffb80734484f235aa6\" returns successfully" Apr 21 12:04:45.363792 kubelet[3294]: I0421 12:04:45.363713 3294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-bzdfb" podStartSLOduration=65.293396295 podStartE2EDuration="1m15.363693583s" podCreationTimestamp="2026-04-21 12:03:30 +0000 UTC" firstStartedPulling="2026-04-21 12:04:28.897657094 +0000 UTC m=+85.022589580" lastFinishedPulling="2026-04-21 12:04:38.967954382 +0000 UTC m=+95.092886868" observedRunningTime="2026-04-21 12:04:39.411910472 +0000 UTC m=+95.536843058" watchObservedRunningTime="2026-04-21 12:04:45.363693583 +0000 UTC m=+101.488626069" Apr 21 12:05:03.995171 containerd[1737]: time="2026-04-21T12:05:03.995121999Z" level=info msg="StopPodSandbox for \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\"" Apr 21 12:05:04.084045 containerd[1737]: 2026-04-21 12:05:04.036 [WARNING][6324] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0", GenerateName:"calico-apiserver-7c79d9c885-", Namespace:"calico-system", SelfLink:"", UID:"4c031d9c-a941-4ee9-ab73-567a7398ad1c", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c79d9c885", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae", Pod:"calico-apiserver-7c79d9c885-h4f6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali35bf80a7a8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:05:04.084045 containerd[1737]: 2026-04-21 12:05:04.036 [INFO][6324] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Apr 21 12:05:04.084045 containerd[1737]: 2026-04-21 12:05:04.036 [INFO][6324] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" iface="eth0" netns="" Apr 21 12:05:04.084045 containerd[1737]: 2026-04-21 12:05:04.036 [INFO][6324] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Apr 21 12:05:04.084045 containerd[1737]: 2026-04-21 12:05:04.036 [INFO][6324] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Apr 21 12:05:04.084045 containerd[1737]: 2026-04-21 12:05:04.068 [INFO][6332] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" HandleID="k8s-pod-network.6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" Apr 21 12:05:04.084045 containerd[1737]: 2026-04-21 12:05:04.068 [INFO][6332] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:05:04.084045 containerd[1737]: 2026-04-21 12:05:04.068 [INFO][6332] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:05:04.084045 containerd[1737]: 2026-04-21 12:05:04.076 [WARNING][6332] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" HandleID="k8s-pod-network.6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" Apr 21 12:05:04.084045 containerd[1737]: 2026-04-21 12:05:04.076 [INFO][6332] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" HandleID="k8s-pod-network.6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" Apr 21 12:05:04.084045 containerd[1737]: 2026-04-21 12:05:04.078 [INFO][6332] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:05:04.084045 containerd[1737]: 2026-04-21 12:05:04.081 [INFO][6324] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Apr 21 12:05:04.084045 containerd[1737]: time="2026-04-21T12:05:04.083685914Z" level=info msg="TearDown network for sandbox \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\" successfully" Apr 21 12:05:04.084045 containerd[1737]: time="2026-04-21T12:05:04.083712015Z" level=info msg="StopPodSandbox for \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\" returns successfully" Apr 21 12:05:04.087124 containerd[1737]: time="2026-04-21T12:05:04.084414426Z" level=info msg="RemovePodSandbox for \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\"" Apr 21 12:05:04.087124 containerd[1737]: time="2026-04-21T12:05:04.084446527Z" level=info msg="Forcibly stopping sandbox \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\"" Apr 21 12:05:04.163526 containerd[1737]: 2026-04-21 12:05:04.127 [WARNING][6347] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0", GenerateName:"calico-apiserver-7c79d9c885-", Namespace:"calico-system", SelfLink:"", UID:"4c031d9c-a941-4ee9-ab73-567a7398ad1c", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c79d9c885", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"1607666dda3a89f7203d726fc49a1955c64fada9ac5ed996272d83c928adf0ae", Pod:"calico-apiserver-7c79d9c885-h4f6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali35bf80a7a8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:05:04.163526 containerd[1737]: 2026-04-21 12:05:04.127 [INFO][6347] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Apr 21 12:05:04.163526 containerd[1737]: 2026-04-21 12:05:04.127 [INFO][6347] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" iface="eth0" netns="" Apr 21 12:05:04.163526 containerd[1737]: 2026-04-21 12:05:04.127 [INFO][6347] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Apr 21 12:05:04.163526 containerd[1737]: 2026-04-21 12:05:04.127 [INFO][6347] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Apr 21 12:05:04.163526 containerd[1737]: 2026-04-21 12:05:04.150 [INFO][6354] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" HandleID="k8s-pod-network.6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" Apr 21 12:05:04.163526 containerd[1737]: 2026-04-21 12:05:04.150 [INFO][6354] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:05:04.163526 containerd[1737]: 2026-04-21 12:05:04.150 [INFO][6354] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:05:04.163526 containerd[1737]: 2026-04-21 12:05:04.158 [WARNING][6354] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" HandleID="k8s-pod-network.6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" Apr 21 12:05:04.163526 containerd[1737]: 2026-04-21 12:05:04.158 [INFO][6354] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" HandleID="k8s-pod-network.6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--h4f6q-eth0" Apr 21 12:05:04.163526 containerd[1737]: 2026-04-21 12:05:04.159 [INFO][6354] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:05:04.163526 containerd[1737]: 2026-04-21 12:05:04.161 [INFO][6347] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a" Apr 21 12:05:04.164294 containerd[1737]: time="2026-04-21T12:05:04.163576345Z" level=info msg="TearDown network for sandbox \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\" successfully" Apr 21 12:05:05.215413 containerd[1737]: time="2026-04-21T12:05:05.215356660Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 12:05:05.216087 containerd[1737]: time="2026-04-21T12:05:05.215500562Z" level=info msg="RemovePodSandbox \"6dadd140aad5d884a18e7603fd40ff93257b336ef59a22caff625b0f5fbfb35a\" returns successfully" Apr 21 12:05:05.216243 containerd[1737]: time="2026-04-21T12:05:05.216216274Z" level=info msg="StopPodSandbox for \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\"" Apr 21 12:05:05.285007 containerd[1737]: 2026-04-21 12:05:05.251 [WARNING][6369] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0", GenerateName:"calico-kube-controllers-76d9fbb898-", Namespace:"calico-system", SelfLink:"", UID:"9752a3e6-7576-4285-9a83-6bc365a16d48", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76d9fbb898", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4", Pod:"calico-kube-controllers-76d9fbb898-hf2cc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.3/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali94b672693f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:05:05.285007 containerd[1737]: 2026-04-21 12:05:05.251 [INFO][6369] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Apr 21 12:05:05.285007 containerd[1737]: 2026-04-21 12:05:05.251 [INFO][6369] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" iface="eth0" netns="" Apr 21 12:05:05.285007 containerd[1737]: 2026-04-21 12:05:05.251 [INFO][6369] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Apr 21 12:05:05.285007 containerd[1737]: 2026-04-21 12:05:05.251 [INFO][6369] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Apr 21 12:05:05.285007 containerd[1737]: 2026-04-21 12:05:05.273 [INFO][6376] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" HandleID="k8s-pod-network.5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" Apr 21 12:05:05.285007 containerd[1737]: 2026-04-21 12:05:05.273 [INFO][6376] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:05:05.285007 containerd[1737]: 2026-04-21 12:05:05.273 [INFO][6376] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:05:05.285007 containerd[1737]: 2026-04-21 12:05:05.281 [WARNING][6376] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" HandleID="k8s-pod-network.5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" Apr 21 12:05:05.285007 containerd[1737]: 2026-04-21 12:05:05.281 [INFO][6376] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" HandleID="k8s-pod-network.5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" Apr 21 12:05:05.285007 containerd[1737]: 2026-04-21 12:05:05.282 [INFO][6376] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:05:05.285007 containerd[1737]: 2026-04-21 12:05:05.283 [INFO][6369] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Apr 21 12:05:05.285859 containerd[1737]: time="2026-04-21T12:05:05.285043621Z" level=info msg="TearDown network for sandbox \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\" successfully" Apr 21 12:05:05.285859 containerd[1737]: time="2026-04-21T12:05:05.285073221Z" level=info msg="StopPodSandbox for \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\" returns successfully" Apr 21 12:05:05.285859 containerd[1737]: time="2026-04-21T12:05:05.285584730Z" level=info msg="RemovePodSandbox for \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\"" Apr 21 12:05:05.285859 containerd[1737]: time="2026-04-21T12:05:05.285620230Z" level=info msg="Forcibly stopping sandbox \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\"" Apr 21 12:05:05.351097 containerd[1737]: 2026-04-21 12:05:05.318 [WARNING][6390] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0", GenerateName:"calico-kube-controllers-76d9fbb898-", Namespace:"calico-system", SelfLink:"", UID:"9752a3e6-7576-4285-9a83-6bc365a16d48", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76d9fbb898", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"7b4400a4aa8ad7e0d4a3722ba8b13fe2eec016f31cf3f8c02d6a92a8277712d4", Pod:"calico-kube-controllers-76d9fbb898-hf2cc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali94b672693f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:05:05.351097 containerd[1737]: 2026-04-21 12:05:05.318 [INFO][6390] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Apr 21 12:05:05.351097 containerd[1737]: 2026-04-21 12:05:05.318 [INFO][6390] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" iface="eth0" netns="" Apr 21 12:05:05.351097 containerd[1737]: 2026-04-21 12:05:05.318 [INFO][6390] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Apr 21 12:05:05.351097 containerd[1737]: 2026-04-21 12:05:05.318 [INFO][6390] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Apr 21 12:05:05.351097 containerd[1737]: 2026-04-21 12:05:05.341 [INFO][6397] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" HandleID="k8s-pod-network.5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" Apr 21 12:05:05.351097 containerd[1737]: 2026-04-21 12:05:05.341 [INFO][6397] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:05:05.351097 containerd[1737]: 2026-04-21 12:05:05.341 [INFO][6397] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:05:05.351097 containerd[1737]: 2026-04-21 12:05:05.346 [WARNING][6397] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" HandleID="k8s-pod-network.5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" Apr 21 12:05:05.351097 containerd[1737]: 2026-04-21 12:05:05.346 [INFO][6397] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" HandleID="k8s-pod-network.5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--kube--controllers--76d9fbb898--hf2cc-eth0" Apr 21 12:05:05.351097 containerd[1737]: 2026-04-21 12:05:05.348 [INFO][6397] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:05:05.351097 containerd[1737]: 2026-04-21 12:05:05.349 [INFO][6390] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc" Apr 21 12:05:05.351097 containerd[1737]: time="2026-04-21T12:05:05.350344808Z" level=info msg="TearDown network for sandbox \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\" successfully" Apr 21 12:05:05.359786 containerd[1737]: time="2026-04-21T12:05:05.359750065Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 12:05:05.359938 containerd[1737]: time="2026-04-21T12:05:05.359845666Z" level=info msg="RemovePodSandbox \"5599ff610897f105df592d3d209e1cc1c7665488bee0d79e3e3c4d0d6b4cdddc\" returns successfully" Apr 21 12:05:05.360401 containerd[1737]: time="2026-04-21T12:05:05.360369575Z" level=info msg="StopPodSandbox for \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\"" Apr 21 12:05:05.427093 containerd[1737]: 2026-04-21 12:05:05.393 [WARNING][6411] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4894f806-fc13-40ed-9637-207c1d2b6fe3", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89", Pod:"coredns-66bc5c9577-4bsq7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic91903a191e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:05:05.427093 containerd[1737]: 2026-04-21 12:05:05.394 [INFO][6411] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Apr 21 12:05:05.427093 containerd[1737]: 2026-04-21 12:05:05.394 [INFO][6411] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" iface="eth0" netns="" Apr 21 12:05:05.427093 containerd[1737]: 2026-04-21 12:05:05.394 [INFO][6411] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Apr 21 12:05:05.427093 containerd[1737]: 2026-04-21 12:05:05.394 [INFO][6411] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Apr 21 12:05:05.427093 containerd[1737]: 2026-04-21 12:05:05.417 [INFO][6418] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" HandleID="k8s-pod-network.86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" Apr 21 12:05:05.427093 containerd[1737]: 2026-04-21 12:05:05.417 [INFO][6418] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:05:05.427093 containerd[1737]: 2026-04-21 12:05:05.417 [INFO][6418] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:05:05.427093 containerd[1737]: 2026-04-21 12:05:05.423 [WARNING][6418] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" HandleID="k8s-pod-network.86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" Apr 21 12:05:05.427093 containerd[1737]: 2026-04-21 12:05:05.423 [INFO][6418] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" HandleID="k8s-pod-network.86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" Apr 21 12:05:05.427093 containerd[1737]: 2026-04-21 12:05:05.424 [INFO][6418] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:05:05.427093 containerd[1737]: 2026-04-21 12:05:05.425 [INFO][6411] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Apr 21 12:05:05.427766 containerd[1737]: time="2026-04-21T12:05:05.427143287Z" level=info msg="TearDown network for sandbox \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\" successfully" Apr 21 12:05:05.427766 containerd[1737]: time="2026-04-21T12:05:05.427172287Z" level=info msg="StopPodSandbox for \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\" returns successfully" Apr 21 12:05:05.427918 containerd[1737]: time="2026-04-21T12:05:05.427788198Z" level=info msg="RemovePodSandbox for \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\"" Apr 21 12:05:05.427918 containerd[1737]: time="2026-04-21T12:05:05.427821998Z" level=info msg="Forcibly stopping sandbox \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\"" Apr 21 12:05:05.502555 containerd[1737]: 2026-04-21 12:05:05.466 [WARNING][6432] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4894f806-fc13-40ed-9637-207c1d2b6fe3", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"60022999b950d02daba4fb152668abda38ff69b695fcf88e94b643938db82c89", Pod:"coredns-66bc5c9577-4bsq7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic91903a191e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:05:05.502555 containerd[1737]: 2026-04-21 12:05:05.466 [INFO][6432] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Apr 21 12:05:05.502555 containerd[1737]: 2026-04-21 12:05:05.466 [INFO][6432] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" iface="eth0" netns="" Apr 21 12:05:05.502555 containerd[1737]: 2026-04-21 12:05:05.466 [INFO][6432] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Apr 21 12:05:05.502555 containerd[1737]: 2026-04-21 12:05:05.466 [INFO][6432] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Apr 21 12:05:05.502555 containerd[1737]: 2026-04-21 12:05:05.490 [INFO][6439] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" HandleID="k8s-pod-network.86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" Apr 21 12:05:05.502555 containerd[1737]: 2026-04-21 12:05:05.490 [INFO][6439] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:05:05.502555 containerd[1737]: 2026-04-21 12:05:05.491 [INFO][6439] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:05:05.502555 containerd[1737]: 2026-04-21 12:05:05.498 [WARNING][6439] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" HandleID="k8s-pod-network.86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" Apr 21 12:05:05.502555 containerd[1737]: 2026-04-21 12:05:05.498 [INFO][6439] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" HandleID="k8s-pod-network.86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--4bsq7-eth0" Apr 21 12:05:05.502555 containerd[1737]: 2026-04-21 12:05:05.499 [INFO][6439] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:05:05.502555 containerd[1737]: 2026-04-21 12:05:05.501 [INFO][6432] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3" Apr 21 12:05:05.503343 containerd[1737]: time="2026-04-21T12:05:05.502602144Z" level=info msg="TearDown network for sandbox \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\" successfully" Apr 21 12:05:05.512616 containerd[1737]: time="2026-04-21T12:05:05.512566809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 12:05:05.512767 containerd[1737]: time="2026-04-21T12:05:05.512675111Z" level=info msg="RemovePodSandbox \"86d3d84c5b1c84a7dc1b9a1e6bb0995e2f2194d57b25e8f2f633b7fcfe8667f3\" returns successfully" Apr 21 12:05:05.513302 containerd[1737]: time="2026-04-21T12:05:05.513272021Z" level=info msg="StopPodSandbox for \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\"" Apr 21 12:05:05.580046 containerd[1737]: 2026-04-21 12:05:05.545 [WARNING][6453] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20e52a3b-9a02-45ad-96fe-fb214b6cbb05", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c", Pod:"csi-node-driver-mqkhk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7c1f4a4707", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:05:05.580046 containerd[1737]: 2026-04-21 12:05:05.545 [INFO][6453] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Apr 21 12:05:05.580046 containerd[1737]: 2026-04-21 12:05:05.545 [INFO][6453] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" iface="eth0" netns="" Apr 21 12:05:05.580046 containerd[1737]: 2026-04-21 12:05:05.545 [INFO][6453] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Apr 21 12:05:05.580046 containerd[1737]: 2026-04-21 12:05:05.545 [INFO][6453] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Apr 21 12:05:05.580046 containerd[1737]: 2026-04-21 12:05:05.570 [INFO][6461] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" HandleID="k8s-pod-network.0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Workload="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" Apr 21 12:05:05.580046 containerd[1737]: 2026-04-21 12:05:05.570 [INFO][6461] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:05:05.580046 containerd[1737]: 2026-04-21 12:05:05.570 [INFO][6461] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:05:05.580046 containerd[1737]: 2026-04-21 12:05:05.576 [WARNING][6461] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" HandleID="k8s-pod-network.0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Workload="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" Apr 21 12:05:05.580046 containerd[1737]: 2026-04-21 12:05:05.576 [INFO][6461] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" HandleID="k8s-pod-network.0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Workload="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" Apr 21 12:05:05.580046 containerd[1737]: 2026-04-21 12:05:05.577 [INFO][6461] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:05:05.580046 containerd[1737]: 2026-04-21 12:05:05.578 [INFO][6453] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Apr 21 12:05:05.580865 containerd[1737]: time="2026-04-21T12:05:05.580020333Z" level=info msg="TearDown network for sandbox \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\" successfully" Apr 21 12:05:05.580865 containerd[1737]: time="2026-04-21T12:05:05.580531541Z" level=info msg="StopPodSandbox for \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\" returns successfully" Apr 21 12:05:05.581141 containerd[1737]: time="2026-04-21T12:05:05.581113051Z" level=info msg="RemovePodSandbox for \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\"" Apr 21 12:05:05.581212 containerd[1737]: time="2026-04-21T12:05:05.581151852Z" level=info msg="Forcibly stopping sandbox \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\"" Apr 21 12:05:05.649749 containerd[1737]: 2026-04-21 12:05:05.616 [WARNING][6476] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"20e52a3b-9a02-45ad-96fe-fb214b6cbb05", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"4c7d9121c20c94fc11806f3958302fb02d1a11105fca96dbb5e9cec9ef6f773c", Pod:"csi-node-driver-mqkhk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7c1f4a4707", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:05:05.649749 containerd[1737]: 2026-04-21 12:05:05.616 [INFO][6476] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Apr 21 12:05:05.649749 containerd[1737]: 2026-04-21 12:05:05.616 [INFO][6476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" iface="eth0" netns="" Apr 21 12:05:05.649749 containerd[1737]: 2026-04-21 12:05:05.616 [INFO][6476] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Apr 21 12:05:05.649749 containerd[1737]: 2026-04-21 12:05:05.616 [INFO][6476] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Apr 21 12:05:05.649749 containerd[1737]: 2026-04-21 12:05:05.638 [INFO][6483] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" HandleID="k8s-pod-network.0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Workload="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" Apr 21 12:05:05.649749 containerd[1737]: 2026-04-21 12:05:05.638 [INFO][6483] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:05:05.649749 containerd[1737]: 2026-04-21 12:05:05.638 [INFO][6483] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:05:05.649749 containerd[1737]: 2026-04-21 12:05:05.645 [WARNING][6483] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" HandleID="k8s-pod-network.0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Workload="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" Apr 21 12:05:05.649749 containerd[1737]: 2026-04-21 12:05:05.645 [INFO][6483] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" HandleID="k8s-pod-network.0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Workload="ci--4081.3.7--a--a89817d5a7-k8s-csi--node--driver--mqkhk-eth0" Apr 21 12:05:05.649749 containerd[1737]: 2026-04-21 12:05:05.647 [INFO][6483] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:05:05.649749 containerd[1737]: 2026-04-21 12:05:05.648 [INFO][6476] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e" Apr 21 12:05:05.649749 containerd[1737]: time="2026-04-21T12:05:05.649724494Z" level=info msg="TearDown network for sandbox \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\" successfully" Apr 21 12:05:05.660020 containerd[1737]: time="2026-04-21T12:05:05.659965564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 12:05:05.660195 containerd[1737]: time="2026-04-21T12:05:05.660051966Z" level=info msg="RemovePodSandbox \"0b599b7220d73229fe8c37738673783b7d08808c72d152fd2234ae70f79ce73e\" returns successfully" Apr 21 12:05:05.660606 containerd[1737]: time="2026-04-21T12:05:05.660573274Z" level=info msg="StopPodSandbox for \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\"" Apr 21 12:05:05.727937 containerd[1737]: 2026-04-21 12:05:05.694 [WARNING][6497] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0", GenerateName:"calico-apiserver-7c79d9c885-", Namespace:"calico-system", SelfLink:"", UID:"22bf69eb-1b19-496c-9e8c-76911e03643c", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c79d9c885", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7", Pod:"calico-apiserver-7c79d9c885-tjxlz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali22de7d230cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:05:05.727937 containerd[1737]: 2026-04-21 12:05:05.694 [INFO][6497] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Apr 21 12:05:05.727937 containerd[1737]: 2026-04-21 12:05:05.694 [INFO][6497] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" iface="eth0" netns="" Apr 21 12:05:05.727937 containerd[1737]: 2026-04-21 12:05:05.694 [INFO][6497] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Apr 21 12:05:05.727937 containerd[1737]: 2026-04-21 12:05:05.694 [INFO][6497] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Apr 21 12:05:05.727937 containerd[1737]: 2026-04-21 12:05:05.716 [INFO][6504] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" HandleID="k8s-pod-network.1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" Apr 21 12:05:05.727937 containerd[1737]: 2026-04-21 12:05:05.716 [INFO][6504] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:05:05.727937 containerd[1737]: 2026-04-21 12:05:05.716 [INFO][6504] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:05:05.727937 containerd[1737]: 2026-04-21 12:05:05.724 [WARNING][6504] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" HandleID="k8s-pod-network.1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" Apr 21 12:05:05.727937 containerd[1737]: 2026-04-21 12:05:05.724 [INFO][6504] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" HandleID="k8s-pod-network.1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" Apr 21 12:05:05.727937 containerd[1737]: 2026-04-21 12:05:05.725 [INFO][6504] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:05:05.727937 containerd[1737]: 2026-04-21 12:05:05.726 [INFO][6497] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Apr 21 12:05:05.728787 containerd[1737]: time="2026-04-21T12:05:05.727981497Z" level=info msg="TearDown network for sandbox \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\" successfully" Apr 21 12:05:05.728787 containerd[1737]: time="2026-04-21T12:05:05.728013197Z" level=info msg="StopPodSandbox for \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\" returns successfully" Apr 21 12:05:05.728938 containerd[1737]: time="2026-04-21T12:05:05.728745310Z" level=info msg="RemovePodSandbox for \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\"" Apr 21 12:05:05.728938 containerd[1737]: time="2026-04-21T12:05:05.728881312Z" level=info msg="Forcibly stopping sandbox \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\"" Apr 21 12:05:05.799421 containerd[1737]: 2026-04-21 12:05:05.766 [WARNING][6518] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0", GenerateName:"calico-apiserver-7c79d9c885-", Namespace:"calico-system", SelfLink:"", UID:"22bf69eb-1b19-496c-9e8c-76911e03643c", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c79d9c885", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"591c734b4fcffc684522b7989740fe2aca92c21b65e7dbb680055d3b74049ce7", Pod:"calico-apiserver-7c79d9c885-tjxlz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali22de7d230cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:05:05.799421 containerd[1737]: 2026-04-21 12:05:05.766 [INFO][6518] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Apr 21 12:05:05.799421 containerd[1737]: 2026-04-21 12:05:05.766 [INFO][6518] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" iface="eth0" netns="" Apr 21 12:05:05.799421 containerd[1737]: 2026-04-21 12:05:05.766 [INFO][6518] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Apr 21 12:05:05.799421 containerd[1737]: 2026-04-21 12:05:05.766 [INFO][6518] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Apr 21 12:05:05.799421 containerd[1737]: 2026-04-21 12:05:05.788 [INFO][6526] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" HandleID="k8s-pod-network.1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" Apr 21 12:05:05.799421 containerd[1737]: 2026-04-21 12:05:05.789 [INFO][6526] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:05:05.799421 containerd[1737]: 2026-04-21 12:05:05.789 [INFO][6526] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:05:05.799421 containerd[1737]: 2026-04-21 12:05:05.795 [WARNING][6526] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" HandleID="k8s-pod-network.1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" Apr 21 12:05:05.799421 containerd[1737]: 2026-04-21 12:05:05.795 [INFO][6526] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" HandleID="k8s-pod-network.1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Workload="ci--4081.3.7--a--a89817d5a7-k8s-calico--apiserver--7c79d9c885--tjxlz-eth0" Apr 21 12:05:05.799421 containerd[1737]: 2026-04-21 12:05:05.796 [INFO][6526] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:05:05.799421 containerd[1737]: 2026-04-21 12:05:05.798 [INFO][6518] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2" Apr 21 12:05:05.799421 containerd[1737]: time="2026-04-21T12:05:05.799389286Z" level=info msg="TearDown network for sandbox \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\" successfully" Apr 21 12:05:05.811112 containerd[1737]: time="2026-04-21T12:05:05.811064780Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 12:05:05.811254 containerd[1737]: time="2026-04-21T12:05:05.811144482Z" level=info msg="RemovePodSandbox \"1ece7f60ead71a4c95269c017ece75fb862eee68f8053646c76e37f2ff78fcf2\" returns successfully" Apr 21 12:05:05.811691 containerd[1737]: time="2026-04-21T12:05:05.811663290Z" level=info msg="StopPodSandbox for \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\"" Apr 21 12:05:05.877547 containerd[1737]: 2026-04-21 12:05:05.845 [WARNING][6540] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-whisker--68f656b4d6--v9c6d-eth0" Apr 21 12:05:05.877547 containerd[1737]: 2026-04-21 12:05:05.845 [INFO][6540] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Apr 21 12:05:05.877547 containerd[1737]: 2026-04-21 12:05:05.845 [INFO][6540] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" iface="eth0" netns="" Apr 21 12:05:05.877547 containerd[1737]: 2026-04-21 12:05:05.845 [INFO][6540] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Apr 21 12:05:05.877547 containerd[1737]: 2026-04-21 12:05:05.845 [INFO][6540] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Apr 21 12:05:05.877547 containerd[1737]: 2026-04-21 12:05:05.866 [INFO][6547] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" HandleID="k8s-pod-network.8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Workload="ci--4081.3.7--a--a89817d5a7-k8s-whisker--68f656b4d6--v9c6d-eth0" Apr 21 12:05:05.877547 containerd[1737]: 2026-04-21 12:05:05.866 [INFO][6547] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:05:05.877547 containerd[1737]: 2026-04-21 12:05:05.866 [INFO][6547] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:05:05.877547 containerd[1737]: 2026-04-21 12:05:05.872 [WARNING][6547] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" HandleID="k8s-pod-network.8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Workload="ci--4081.3.7--a--a89817d5a7-k8s-whisker--68f656b4d6--v9c6d-eth0" Apr 21 12:05:05.877547 containerd[1737]: 2026-04-21 12:05:05.872 [INFO][6547] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" HandleID="k8s-pod-network.8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Workload="ci--4081.3.7--a--a89817d5a7-k8s-whisker--68f656b4d6--v9c6d-eth0" Apr 21 12:05:05.877547 containerd[1737]: 2026-04-21 12:05:05.875 [INFO][6547] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:05:05.877547 containerd[1737]: 2026-04-21 12:05:05.876 [INFO][6540] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Apr 21 12:05:05.878911 containerd[1737]: time="2026-04-21T12:05:05.877591888Z" level=info msg="TearDown network for sandbox \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\" successfully" Apr 21 12:05:05.878911 containerd[1737]: time="2026-04-21T12:05:05.877626089Z" level=info msg="StopPodSandbox for \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\" returns successfully" Apr 21 12:05:05.878911 containerd[1737]: time="2026-04-21T12:05:05.878200498Z" level=info msg="RemovePodSandbox for \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\"" Apr 21 12:05:05.878911 containerd[1737]: time="2026-04-21T12:05:05.878228899Z" level=info msg="Forcibly stopping sandbox \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\"" Apr 21 12:05:05.940625 containerd[1737]: 2026-04-21 12:05:05.909 [WARNING][6561] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" WorkloadEndpoint="ci--4081.3.7--a--a89817d5a7-k8s-whisker--68f656b4d6--v9c6d-eth0" Apr 21 12:05:05.940625 containerd[1737]: 2026-04-21 12:05:05.909 [INFO][6561] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Apr 21 12:05:05.940625 containerd[1737]: 2026-04-21 12:05:05.909 [INFO][6561] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" iface="eth0" netns="" Apr 21 12:05:05.940625 containerd[1737]: 2026-04-21 12:05:05.909 [INFO][6561] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Apr 21 12:05:05.940625 containerd[1737]: 2026-04-21 12:05:05.909 [INFO][6561] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Apr 21 12:05:05.940625 containerd[1737]: 2026-04-21 12:05:05.931 [INFO][6568] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" HandleID="k8s-pod-network.8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Workload="ci--4081.3.7--a--a89817d5a7-k8s-whisker--68f656b4d6--v9c6d-eth0" Apr 21 12:05:05.940625 containerd[1737]: 2026-04-21 12:05:05.931 [INFO][6568] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:05:05.940625 containerd[1737]: 2026-04-21 12:05:05.931 [INFO][6568] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:05:05.940625 containerd[1737]: 2026-04-21 12:05:05.936 [WARNING][6568] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" HandleID="k8s-pod-network.8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Workload="ci--4081.3.7--a--a89817d5a7-k8s-whisker--68f656b4d6--v9c6d-eth0" Apr 21 12:05:05.940625 containerd[1737]: 2026-04-21 12:05:05.936 [INFO][6568] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" HandleID="k8s-pod-network.8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Workload="ci--4081.3.7--a--a89817d5a7-k8s-whisker--68f656b4d6--v9c6d-eth0" Apr 21 12:05:05.940625 containerd[1737]: 2026-04-21 12:05:05.938 [INFO][6568] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:05:05.940625 containerd[1737]: 2026-04-21 12:05:05.939 [INFO][6561] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25" Apr 21 12:05:05.941241 containerd[1737]: time="2026-04-21T12:05:05.940685939Z" level=info msg="TearDown network for sandbox \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\" successfully" Apr 21 12:05:05.953015 containerd[1737]: time="2026-04-21T12:05:05.952970944Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 12:05:05.953237 containerd[1737]: time="2026-04-21T12:05:05.953060045Z" level=info msg="RemovePodSandbox \"8299b2f7dbd1e2be9872173e4551e0b8cebb0bac8ce742137b42bbe2a7098d25\" returns successfully" Apr 21 12:05:05.953540 containerd[1737]: time="2026-04-21T12:05:05.953510553Z" level=info msg="StopPodSandbox for \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\"" Apr 21 12:05:06.021507 containerd[1737]: 2026-04-21 12:05:05.988 [WARNING][6582] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8f4b8722-a01d-4ba7-afd8-a88d111a2e76", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3", Pod:"goldmane-cccfbd5cf-bzdfb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calid827d556f7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:05:06.021507 containerd[1737]: 2026-04-21 12:05:05.989 [INFO][6582] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Apr 21 12:05:06.021507 containerd[1737]: 2026-04-21 12:05:05.989 [INFO][6582] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" iface="eth0" netns="" Apr 21 12:05:06.021507 containerd[1737]: 2026-04-21 12:05:05.989 [INFO][6582] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Apr 21 12:05:06.021507 containerd[1737]: 2026-04-21 12:05:05.989 [INFO][6582] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Apr 21 12:05:06.021507 containerd[1737]: 2026-04-21 12:05:06.012 [INFO][6590] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" HandleID="k8s-pod-network.6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Workload="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0" Apr 21 12:05:06.021507 containerd[1737]: 2026-04-21 12:05:06.012 [INFO][6590] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 12:05:06.021507 containerd[1737]: 2026-04-21 12:05:06.012 [INFO][6590] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 12:05:06.021507 containerd[1737]: 2026-04-21 12:05:06.017 [WARNING][6590] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" HandleID="k8s-pod-network.6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Workload="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0" Apr 21 12:05:06.021507 containerd[1737]: 2026-04-21 12:05:06.017 [INFO][6590] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" HandleID="k8s-pod-network.6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Workload="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0" Apr 21 12:05:06.021507 containerd[1737]: 2026-04-21 12:05:06.019 [INFO][6590] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 12:05:06.021507 containerd[1737]: 2026-04-21 12:05:06.020 [INFO][6582] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Apr 21 12:05:06.021507 containerd[1737]: time="2026-04-21T12:05:06.021377383Z" level=info msg="TearDown network for sandbox \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\" successfully" Apr 21 12:05:06.021507 containerd[1737]: time="2026-04-21T12:05:06.021408483Z" level=info msg="StopPodSandbox for \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\" returns successfully" Apr 21 12:05:06.022228 containerd[1737]: time="2026-04-21T12:05:06.022015893Z" level=info msg="RemovePodSandbox for \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\"" Apr 21 12:05:06.022228 containerd[1737]: time="2026-04-21T12:05:06.022055294Z" level=info msg="Forcibly stopping sandbox \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\"" Apr 21 12:05:06.090767 containerd[1737]: 2026-04-21 12:05:06.057 [WARNING][6604] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8f4b8722-a01d-4ba7-afd8-a88d111a2e76", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"43567183befb33d70ffae03b884f19272eb940595c79d19cfe753035103f2eb3", Pod:"goldmane-cccfbd5cf-bzdfb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid827d556f7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 12:05:06.090767 containerd[1737]: 2026-04-21 12:05:06.057 [INFO][6604] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Apr 21 12:05:06.090767 containerd[1737]: 2026-04-21 12:05:06.057 [INFO][6604] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" iface="eth0" netns=""
Apr 21 12:05:06.090767 containerd[1737]: 2026-04-21 12:05:06.057 [INFO][6604] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6"
Apr 21 12:05:06.090767 containerd[1737]: 2026-04-21 12:05:06.057 [INFO][6604] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6"
Apr 21 12:05:06.090767 containerd[1737]: 2026-04-21 12:05:06.080 [INFO][6611] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" HandleID="k8s-pod-network.6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Workload="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0"
Apr 21 12:05:06.090767 containerd[1737]: 2026-04-21 12:05:06.080 [INFO][6611] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 12:05:06.090767 containerd[1737]: 2026-04-21 12:05:06.080 [INFO][6611] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 12:05:06.090767 containerd[1737]: 2026-04-21 12:05:06.086 [WARNING][6611] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" HandleID="k8s-pod-network.6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Workload="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0"
Apr 21 12:05:06.090767 containerd[1737]: 2026-04-21 12:05:06.086 [INFO][6611] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" HandleID="k8s-pod-network.6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6" Workload="ci--4081.3.7--a--a89817d5a7-k8s-goldmane--cccfbd5cf--bzdfb-eth0"
Apr 21 12:05:06.090767 containerd[1737]: 2026-04-21 12:05:06.087 [INFO][6611] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 12:05:06.090767 containerd[1737]: 2026-04-21 12:05:06.088 [INFO][6604] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6"
Apr 21 12:05:06.090767 containerd[1737]: time="2026-04-21T12:05:06.090613336Z" level=info msg="TearDown network for sandbox \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\" successfully"
Apr 21 12:05:06.101432 containerd[1737]: time="2026-04-21T12:05:06.101361115Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 12:05:06.101552 containerd[1737]: time="2026-04-21T12:05:06.101456116Z" level=info msg="RemovePodSandbox \"6d3e0b71f30bbb12e4acb31217bf83834638c834d6fe1382d1b802cd746dc3c6\" returns successfully"
Apr 21 12:05:06.102048 containerd[1737]: time="2026-04-21T12:05:06.102008925Z" level=info msg="StopPodSandbox for \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\""
Apr 21 12:05:06.169476 containerd[1737]: 2026-04-21 12:05:06.135 [WARNING][6626] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e240fd29-afe5-4e92-98bd-6ce65bc08a12", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43", Pod:"coredns-66bc5c9577-g54k5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22f6e0c37cd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 12:05:06.169476 containerd[1737]: 2026-04-21 12:05:06.135 [INFO][6626] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a"
Apr 21 12:05:06.169476 containerd[1737]: 2026-04-21 12:05:06.135 [INFO][6626] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" iface="eth0" netns=""
Apr 21 12:05:06.169476 containerd[1737]: 2026-04-21 12:05:06.135 [INFO][6626] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a"
Apr 21 12:05:06.169476 containerd[1737]: 2026-04-21 12:05:06.135 [INFO][6626] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a"
Apr 21 12:05:06.169476 containerd[1737]: 2026-04-21 12:05:06.159 [INFO][6633] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" HandleID="k8s-pod-network.c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0"
Apr 21 12:05:06.169476 containerd[1737]: 2026-04-21 12:05:06.159 [INFO][6633] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 12:05:06.169476 containerd[1737]: 2026-04-21 12:05:06.159 [INFO][6633] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 12:05:06.169476 containerd[1737]: 2026-04-21 12:05:06.165 [WARNING][6633] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" HandleID="k8s-pod-network.c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0"
Apr 21 12:05:06.169476 containerd[1737]: 2026-04-21 12:05:06.165 [INFO][6633] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" HandleID="k8s-pod-network.c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0"
Apr 21 12:05:06.169476 containerd[1737]: 2026-04-21 12:05:06.166 [INFO][6633] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 12:05:06.169476 containerd[1737]: 2026-04-21 12:05:06.168 [INFO][6626] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a"
Apr 21 12:05:06.169476 containerd[1737]: time="2026-04-21T12:05:06.169353747Z" level=info msg="TearDown network for sandbox \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\" successfully"
Apr 21 12:05:06.169476 containerd[1737]: time="2026-04-21T12:05:06.169377247Z" level=info msg="StopPodSandbox for \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\" returns successfully"
Apr 21 12:05:06.170275 containerd[1737]: time="2026-04-21T12:05:06.169901856Z" level=info msg="RemovePodSandbox for \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\""
Apr 21 12:05:06.170275 containerd[1737]: time="2026-04-21T12:05:06.169933457Z" level=info msg="Forcibly stopping sandbox \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\""
Apr 21 12:05:06.238802 containerd[1737]: 2026-04-21 12:05:06.204 [WARNING][6648] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e240fd29-afe5-4e92-98bd-6ce65bc08a12", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 12, 3, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-a89817d5a7", ContainerID:"4455ae9abde9639999ab8103d69cb5856e11c37ebe95bb6052744d2ff7a3fc43", Pod:"coredns-66bc5c9577-g54k5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22f6e0c37cd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 12:05:06.238802 containerd[1737]: 2026-04-21 12:05:06.204 [INFO][6648] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a"
Apr 21 12:05:06.238802 containerd[1737]: 2026-04-21 12:05:06.205 [INFO][6648] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" iface="eth0" netns=""
Apr 21 12:05:06.238802 containerd[1737]: 2026-04-21 12:05:06.205 [INFO][6648] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a"
Apr 21 12:05:06.238802 containerd[1737]: 2026-04-21 12:05:06.205 [INFO][6648] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a"
Apr 21 12:05:06.238802 containerd[1737]: 2026-04-21 12:05:06.227 [INFO][6656] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" HandleID="k8s-pod-network.c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0"
Apr 21 12:05:06.238802 containerd[1737]: 2026-04-21 12:05:06.227 [INFO][6656] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 12:05:06.238802 containerd[1737]: 2026-04-21 12:05:06.227 [INFO][6656] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 12:05:06.238802 containerd[1737]: 2026-04-21 12:05:06.234 [WARNING][6656] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" HandleID="k8s-pod-network.c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0"
Apr 21 12:05:06.238802 containerd[1737]: 2026-04-21 12:05:06.234 [INFO][6656] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" HandleID="k8s-pod-network.c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a" Workload="ci--4081.3.7--a--a89817d5a7-k8s-coredns--66bc5c9577--g54k5-eth0"
Apr 21 12:05:06.238802 containerd[1737]: 2026-04-21 12:05:06.236 [INFO][6656] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 12:05:06.238802 containerd[1737]: 2026-04-21 12:05:06.237 [INFO][6648] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a"
Apr 21 12:05:06.239908 containerd[1737]: time="2026-04-21T12:05:06.238871005Z" level=info msg="TearDown network for sandbox \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\" successfully"
Apr 21 12:05:06.247743 containerd[1737]: time="2026-04-21T12:05:06.247670251Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 12:05:06.247898 containerd[1737]: time="2026-04-21T12:05:06.247771953Z" level=info msg="RemovePodSandbox \"c9c2ba3f7c3fdb011f65dbcc1555c6f81669829c7f3afbb3fd8d01ef4e01363a\" returns successfully"
Apr 21 12:05:22.246600 systemd[1]: Started sshd@7-10.0.0.17:22-20.229.252.112:46836.service - OpenSSH per-connection server daemon (20.229.252.112:46836).
Apr 21 12:05:22.377067 sshd[6764]: Accepted publickey for core from 20.229.252.112 port 46836 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:05:22.378784 sshd[6764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:05:22.384418 systemd-logind[1709]: New session 10 of user core.
Apr 21 12:05:22.390016 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 21 12:05:22.562215 sshd[6764]: pam_unix(sshd:session): session closed for user core
Apr 21 12:05:22.567080 systemd[1]: sshd@7-10.0.0.17:22-20.229.252.112:46836.service: Deactivated successfully.
Apr 21 12:05:22.570461 systemd[1]: session-10.scope: Deactivated successfully.
Apr 21 12:05:22.571486 systemd-logind[1709]: Session 10 logged out. Waiting for processes to exit.
Apr 21 12:05:22.572749 systemd-logind[1709]: Removed session 10.
Apr 21 12:05:27.594474 systemd[1]: Started sshd@8-10.0.0.17:22-20.229.252.112:52932.service - OpenSSH per-connection server daemon (20.229.252.112:52932).
Apr 21 12:05:27.715938 sshd[6778]: Accepted publickey for core from 20.229.252.112 port 52932 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:05:27.717539 sshd[6778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:05:27.721915 systemd-logind[1709]: New session 11 of user core.
Apr 21 12:05:27.727001 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 21 12:05:27.884156 sshd[6778]: pam_unix(sshd:session): session closed for user core
Apr 21 12:05:27.888289 systemd-logind[1709]: Session 11 logged out. Waiting for processes to exit.
Apr 21 12:05:27.888979 systemd[1]: sshd@8-10.0.0.17:22-20.229.252.112:52932.service: Deactivated successfully.
Apr 21 12:05:27.891237 systemd[1]: session-11.scope: Deactivated successfully.
Apr 21 12:05:27.892214 systemd-logind[1709]: Removed session 11.
Apr 21 12:05:32.915152 systemd[1]: Started sshd@9-10.0.0.17:22-20.229.252.112:52948.service - OpenSSH per-connection server daemon (20.229.252.112:52948).
Apr 21 12:05:33.034072 sshd[6810]: Accepted publickey for core from 20.229.252.112 port 52948 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:05:33.035576 sshd[6810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:05:33.039881 systemd-logind[1709]: New session 12 of user core.
Apr 21 12:05:33.047208 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 21 12:05:33.213191 sshd[6810]: pam_unix(sshd:session): session closed for user core
Apr 21 12:05:33.216626 systemd[1]: sshd@9-10.0.0.17:22-20.229.252.112:52948.service: Deactivated successfully.
Apr 21 12:05:33.218811 systemd[1]: session-12.scope: Deactivated successfully.
Apr 21 12:05:33.220745 systemd-logind[1709]: Session 12 logged out. Waiting for processes to exit.
Apr 21 12:05:33.221902 systemd-logind[1709]: Removed session 12.
Apr 21 12:05:38.245242 systemd[1]: Started sshd@10-10.0.0.17:22-20.229.252.112:37428.service - OpenSSH per-connection server daemon (20.229.252.112:37428).
Apr 21 12:05:38.365224 sshd[6838]: Accepted publickey for core from 20.229.252.112 port 37428 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:05:38.366873 sshd[6838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:05:38.372410 systemd-logind[1709]: New session 13 of user core.
Apr 21 12:05:38.378984 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 21 12:05:38.543481 sshd[6838]: pam_unix(sshd:session): session closed for user core
Apr 21 12:05:38.547920 systemd[1]: sshd@10-10.0.0.17:22-20.229.252.112:37428.service: Deactivated successfully.
Apr 21 12:05:38.550672 systemd[1]: session-13.scope: Deactivated successfully.
Apr 21 12:05:38.552928 systemd-logind[1709]: Session 13 logged out. Waiting for processes to exit.
Apr 21 12:05:38.553997 systemd-logind[1709]: Removed session 13.
Apr 21 12:05:43.571611 systemd[1]: Started sshd@11-10.0.0.17:22-20.229.252.112:37436.service - OpenSSH per-connection server daemon (20.229.252.112:37436).
Apr 21 12:05:43.697076 sshd[6873]: Accepted publickey for core from 20.229.252.112 port 37436 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:05:43.698587 sshd[6873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:05:43.702892 systemd-logind[1709]: New session 14 of user core.
Apr 21 12:05:43.709020 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 21 12:05:43.874531 sshd[6873]: pam_unix(sshd:session): session closed for user core
Apr 21 12:05:43.879128 systemd-logind[1709]: Session 14 logged out. Waiting for processes to exit.
Apr 21 12:05:43.880321 systemd[1]: sshd@11-10.0.0.17:22-20.229.252.112:37436.service: Deactivated successfully.
Apr 21 12:05:43.882453 systemd[1]: session-14.scope: Deactivated successfully.
Apr 21 12:05:43.884264 systemd-logind[1709]: Removed session 14.
Apr 21 12:05:48.906160 systemd[1]: Started sshd@12-10.0.0.17:22-20.229.252.112:42210.service - OpenSSH per-connection server daemon (20.229.252.112:42210).
Apr 21 12:05:49.037558 sshd[6924]: Accepted publickey for core from 20.229.252.112 port 42210 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:05:49.039291 sshd[6924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:05:49.043447 systemd-logind[1709]: New session 15 of user core.
Apr 21 12:05:49.049014 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 21 12:05:49.208983 sshd[6924]: pam_unix(sshd:session): session closed for user core
Apr 21 12:05:49.213055 systemd[1]: sshd@12-10.0.0.17:22-20.229.252.112:42210.service: Deactivated successfully.
Apr 21 12:05:49.215355 systemd[1]: session-15.scope: Deactivated successfully.
Apr 21 12:05:49.216351 systemd-logind[1709]: Session 15 logged out. Waiting for processes to exit.
Apr 21 12:05:49.218164 systemd-logind[1709]: Removed session 15.
Apr 21 12:05:49.232760 systemd[1]: Started sshd@13-10.0.0.17:22-20.229.252.112:42222.service - OpenSSH per-connection server daemon (20.229.252.112:42222).
Apr 21 12:05:49.361464 sshd[6938]: Accepted publickey for core from 20.229.252.112 port 42222 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:05:49.363087 sshd[6938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:05:49.368951 systemd-logind[1709]: New session 16 of user core.
Apr 21 12:05:49.373992 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 21 12:05:49.607230 sshd[6938]: pam_unix(sshd:session): session closed for user core
Apr 21 12:05:49.611349 systemd-logind[1709]: Session 16 logged out. Waiting for processes to exit.
Apr 21 12:05:49.613106 systemd[1]: sshd@13-10.0.0.17:22-20.229.252.112:42222.service: Deactivated successfully.
Apr 21 12:05:49.616485 systemd[1]: session-16.scope: Deactivated successfully.
Apr 21 12:05:49.620684 systemd-logind[1709]: Removed session 16.
Apr 21 12:05:49.642171 systemd[1]: Started sshd@14-10.0.0.17:22-20.229.252.112:42232.service - OpenSSH per-connection server daemon (20.229.252.112:42232).
Apr 21 12:05:49.778310 sshd[6973]: Accepted publickey for core from 20.229.252.112 port 42232 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:05:49.780329 sshd[6973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:05:49.784434 systemd-logind[1709]: New session 17 of user core.
Apr 21 12:05:49.787997 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 21 12:05:49.948998 sshd[6973]: pam_unix(sshd:session): session closed for user core
Apr 21 12:05:49.952965 systemd[1]: sshd@14-10.0.0.17:22-20.229.252.112:42232.service: Deactivated successfully.
Apr 21 12:05:49.955476 systemd[1]: session-17.scope: Deactivated successfully.
Apr 21 12:05:49.957616 systemd-logind[1709]: Session 17 logged out. Waiting for processes to exit.
Apr 21 12:05:49.959105 systemd-logind[1709]: Removed session 17.
Apr 21 12:05:54.986941 systemd[1]: Started sshd@15-10.0.0.17:22-20.229.252.112:36126.service - OpenSSH per-connection server daemon (20.229.252.112:36126).
Apr 21 12:05:55.113368 sshd[7003]: Accepted publickey for core from 20.229.252.112 port 36126 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:05:55.117610 sshd[7003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:05:55.128182 systemd-logind[1709]: New session 18 of user core.
Apr 21 12:05:55.134457 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 21 12:05:55.294782 sshd[7003]: pam_unix(sshd:session): session closed for user core
Apr 21 12:05:55.299613 systemd[1]: sshd@15-10.0.0.17:22-20.229.252.112:36126.service: Deactivated successfully.
Apr 21 12:05:55.299955 systemd-logind[1709]: Session 18 logged out. Waiting for processes to exit.
Apr 21 12:05:55.302522 systemd[1]: session-18.scope: Deactivated successfully.
Apr 21 12:05:55.303728 systemd-logind[1709]: Removed session 18.
Apr 21 12:05:55.320764 systemd[1]: Started sshd@16-10.0.0.17:22-20.229.252.112:36128.service - OpenSSH per-connection server daemon (20.229.252.112:36128).
Apr 21 12:05:55.450261 sshd[7015]: Accepted publickey for core from 20.229.252.112 port 36128 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:05:55.451753 sshd[7015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:05:55.456898 systemd-logind[1709]: New session 19 of user core.
Apr 21 12:05:55.465024 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 12:05:55.681393 sshd[7015]: pam_unix(sshd:session): session closed for user core
Apr 21 12:05:55.685899 systemd[1]: sshd@16-10.0.0.17:22-20.229.252.112:36128.service: Deactivated successfully.
Apr 21 12:05:55.688239 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 12:05:55.689055 systemd-logind[1709]: Session 19 logged out. Waiting for processes to exit.
Apr 21 12:05:55.690242 systemd-logind[1709]: Removed session 19.
Apr 21 12:05:55.707724 systemd[1]: Started sshd@17-10.0.0.17:22-20.229.252.112:36132.service - OpenSSH per-connection server daemon (20.229.252.112:36132).
Apr 21 12:05:55.830590 sshd[7025]: Accepted publickey for core from 20.229.252.112 port 36132 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:05:55.831230 sshd[7025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:05:55.835875 systemd-logind[1709]: New session 20 of user core.
Apr 21 12:05:55.840995 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 12:05:56.792264 sshd[7025]: pam_unix(sshd:session): session closed for user core
Apr 21 12:05:56.798415 systemd-logind[1709]: Session 20 logged out. Waiting for processes to exit.
Apr 21 12:05:56.801670 systemd[1]: sshd@17-10.0.0.17:22-20.229.252.112:36132.service: Deactivated successfully.
Apr 21 12:05:56.807882 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 12:05:56.810995 systemd-logind[1709]: Removed session 20.
Apr 21 12:05:56.826470 systemd[1]: Started sshd@18-10.0.0.17:22-20.229.252.112:36144.service - OpenSSH per-connection server daemon (20.229.252.112:36144).
Apr 21 12:05:56.955921 sshd[7049]: Accepted publickey for core from 20.229.252.112 port 36144 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:05:56.957020 sshd[7049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:05:56.962300 systemd-logind[1709]: New session 21 of user core.
Apr 21 12:05:56.967008 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 21 12:05:57.246059 sshd[7049]: pam_unix(sshd:session): session closed for user core
Apr 21 12:05:57.250695 systemd-logind[1709]: Session 21 logged out. Waiting for processes to exit.
Apr 21 12:05:57.252737 systemd[1]: sshd@18-10.0.0.17:22-20.229.252.112:36144.service: Deactivated successfully.
Apr 21 12:05:57.256505 systemd[1]: session-21.scope: Deactivated successfully.
Apr 21 12:05:57.261461 systemd-logind[1709]: Removed session 21.
Apr 21 12:05:57.279143 systemd[1]: Started sshd@19-10.0.0.17:22-20.229.252.112:36148.service - OpenSSH per-connection server daemon (20.229.252.112:36148).
Apr 21 12:05:57.401495 sshd[7060]: Accepted publickey for core from 20.229.252.112 port 36148 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:05:57.403022 sshd[7060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:05:57.407359 systemd-logind[1709]: New session 22 of user core.
Apr 21 12:05:57.417085 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 21 12:05:57.569737 sshd[7060]: pam_unix(sshd:session): session closed for user core
Apr 21 12:05:57.574262 systemd[1]: sshd@19-10.0.0.17:22-20.229.252.112:36148.service: Deactivated successfully.
Apr 21 12:05:57.576218 systemd[1]: session-22.scope: Deactivated successfully.
Apr 21 12:05:57.577422 systemd-logind[1709]: Session 22 logged out. Waiting for processes to exit.
Apr 21 12:05:57.578797 systemd-logind[1709]: Removed session 22.
Apr 21 12:06:02.602202 systemd[1]: Started sshd@20-10.0.0.17:22-20.229.252.112:36162.service - OpenSSH per-connection server daemon (20.229.252.112:36162).
Apr 21 12:06:02.719872 sshd[7095]: Accepted publickey for core from 20.229.252.112 port 36162 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:06:02.721655 sshd[7095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:06:02.728813 systemd-logind[1709]: New session 23 of user core.
Apr 21 12:06:02.736027 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 21 12:06:02.894093 sshd[7095]: pam_unix(sshd:session): session closed for user core
Apr 21 12:06:02.897006 systemd[1]: sshd@20-10.0.0.17:22-20.229.252.112:36162.service: Deactivated successfully.
Apr 21 12:06:02.899432 systemd[1]: session-23.scope: Deactivated successfully.
Apr 21 12:06:02.901355 systemd-logind[1709]: Session 23 logged out. Waiting for processes to exit.
Apr 21 12:06:02.902458 systemd-logind[1709]: Removed session 23.
Apr 21 12:06:07.924145 systemd[1]: Started sshd@21-10.0.0.17:22-20.229.252.112:36640.service - OpenSSH per-connection server daemon (20.229.252.112:36640).
Apr 21 12:06:08.041176 sshd[7115]: Accepted publickey for core from 20.229.252.112 port 36640 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:06:08.042636 sshd[7115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:06:08.047232 systemd-logind[1709]: New session 24 of user core.
Apr 21 12:06:08.056027 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 21 12:06:08.205370 sshd[7115]: pam_unix(sshd:session): session closed for user core
Apr 21 12:06:08.209576 systemd[1]: sshd@21-10.0.0.17:22-20.229.252.112:36640.service: Deactivated successfully.
Apr 21 12:06:08.212089 systemd[1]: session-24.scope: Deactivated successfully.
Apr 21 12:06:08.213272 systemd-logind[1709]: Session 24 logged out. Waiting for processes to exit.
Apr 21 12:06:08.214423 systemd-logind[1709]: Removed session 24.
Apr 21 12:06:13.236144 systemd[1]: Started sshd@22-10.0.0.17:22-20.229.252.112:36654.service - OpenSSH per-connection server daemon (20.229.252.112:36654).
Apr 21 12:06:13.354249 sshd[7189]: Accepted publickey for core from 20.229.252.112 port 36654 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:06:13.355682 sshd[7189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:06:13.359880 systemd-logind[1709]: New session 25 of user core.
Apr 21 12:06:13.366991 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 21 12:06:13.519713 sshd[7189]: pam_unix(sshd:session): session closed for user core
Apr 21 12:06:13.523860 systemd-logind[1709]: Session 25 logged out. Waiting for processes to exit.
Apr 21 12:06:13.524568 systemd[1]: sshd@22-10.0.0.17:22-20.229.252.112:36654.service: Deactivated successfully.
Apr 21 12:06:13.527122 systemd[1]: session-25.scope: Deactivated successfully.
Apr 21 12:06:13.528376 systemd-logind[1709]: Removed session 25.
Apr 21 12:06:18.555134 systemd[1]: Started sshd@23-10.0.0.17:22-20.229.252.112:44840.service - OpenSSH per-connection server daemon (20.229.252.112:44840).
Apr 21 12:06:18.676434 sshd[7224]: Accepted publickey for core from 20.229.252.112 port 44840 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:06:18.677101 sshd[7224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:06:18.682232 systemd-logind[1709]: New session 26 of user core.
Apr 21 12:06:18.686108 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 21 12:06:18.847615 sshd[7224]: pam_unix(sshd:session): session closed for user core
Apr 21 12:06:18.852922 systemd-logind[1709]: Session 26 logged out. Waiting for processes to exit.
Apr 21 12:06:18.853601 systemd[1]: sshd@23-10.0.0.17:22-20.229.252.112:44840.service: Deactivated successfully.
Apr 21 12:06:18.855642 systemd[1]: session-26.scope: Deactivated successfully.
Apr 21 12:06:18.856754 systemd-logind[1709]: Removed session 26.
Apr 21 12:06:23.882175 systemd[1]: Started sshd@24-10.0.0.17:22-20.229.252.112:44854.service - OpenSSH per-connection server daemon (20.229.252.112:44854).
Apr 21 12:06:24.019861 sshd[7241]: Accepted publickey for core from 20.229.252.112 port 44854 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:06:24.020864 sshd[7241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:06:24.026144 systemd-logind[1709]: New session 27 of user core.
Apr 21 12:06:24.029033 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 21 12:06:24.184613 sshd[7241]: pam_unix(sshd:session): session closed for user core
Apr 21 12:06:24.188240 systemd[1]: sshd@24-10.0.0.17:22-20.229.252.112:44854.service: Deactivated successfully.
Apr 21 12:06:24.190583 systemd[1]: session-27.scope: Deactivated successfully.
Apr 21 12:06:24.192965 systemd-logind[1709]: Session 27 logged out. Waiting for processes to exit.
Apr 21 12:06:24.194224 systemd-logind[1709]: Removed session 27.
Apr 21 12:06:29.217155 systemd[1]: Started sshd@25-10.0.0.17:22-20.229.252.112:58898.service - OpenSSH per-connection server daemon (20.229.252.112:58898).
Apr 21 12:06:29.336981 sshd[7254]: Accepted publickey for core from 20.229.252.112 port 58898 ssh2: RSA SHA256:/nU+JYutxcadp5FHnPMNTX4JZPpw85YbQ/9XYRgFgds
Apr 21 12:06:29.338600 sshd[7254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 12:06:29.343428 systemd-logind[1709]: New session 28 of user core.
Apr 21 12:06:29.348989 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 21 12:06:29.501483 sshd[7254]: pam_unix(sshd:session): session closed for user core
Apr 21 12:06:29.505502 systemd[1]: sshd@25-10.0.0.17:22-20.229.252.112:58898.service: Deactivated successfully.
Apr 21 12:06:29.507584 systemd[1]: session-28.scope: Deactivated successfully.
Apr 21 12:06:29.508397 systemd-logind[1709]: Session 28 logged out. Waiting for processes to exit.
Apr 21 12:06:29.509578 systemd-logind[1709]: Removed session 28.
Apr 21 12:06:30.372098 systemd[1]: run-containerd-runc-k8s.io-2f89cc909660576faf17e166f727e41c52e377ed8041fff9e87c9121df0f2980-runc.tuh38d.mount: Deactivated successfully.