Apr 24 23:57:17.117997 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 24 22:11:38 -00 2026
Apr 24 23:57:17.118023 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:57:17.118038 kernel: BIOS-provided physical RAM map:
Apr 24 23:57:17.118044 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 24 23:57:17.118050 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Apr 24 23:57:17.118055 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable
Apr 24 23:57:17.118066 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved
Apr 24 23:57:17.118074 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable
Apr 24 23:57:17.118082 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20
Apr 24 23:57:17.118090 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved
Apr 24 23:57:17.118099 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Apr 24 23:57:17.118105 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Apr 24 23:57:17.118111 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Apr 24 23:57:17.118118 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Apr 24 23:57:17.118132 kernel: printk: bootconsole [earlyser0] enabled
Apr 24 23:57:17.118139 kernel: NX (Execute Disable) protection: active
Apr 24 23:57:17.118146 kernel: APIC: Static calls initialized
Apr 24 23:57:17.118157 kernel: efi: EFI v2.7 by Microsoft
Apr 24 23:57:17.118165 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f420518
Apr 24 23:57:17.118172 kernel: SMBIOS 3.1.0 present.
Apr 24 23:57:17.118181 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/08/2026
Apr 24 23:57:17.118190 kernel: Hypervisor detected: Microsoft Hyper-V
Apr 24 23:57:17.118197 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Apr 24 23:57:17.118204 kernel: Hyper-V: Host Build 10.0.26102.1277-1-0
Apr 24 23:57:17.118216 kernel: Hyper-V: Nested features: 0x1e0101
Apr 24 23:57:17.118225 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Apr 24 23:57:17.118231 kernel: Hyper-V: Using hypercall for remote TLB flush
Apr 24 23:57:17.118241 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 24 23:57:17.118250 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Apr 24 23:57:17.118258 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Apr 24 23:57:17.118270 kernel: tsc: Detected 2593.907 MHz processor
Apr 24 23:57:17.118277 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 24 23:57:17.118289 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 24 23:57:17.118296 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Apr 24 23:57:17.118309 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 24 23:57:17.118316 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 24 23:57:17.118323 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Apr 24 23:57:17.118330 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Apr 24 23:57:17.118338 kernel: Using GB pages for direct mapping
Apr 24 23:57:17.118349 kernel: Secure boot disabled
Apr 24 23:57:17.118359 kernel: ACPI: Early table checksum verification disabled
Apr 24 23:57:17.118374 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Apr 24 23:57:17.118382 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 24 23:57:17.118389 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 24 23:57:17.118402 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 24 23:57:17.118409 kernel: ACPI: FACS 0x000000003FFFE000 000040
Apr 24 23:57:17.118417 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 24 23:57:17.118429 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 24 23:57:17.118439 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 24 23:57:17.118448 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 24 23:57:17.118458 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 24 23:57:17.118465 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 24 23:57:17.118473 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Apr 24 23:57:17.118485 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Apr 24 23:57:17.118492 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Apr 24 23:57:17.118501 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Apr 24 23:57:17.118511 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Apr 24 23:57:17.118521 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Apr 24 23:57:17.118531 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Apr 24 23:57:17.118540 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df]
Apr 24 23:57:17.118547 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Apr 24 23:57:17.118557 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 24 23:57:17.118566 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 24 23:57:17.118582 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Apr 24 23:57:17.118593 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Apr 24 23:57:17.118600 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Apr 24 23:57:17.118613 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Apr 24 23:57:17.118621 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Apr 24 23:57:17.118629 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Apr 24 23:57:17.118639 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Apr 24 23:57:17.118648 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Apr 24 23:57:17.118655 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Apr 24 23:57:17.118666 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Apr 24 23:57:17.118675 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Apr 24 23:57:17.118684 kernel: NODE_DATA(0) allocated [mem 0x2bfff8000-0x2bfffdfff]
Apr 24 23:57:17.118695 kernel: Zone ranges:
Apr 24 23:57:17.118704 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Apr 24 23:57:17.118711 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Apr 24 23:57:17.118722 kernel:   Normal   [mem 0x0000000100000000-0x00000002bfffffff]
Apr 24 23:57:17.118731 kernel: Movable zone start for each node
Apr 24 23:57:17.118738 kernel: Early memory node ranges
Apr 24 23:57:17.118749 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 24 23:57:17.118758 kernel:   node   0: [mem 0x0000000000100000-0x000000000437dfff]
Apr 24 23:57:17.118772 kernel:   node   0: [mem 0x000000000477e000-0x000000003ff1efff]
Apr 24 23:57:17.118779 kernel:   node   0: [mem 0x000000003ffff000-0x000000003fffffff]
Apr 24 23:57:17.118791 kernel:   node   0: [mem 0x0000000100000000-0x00000002bfffffff]
Apr 24 23:57:17.118801 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Apr 24 23:57:17.118811 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 23:57:17.118818 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 24 23:57:17.118826 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Apr 24 23:57:17.118834 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Apr 24 23:57:17.118845 kernel: ACPI: PM-Timer IO Port: 0x408
Apr 24 23:57:17.118855 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Apr 24 23:57:17.118865 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Apr 24 23:57:17.118874 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 24 23:57:17.118882 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 24 23:57:17.118891 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Apr 24 23:57:17.118901 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 24 23:57:17.118909 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Apr 24 23:57:17.118916 kernel: Booting paravirtualized kernel on Hyper-V
Apr 24 23:57:17.118924 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 24 23:57:17.118933 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 24 23:57:17.118945 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 24 23:57:17.118953 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 24 23:57:17.118960 kernel: pcpu-alloc: [0] 0 1
Apr 24 23:57:17.118972 kernel: Hyper-V: PV spinlocks enabled
Apr 24 23:57:17.118980 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 24 23:57:17.118989 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:57:17.119001 kernel: random: crng init done
Apr 24 23:57:17.119013 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 24 23:57:17.119025 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 23:57:17.119033 kernel: Fallback order for Node 0: 0
Apr 24 23:57:17.119045 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321
Apr 24 23:57:17.119056 kernel: Policy zone: Normal
Apr 24 23:57:17.119063 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 23:57:17.119070 kernel: software IO TLB: area num 2.
Apr 24 23:57:17.119083 kernel: Memory: 8061204K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 321764K reserved, 0K cma-reserved)
Apr 24 23:57:17.119091 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 24 23:57:17.119111 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 24 23:57:17.119123 kernel: ftrace: allocated 149 pages with 4 groups
Apr 24 23:57:17.119131 kernel: Dynamic Preempt: voluntary
Apr 24 23:57:17.119146 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 23:57:17.119155 kernel: rcu: RCU event tracing is enabled.
Apr 24 23:57:17.119163 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 24 23:57:17.119176 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 23:57:17.119184 kernel: Rude variant of Tasks RCU enabled.
Apr 24 23:57:17.119192 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 23:57:17.119207 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 23:57:17.119215 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 24 23:57:17.119227 kernel: Using NULL legacy PIC
Apr 24 23:57:17.119235 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Apr 24 23:57:17.119248 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 23:57:17.119256 kernel: Console: colour dummy device 80x25
Apr 24 23:57:17.119269 kernel: printk: console [tty1] enabled
Apr 24 23:57:17.119278 kernel: printk: console [ttyS0] enabled
Apr 24 23:57:17.119292 kernel: printk: bootconsole [earlyser0] disabled
Apr 24 23:57:17.119305 kernel: ACPI: Core revision 20230628
Apr 24 23:57:17.119318 kernel: Failed to register legacy timer interrupt
Apr 24 23:57:17.119332 kernel: APIC: Switch to symmetric I/O mode setup
Apr 24 23:57:17.119345 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 24 23:57:17.119360 kernel: Hyper-V: Using IPI hypercalls
Apr 24 23:57:17.119374 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Apr 24 23:57:17.119388 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Apr 24 23:57:17.119403 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Apr 24 23:57:17.119423 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Apr 24 23:57:17.119438 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Apr 24 23:57:17.119454 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Apr 24 23:57:17.119470 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Apr 24 23:57:17.119486 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 24 23:57:17.119502 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 24 23:57:17.119517 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 24 23:57:17.119532 kernel: Spectre V2 : Mitigation: Retpolines
Apr 24 23:57:17.119547 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 24 23:57:17.119563 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 24 23:57:17.119600 kernel: RETBleed: Vulnerable
Apr 24 23:57:17.119616 kernel: Speculative Store Bypass: Vulnerable
Apr 24 23:57:17.119632 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 23:57:17.119648 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 23:57:17.119663 kernel: active return thunk: its_return_thunk
Apr 24 23:57:17.119679 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 24 23:57:17.119692 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 24 23:57:17.119705 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 24 23:57:17.119720 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 24 23:57:17.119733 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 24 23:57:17.119751 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 24 23:57:17.119764 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 24 23:57:17.119776 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 24 23:57:17.119789 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 24 23:57:17.119803 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 24 23:57:17.119818 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 24 23:57:17.119833 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 24 23:57:17.119847 kernel: Freeing SMP alternatives memory: 32K
Apr 24 23:57:17.119862 kernel: pid_max: default: 32768 minimum: 301
Apr 24 23:57:17.119877 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 24 23:57:17.119892 kernel: landlock: Up and running.
Apr 24 23:57:17.119907 kernel: SELinux: Initializing.
Apr 24 23:57:17.119924 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 24 23:57:17.119940 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 24 23:57:17.119955 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 24 23:57:17.119970 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:57:17.119985 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:57:17.119999 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:57:17.120013 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 24 23:57:17.120028 kernel: signal: max sigframe size: 3632
Apr 24 23:57:17.120042 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 23:57:17.120059 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 23:57:17.120074 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 24 23:57:17.120089 kernel: smp: Bringing up secondary CPUs ...
Apr 24 23:57:17.120105 kernel: smpboot: x86: Booting SMP configuration:
Apr 24 23:57:17.120120 kernel: .... node #0, CPUs: #1
Apr 24 23:57:17.120135 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Apr 24 23:57:17.120151 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 24 23:57:17.120167 kernel: smp: Brought up 1 node, 2 CPUs
Apr 24 23:57:17.120182 kernel: smpboot: Max logical packages: 1
Apr 24 23:57:17.120199 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Apr 24 23:57:17.120213 kernel: devtmpfs: initialized
Apr 24 23:57:17.120227 kernel: x86/mm: Memory block size: 128MB
Apr 24 23:57:17.120240 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Apr 24 23:57:17.120253 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 23:57:17.120268 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 24 23:57:17.120282 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 23:57:17.120295 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 23:57:17.120308 kernel: audit: initializing netlink subsys (disabled)
Apr 24 23:57:17.120325 kernel: audit: type=2000 audit(1777075035.029:1): state=initialized audit_enabled=0 res=1
Apr 24 23:57:17.120339 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 23:57:17.120365 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 24 23:57:17.120378 kernel: cpuidle: using governor menu
Apr 24 23:57:17.120392 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 23:57:17.120407 kernel: dca service started, version 1.12.1
Apr 24 23:57:17.120422 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff]
Apr 24 23:57:17.120448 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Apr 24 23:57:17.120463 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 24 23:57:17.120484 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:57:17.120498 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:57:17.120511 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:57:17.120525 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:57:17.120545 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:57:17.120558 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:57:17.120571 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:57:17.120621 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 23:57:17.120638 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 24 23:57:17.120651 kernel: ACPI: Interpreter enabled
Apr 24 23:57:17.120665 kernel: ACPI: PM: (supports S0 S5)
Apr 24 23:57:17.120677 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 23:57:17.120692 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 23:57:17.120706 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 24 23:57:17.120720 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Apr 24 23:57:17.120734 kernel: iommu: Default domain type: Translated
Apr 24 23:57:17.120748 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 23:57:17.120761 kernel: efivars: Registered efivars operations
Apr 24 23:57:17.120776 kernel: PCI: Using ACPI for IRQ routing
Apr 24 23:57:17.120789 kernel: PCI: System does not support PCI
Apr 24 23:57:17.120802 kernel: vgaarb: loaded
Apr 24 23:57:17.120816 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Apr 24 23:57:17.120830 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:57:17.120843 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:57:17.120858 kernel: pnp: PnP ACPI init
Apr 24 23:57:17.120871 kernel: pnp: PnP ACPI: found 3 devices
Apr 24 23:57:17.120883 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 23:57:17.120898 kernel: NET: Registered PF_INET protocol family
Apr 24 23:57:17.120911 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 24 23:57:17.120923 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 24 23:57:17.120937 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:57:17.120951 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 23:57:17.120964 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 24 23:57:17.120977 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 24 23:57:17.120990 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 24 23:57:17.121004 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 24 23:57:17.121019 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 23:57:17.121032 kernel: NET: Registered PF_XDP protocol family
Apr 24 23:57:17.121046 kernel: PCI: CLS 0 bytes, default 64
Apr 24 23:57:17.121060 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 24 23:57:17.121074 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB)
Apr 24 23:57:17.121088 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 24 23:57:17.121102 kernel: Initialise system trusted keyrings
Apr 24 23:57:17.121116 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 24 23:57:17.121134 kernel: Key type asymmetric registered
Apr 24 23:57:17.121148 kernel: Asymmetric key parser 'x509' registered
Apr 24 23:57:17.121161 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 24 23:57:17.121176 kernel: io scheduler mq-deadline registered
Apr 24 23:57:17.121190 kernel: io scheduler kyber registered
Apr 24 23:57:17.121203 kernel: io scheduler bfq registered
Apr 24 23:57:17.121217 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 23:57:17.121231 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 23:57:17.121245 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 23:57:17.121259 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 24 23:57:17.121277 kernel: i8042: PNP: No PS/2 controller found.
Apr 24 23:57:17.121457 kernel: rtc_cmos 00:02: registered as rtc0
Apr 24 23:57:17.121605 kernel: rtc_cmos 00:02: setting system clock to 2026-04-24T23:57:16 UTC (1777075036)
Apr 24 23:57:17.121729 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Apr 24 23:57:17.121747 kernel: intel_pstate: CPU model not supported
Apr 24 23:57:17.121762 kernel: efifb: probing for efifb
Apr 24 23:57:17.121776 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 24 23:57:17.121795 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 24 23:57:17.121809 kernel: efifb: scrolling: redraw
Apr 24 23:57:17.121823 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 24 23:57:17.121837 kernel: Console: switching to colour frame buffer device 128x48
Apr 24 23:57:17.121851 kernel: fb0: EFI VGA frame buffer device
Apr 24 23:57:17.121865 kernel: pstore: Using crash dump compression: deflate
Apr 24 23:57:17.121879 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 24 23:57:17.121893 kernel: NET: Registered PF_INET6 protocol family
Apr 24 23:57:17.121907 kernel: Segment Routing with IPv6
Apr 24 23:57:17.121925 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 23:57:17.121938 kernel: NET: Registered PF_PACKET protocol family
Apr 24 23:57:17.121953 kernel: Key type dns_resolver registered
Apr 24 23:57:17.121966 kernel: IPI shorthand broadcast: enabled
Apr 24 23:57:17.121980 kernel: sched_clock: Marking stable (928003400, 56799000)->(1248263800, -263461400)
Apr 24 23:57:17.121995 kernel: registered taskstats version 1
Apr 24 23:57:17.122009 kernel: Loading compiled-in X.509 certificates
Apr 24 23:57:17.122023 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124'
Apr 24 23:57:17.122037 kernel: Key type .fscrypt registered
Apr 24 23:57:17.122054 kernel: Key type fscrypt-provisioning registered
Apr 24 23:57:17.122067 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 23:57:17.122081 kernel: ima: Allocated hash algorithm: sha1
Apr 24 23:57:17.122095 kernel: ima: No architecture policies found
Apr 24 23:57:17.122108 kernel: clk: Disabling unused clocks
Apr 24 23:57:17.122123 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 24 23:57:17.122137 kernel: Write protecting the kernel read-only data: 36864k
Apr 24 23:57:17.122151 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 24 23:57:17.122164 kernel: Run /init as init process
Apr 24 23:57:17.122180 kernel:   with arguments:
Apr 24 23:57:17.122194 kernel:     /init
Apr 24 23:57:17.122208 kernel:   with environment:
Apr 24 23:57:17.122220 kernel:     HOME=/
Apr 24 23:57:17.122232 kernel:     TERM=linux
Apr 24 23:57:17.122249 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:57:17.122265 systemd[1]: Detected virtualization microsoft.
Apr 24 23:57:17.122279 systemd[1]: Detected architecture x86-64.
Apr 24 23:57:17.122296 systemd[1]: Running in initrd.
Apr 24 23:57:17.122310 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:57:17.122323 systemd[1]: Hostname set to .
Apr 24 23:57:17.122337 systemd[1]: Initializing machine ID from random generator.
Apr 24 23:57:17.122352 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:57:17.122365 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:57:17.122379 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:57:17.122394 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:57:17.122412 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:57:17.122427 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:57:17.122444 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:57:17.122461 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:57:17.122476 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:57:17.122491 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:57:17.122506 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:57:17.122524 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:57:17.122540 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:57:17.122556 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:57:17.122570 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:57:17.122602 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:57:17.122617 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:57:17.122632 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:57:17.122663 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:57:17.122678 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:57:17.122695 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:57:17.122710 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:57:17.122725 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:57:17.122738 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:57:17.122753 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:57:17.122767 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:57:17.122781 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:57:17.122796 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:57:17.122813 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:57:17.122855 systemd-journald[177]: Collecting audit messages is disabled.
Apr 24 23:57:17.122887 systemd-journald[177]: Journal started
Apr 24 23:57:17.122923 systemd-journald[177]: Runtime Journal (/run/log/journal/7bf8448f6a4d439fb3dc90748c4e7eae) is 8.0M, max 158.7M, 150.7M free.
Apr 24 23:57:17.129707 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:57:17.156142 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:57:17.152192 systemd-modules-load[178]: Inserted module 'overlay'
Apr 24 23:57:17.152880 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:57:17.159002 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:57:17.169441 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:57:17.175029 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:57:17.191782 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:57:17.208596 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:57:17.209743 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:57:17.222280 kernel: Bridge firewalling registered
Apr 24 23:57:17.221693 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:57:17.224539 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:57:17.228747 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:57:17.236586 systemd-modules-load[178]: Inserted module 'br_netfilter'
Apr 24 23:57:17.236773 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:57:17.253227 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:57:17.257804 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:57:17.272558 dracut-cmdline[203]: dracut-dracut-053
Apr 24 23:57:17.277177 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:57:17.294498 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:57:17.301323 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:57:17.305610 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:57:17.324250 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:57:17.340040 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:57:17.379188 systemd-resolved[247]: Positive Trust Anchors:
Apr 24 23:57:17.379202 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:57:17.379251 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:57:17.410234 systemd-resolved[247]: Defaulting to hostname 'linux'.
Apr 24 23:57:17.420640 kernel: SCSI subsystem initialized
Apr 24 23:57:17.411419 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:57:17.419695 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:57:17.435592 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:57:17.446594 kernel: iscsi: registered transport (tcp)
Apr 24 23:57:17.467477 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:57:17.467540 kernel: QLogic iSCSI HBA Driver
Apr 24 23:57:17.503136 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:57:17.520749 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:57:17.549977 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:57:17.550041 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:57:17.554004 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:57:17.593594 kernel: raid6: avx512x4 gen() 18187 MB/s
Apr 24 23:57:17.613591 kernel: raid6: avx512x2 gen() 18354 MB/s
Apr 24 23:57:17.632584 kernel: raid6: avx512x1 gen() 18166 MB/s
Apr 24 23:57:17.651586 kernel: raid6: avx2x4 gen() 18212 MB/s
Apr 24 23:57:17.671590 kernel: raid6: avx2x2 gen() 18096 MB/s
Apr 24 23:57:17.693816 kernel: raid6: avx2x1 gen() 13630 MB/s
Apr 24 23:57:17.693843 kernel: raid6: using algorithm avx512x2 gen() 18354 MB/s
Apr 24 23:57:17.715324 kernel: raid6: .... xor() 31680 MB/s, rmw enabled
Apr 24 23:57:17.715354 kernel: raid6: using avx512x2 recovery algorithm
Apr 24 23:57:17.737595 kernel: xor: automatically using best checksumming function   avx
Apr 24 23:57:17.885608 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:57:17.895362 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:57:17.906756 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:57:17.922106 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Apr 24 23:57:17.926744 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:57:17.937790 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:57:17.952746 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
Apr 24 23:57:17.977993 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:57:17.988695 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:57:18.030719 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:57:18.044725 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 24 23:57:18.061307 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 24 23:57:18.072510 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 24 23:57:18.081011 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 24 23:57:18.085027 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 24 23:57:18.103699 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 24 23:57:18.119689 kernel: cryptd: max_cpu_qlen set to 1000 Apr 24 23:57:18.129306 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 24 23:57:18.152591 kernel: AVX2 version of gcm_enc/dec engaged. Apr 24 23:57:18.152650 kernel: AES CTR mode by8 optimization enabled Apr 24 23:57:18.165590 kernel: hv_vmbus: Vmbus version:5.2 Apr 24 23:57:18.169247 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 24 23:57:18.172890 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 23:57:18.186713 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 24 23:57:18.177268 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 24 23:57:18.186733 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:57:18.195334 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:57:18.199474 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:57:18.218596 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Apr 24 23:57:18.222695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 24 23:57:18.235209 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 24 23:57:18.236265 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:57:18.236465 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:57:18.252770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:57:18.265589 kernel: hv_vmbus: registering driver hv_netvsc Apr 24 23:57:18.271763 kernel: hv_vmbus: registering driver hv_storvsc Apr 24 23:57:18.280473 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 24 23:57:18.280504 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 24 23:57:18.280520 kernel: scsi host1: storvsc_host_t Apr 24 23:57:18.287606 kernel: scsi host0: storvsc_host_t Apr 24 23:57:18.294850 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 24 23:57:18.301641 kernel: PTP clock support registered Apr 24 23:57:18.301675 kernel: hv_vmbus: registering driver hid_hyperv Apr 24 23:57:18.301628 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:57:19.414444 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Apr 24 23:57:19.414669 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Apr 24 23:57:19.414691 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 24 23:57:19.414870 kernel: hv_utils: Registering HyperV Utility Driver Apr 24 23:57:19.414893 kernel: hv_vmbus: registering driver hv_utils Apr 24 23:57:19.414917 kernel: hv_utils: Heartbeat IC version 3.0 Apr 24 23:57:19.414935 kernel: hv_utils: Shutdown IC version 3.2 Apr 24 23:57:19.414952 kernel: hv_utils: TimeSync IC version 4.0 Apr 24 23:57:19.414172 systemd-resolved[247]: Clock change detected. Flushing caches. 
Apr 24 23:57:19.419868 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 24 23:57:19.443580 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 23:57:19.457635 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 24 23:57:19.457953 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 24 23:57:19.459439 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 24 23:57:19.471637 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 24 23:57:19.471947 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 24 23:57:19.474166 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 24 23:57:19.478662 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 24 23:57:19.478929 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 24 23:57:19.490935 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 24 23:57:19.490983 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#211 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 24 23:57:19.494715 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 24 23:57:19.525343 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#216 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 24 23:57:19.525572 kernel: hv_netvsc 7c1e5236-befc-7c1e-5236-befc7c1e5236 eth0: VF slot 1 added Apr 24 23:57:19.532420 kernel: hv_vmbus: registering driver hv_pci Apr 24 23:57:19.537435 kernel: hv_pci 91750f0a-7e65-46f5-a62f-1f643ef0feb9: PCI VMBus probing: Using version 0x10004 Apr 24 23:57:19.547563 kernel: hv_pci 91750f0a-7e65-46f5-a62f-1f643ef0feb9: PCI host bridge to bus 7e65:00 Apr 24 23:57:19.547820 kernel: pci_bus 7e65:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Apr 24 23:57:19.548420 kernel: pci_bus 7e65:00: No busn resource found for root bus, will use [bus 00-ff] Apr 24 23:57:19.556694 kernel: pci 7e65:00:02.0: [15b3:1016] type 00 class 0x020000
Apr 24 23:57:19.562658 kernel: pci 7e65:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 24 23:57:19.567471 kernel: pci 7e65:00:02.0: enabling Extended Tags Apr 24 23:57:19.580688 kernel: pci 7e65:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7e65:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Apr 24 23:57:19.588089 kernel: pci_bus 7e65:00: busn_res: [bus 00-ff] end is updated to 00 Apr 24 23:57:19.588348 kernel: pci 7e65:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 24 23:57:19.752297 kernel: mlx5_core 7e65:00:02.0: enabling device (0000 -> 0002) Apr 24 23:57:19.757427 kernel: mlx5_core 7e65:00:02.0: firmware version: 14.30.5026 Apr 24 23:57:19.924376 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Apr 24 23:57:19.971429 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (466) Apr 24 23:57:19.986718 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 24 23:57:20.013688 kernel: hv_netvsc 7c1e5236-befc-7c1e-5236-befc7c1e5236 eth0: VF registering: eth1 Apr 24 23:57:20.013884 kernel: mlx5_core 7e65:00:02.0 eth1: joined to eth0 Apr 24 23:57:20.014031 kernel: mlx5_core 7e65:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 24 23:57:20.003988 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 24 23:57:20.027419 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (456) Apr 24 23:57:20.027472 kernel: mlx5_core 7e65:00:02.0 enP32357s1: renamed from eth1 Apr 24 23:57:20.029127 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Apr 24 23:57:20.048627 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Apr 24 23:57:20.074424 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 24 23:57:20.084430 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 24 23:57:20.092421 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 24 23:57:20.431771 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 24 23:57:21.098783 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 24 23:57:21.098858 disk-uuid[604]: The operation has completed successfully. Apr 24 23:57:21.187983 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 24 23:57:21.188100 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 24 23:57:21.213633 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 24 23:57:21.223784 sh[717]: Success Apr 24 23:57:21.252449 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 24 23:57:21.515967 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 24 23:57:21.532527 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 24 23:57:21.539824 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 24 23:57:21.564501 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681 Apr 24 23:57:21.564574 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:57:21.568770 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 24 23:57:21.571980 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 24 23:57:21.575017 kernel: BTRFS info (device dm-0): using free space tree Apr 24 23:57:21.809286 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 24 23:57:21.812771 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Apr 24 23:57:21.821975 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 24 23:57:21.829554 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 24 23:57:21.865522 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:57:21.865583 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:57:21.865603 kernel: BTRFS info (device sda6): using free space tree Apr 24 23:57:21.908423 kernel: BTRFS info (device sda6): auto enabling async discard Apr 24 23:57:21.923421 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:57:21.923509 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 24 23:57:21.930067 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 24 23:57:21.938751 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 24 23:57:21.949585 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 24 23:57:21.957685 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 23:57:21.983898 systemd-networkd[901]: lo: Link UP Apr 24 23:57:21.983910 systemd-networkd[901]: lo: Gained carrier Apr 24 23:57:21.986190 systemd-networkd[901]: Enumeration completed Apr 24 23:57:21.986519 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 24 23:57:21.987103 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:57:21.987106 systemd-networkd[901]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 23:57:21.990439 systemd[1]: Reached target network.target - Network. 
Apr 24 23:57:22.064419 kernel: mlx5_core 7e65:00:02.0 enP32357s1: Link up Apr 24 23:57:22.100581 kernel: hv_netvsc 7c1e5236-befc-7c1e-5236-befc7c1e5236 eth0: Data path switched to VF: enP32357s1 Apr 24 23:57:22.100776 systemd-networkd[901]: enP32357s1: Link UP Apr 24 23:57:22.100906 systemd-networkd[901]: eth0: Link UP Apr 24 23:57:22.101073 systemd-networkd[901]: eth0: Gained carrier Apr 24 23:57:22.101083 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:57:22.106612 systemd-networkd[901]: enP32357s1: Gained carrier Apr 24 23:57:22.144451 systemd-networkd[901]: eth0: DHCPv4 address 10.0.0.19/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 24 23:57:22.861686 ignition[900]: Ignition 2.19.0 Apr 24 23:57:22.861699 ignition[900]: Stage: fetch-offline Apr 24 23:57:22.861744 ignition[900]: no configs at "/usr/lib/ignition/base.d" Apr 24 23:57:22.861755 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:57:22.866891 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 24 23:57:22.861872 ignition[900]: parsed url from cmdline: "" Apr 24 23:57:22.884525 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 24 23:57:22.861877 ignition[900]: no config URL provided Apr 24 23:57:22.861884 ignition[900]: reading system config file "/usr/lib/ignition/user.ign" Apr 24 23:57:22.861895 ignition[900]: no config at "/usr/lib/ignition/user.ign" Apr 24 23:57:22.861901 ignition[900]: failed to fetch config: resource requires networking Apr 24 23:57:22.864140 ignition[900]: Ignition finished successfully Apr 24 23:57:22.911235 ignition[909]: Ignition 2.19.0 Apr 24 23:57:22.911247 ignition[909]: Stage: fetch Apr 24 23:57:22.911459 ignition[909]: no configs at "/usr/lib/ignition/base.d" Apr 24 23:57:22.911472 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:57:22.913226 ignition[909]: parsed url from cmdline: "" Apr 24 23:57:22.913243 ignition[909]: no config URL provided Apr 24 23:57:22.913251 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Apr 24 23:57:22.913264 ignition[909]: no config at "/usr/lib/ignition/user.ign" Apr 24 23:57:22.913292 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 24 23:57:23.012414 ignition[909]: GET result: OK Apr 24 23:57:23.012532 ignition[909]: config has been read from IMDS userdata Apr 24 23:57:23.012564 ignition[909]: parsing config with SHA512: f892170ce766e7d26c6849d5bce3d3881810388644924f8fd056b23df27a8bf085121407a1dddd4eaa7756d63425821288b5f60b859cd4620aec571b908aee79 Apr 24 23:57:23.016987 unknown[909]: fetched base config from "system" Apr 24 23:57:23.017503 ignition[909]: fetch: fetch complete Apr 24 23:57:23.016994 unknown[909]: fetched base config from "system" Apr 24 23:57:23.017510 ignition[909]: fetch: fetch passed Apr 24 23:57:23.017002 unknown[909]: fetched user config from "azure" Apr 24 23:57:23.017574 ignition[909]: Ignition finished successfully Apr 24 23:57:23.019279 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Apr 24 23:57:23.033750 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 24 23:57:23.048002 ignition[915]: Ignition 2.19.0 Apr 24 23:57:23.048007 ignition[915]: Stage: kargs Apr 24 23:57:23.051388 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 24 23:57:23.048274 ignition[915]: no configs at "/usr/lib/ignition/base.d" Apr 24 23:57:23.048288 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:57:23.049887 ignition[915]: kargs: kargs passed Apr 24 23:57:23.049938 ignition[915]: Ignition finished successfully Apr 24 23:57:23.065581 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 24 23:57:23.084966 ignition[921]: Ignition 2.19.0 Apr 24 23:57:23.084978 ignition[921]: Stage: disks Apr 24 23:57:23.088438 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 24 23:57:23.085195 ignition[921]: no configs at "/usr/lib/ignition/base.d" Apr 24 23:57:23.094127 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 24 23:57:23.085209 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:57:23.086074 ignition[921]: disks: disks passed Apr 24 23:57:23.086117 ignition[921]: Ignition finished successfully Apr 24 23:57:23.112513 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 24 23:57:23.116186 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 24 23:57:23.126090 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 23:57:23.129360 systemd[1]: Reached target basic.target - Basic System. Apr 24 23:57:23.144819 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Apr 24 23:57:23.165706 systemd-networkd[901]: eth0: Gained IPv6LL Apr 24 23:57:23.202566 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 24 23:57:23.208242 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 24 23:57:23.217635 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 24 23:57:23.309432 kernel: EXT4-fs (sda9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none. Apr 24 23:57:23.309687 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 24 23:57:23.315893 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 24 23:57:23.353498 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 24 23:57:23.369422 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940) Apr 24 23:57:23.379441 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:57:23.379478 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:57:23.379494 kernel: BTRFS info (device sda6): using free space tree Apr 24 23:57:23.381667 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 24 23:57:23.388219 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 24 23:57:23.391992 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 24 23:57:23.409352 kernel: BTRFS info (device sda6): auto enabling async discard Apr 24 23:57:23.392029 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 24 23:57:23.397036 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 24 23:57:23.420830 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 24 23:57:23.428269 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 24 23:57:23.983593 coreos-metadata[955]: Apr 24 23:57:23.983 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 24 23:57:23.990444 coreos-metadata[955]: Apr 24 23:57:23.990 INFO Fetch successful Apr 24 23:57:23.993993 coreos-metadata[955]: Apr 24 23:57:23.990 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 24 23:57:24.003233 coreos-metadata[955]: Apr 24 23:57:24.003 INFO Fetch successful Apr 24 23:57:24.030977 coreos-metadata[955]: Apr 24 23:57:24.030 INFO wrote hostname ci-4081.3.6-n-3087b9d021 to /sysroot/etc/hostname Apr 24 23:57:24.036850 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 24 23:57:24.097627 initrd-setup-root[970]: cut: /sysroot/etc/passwd: No such file or directory Apr 24 23:57:24.133780 initrd-setup-root[977]: cut: /sysroot/etc/group: No such file or directory Apr 24 23:57:24.154423 initrd-setup-root[984]: cut: /sysroot/etc/shadow: No such file or directory Apr 24 23:57:24.159854 initrd-setup-root[991]: cut: /sysroot/etc/gshadow: No such file or directory Apr 24 23:57:24.940310 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 24 23:57:24.954762 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 24 23:57:24.960551 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 24 23:57:24.981043 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 24 23:57:24.985265 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:57:25.006837 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 24 23:57:25.018992 ignition[1064]: INFO : Ignition 2.19.0 Apr 24 23:57:25.018992 ignition[1064]: INFO : Stage: mount Apr 24 23:57:25.023795 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:57:25.023795 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:57:25.023795 ignition[1064]: INFO : mount: mount passed Apr 24 23:57:25.023795 ignition[1064]: INFO : Ignition finished successfully Apr 24 23:57:25.035558 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 24 23:57:25.045538 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 24 23:57:25.064603 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 24 23:57:25.086421 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1074) Apr 24 23:57:25.091420 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:57:25.091471 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:57:25.096959 kernel: BTRFS info (device sda6): using free space tree Apr 24 23:57:25.104430 kernel: BTRFS info (device sda6): auto enabling async discard Apr 24 23:57:25.106291 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 24 23:57:25.132807 ignition[1090]: INFO : Ignition 2.19.0 Apr 24 23:57:25.132807 ignition[1090]: INFO : Stage: files Apr 24 23:57:25.138032 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:57:25.138032 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:57:25.138032 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping Apr 24 23:57:25.148927 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 24 23:57:25.148927 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 24 23:57:25.238275 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 24 23:57:25.242802 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 24 23:57:25.242802 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 24 23:57:25.238747 unknown[1090]: wrote ssh authorized keys file for user: core Apr 24 23:57:25.293397 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 24 23:57:25.299628 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 24 23:57:25.326881 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 24 23:57:25.373227 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Apr 24 23:57:25.750252 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 24 23:57:27.064069 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 24 23:57:27.064069 ignition[1090]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 24 23:57:27.092415 ignition[1090]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 24 23:57:27.099106 ignition[1090]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 24 23:57:27.099106 ignition[1090]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 24 23:57:27.108973 ignition[1090]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 24 23:57:27.108973 ignition[1090]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 24 23:57:27.108973 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 24 23:57:27.108973 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 24 23:57:27.108973 ignition[1090]: INFO : files: files passed Apr 24 23:57:27.108973 ignition[1090]: INFO : Ignition finished successfully Apr 24 23:57:27.105749 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 24 23:57:27.132629 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 24 23:57:27.144583 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Apr 24 23:57:27.155251 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 24 23:57:27.155362 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 24 23:57:27.172347 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 24 23:57:27.177436 initrd-setup-root-after-ignition[1119]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 24 23:57:27.182242 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 24 23:57:27.192502 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 24 23:57:27.200729 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 24 23:57:27.213835 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 24 23:57:27.238746 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 24 23:57:27.238861 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 24 23:57:27.246091 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 24 23:57:27.255951 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 24 23:57:27.259053 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 24 23:57:27.270615 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 24 23:57:27.284300 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 24 23:57:27.298755 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 24 23:57:27.310746 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:57:27.314744 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 24 23:57:27.325874 systemd[1]: Stopped target timers.target - Timer Units. Apr 24 23:57:27.328826 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 24 23:57:27.328968 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 24 23:57:27.335313 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 24 23:57:27.338743 systemd[1]: Stopped target basic.target - Basic System. Apr 24 23:57:27.344567 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 24 23:57:27.348075 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 24 23:57:27.355316 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 24 23:57:27.359101 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 24 23:57:27.379577 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 24 23:57:27.386983 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 24 23:57:27.390373 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 24 23:57:27.393714 systemd[1]: Stopped target swap.target - Swaps. Apr 24 23:57:27.399383 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 24 23:57:27.399578 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 24 23:57:27.415015 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:57:27.418682 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 23:57:27.418793 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 24 23:57:27.429041 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 24 23:57:27.439735 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 24 23:57:27.439914 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Apr 24 23:57:27.446117 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 24 23:57:27.446271 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 24 23:57:27.460551 systemd[1]: ignition-files.service: Deactivated successfully. Apr 24 23:57:27.460686 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 24 23:57:27.469694 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 24 23:57:27.469868 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 24 23:57:27.489179 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 24 23:57:27.496085 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 24 23:57:27.500521 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 24 23:57:27.500664 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 23:57:27.510496 ignition[1143]: INFO : Ignition 2.19.0 Apr 24 23:57:27.510496 ignition[1143]: INFO : Stage: umount Apr 24 23:57:27.510496 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:57:27.510496 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:57:27.533871 ignition[1143]: INFO : umount: umount passed Apr 24 23:57:27.533871 ignition[1143]: INFO : Ignition finished successfully Apr 24 23:57:27.523622 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 24 23:57:27.523766 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 24 23:57:27.539321 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 24 23:57:27.539462 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 24 23:57:27.556187 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 24 23:57:27.556292 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Apr 24 23:57:27.557783 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 24 23:57:27.557878 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 24 23:57:27.559050 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 24 23:57:27.559090 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 24 23:57:27.559589 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 24 23:57:27.559622 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 24 23:57:27.560266 systemd[1]: Stopped target network.target - Network. Apr 24 23:57:27.560749 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 24 23:57:27.560787 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 24 23:57:27.561287 systemd[1]: Stopped target paths.target - Path Units. Apr 24 23:57:27.561747 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 24 23:57:27.578243 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 24 23:57:27.578339 systemd[1]: Stopped target slices.target - Slice Units. Apr 24 23:57:27.578894 systemd[1]: Stopped target sockets.target - Socket Units. Apr 24 23:57:27.579432 systemd[1]: iscsid.socket: Deactivated successfully. Apr 24 23:57:27.579500 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 24 23:57:27.579941 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 24 23:57:27.579988 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 24 23:57:27.580472 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 24 23:57:27.580521 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 24 23:57:27.580972 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 24 23:57:27.581009 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Apr 24 23:57:27.582308 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 24 23:57:27.582656 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 24 23:57:27.584378 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 24 23:57:27.617198 systemd-networkd[901]: eth0: DHCPv6 lease lost Apr 24 23:57:27.618809 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 24 23:57:27.618930 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 24 23:57:27.623175 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 24 23:57:27.623276 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 24 23:57:27.644008 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 24 23:57:27.650226 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 24 23:57:27.650298 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 24 23:57:27.724143 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:57:27.727998 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 24 23:57:27.728097 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 24 23:57:27.732546 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 24 23:57:27.732705 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 24 23:57:27.756922 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 24 23:57:27.757034 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 24 23:57:27.763700 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 24 23:57:27.763751 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:57:27.763850 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Apr 24 23:57:27.763885 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 24 23:57:27.764339 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 24 23:57:27.764372 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 23:57:27.774672 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 24 23:57:27.774806 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:57:27.781076 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 24 23:57:27.781146 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 24 23:57:27.786994 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 24 23:57:27.787025 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 24 23:57:27.790605 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 24 23:57:27.790655 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 24 23:57:27.796976 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 24 23:57:27.797021 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 24 23:57:27.843436 kernel: hv_netvsc 7c1e5236-befc-7c1e-5236-befc7c1e5236 eth0: Data path switched from VF: enP32357s1 Apr 24 23:57:27.844916 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 24 23:57:27.844987 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 23:57:27.861644 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 24 23:57:27.865329 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 24 23:57:27.868732 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:57:27.875855 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Apr 24 23:57:27.875904 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 23:57:27.886343 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 24 23:57:27.886398 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 23:57:27.901456 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:57:27.901516 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:57:27.908383 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 24 23:57:27.908494 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 24 23:57:27.911737 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 24 23:57:27.911815 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 24 23:57:27.916006 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 24 23:57:27.935361 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 24 23:57:27.947601 systemd[1]: Switching root. 
Apr 24 23:57:28.039109 systemd-journald[177]: Journal stopped Apr 24 23:57:17.117997 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 24 22:11:38 -00 2026 Apr 24 23:57:17.118023 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 24 23:57:17.118038 kernel: BIOS-provided physical RAM map: Apr 24 23:57:17.118044 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 24 23:57:17.118050 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Apr 24 23:57:17.118055 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable Apr 24 23:57:17.118066 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved Apr 24 23:57:17.118074 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable Apr 24 23:57:17.118082 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20 Apr 24 23:57:17.118090 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved Apr 24 23:57:17.118099 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Apr 24 23:57:17.118105 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Apr 24 23:57:17.118111 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Apr 24 23:57:17.118118 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Apr 24 23:57:17.118132 kernel: printk: bootconsole [earlyser0] enabled Apr 24 23:57:17.118139 kernel: NX (Execute Disable) protection: active
Apr 24 23:57:17.118146 kernel: APIC: Static calls initialized Apr 24 23:57:17.118157 kernel: efi: EFI v2.7 by Microsoft Apr 24 23:57:17.118165 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f420518 Apr 24 23:57:17.118172 kernel: SMBIOS 3.1.0 present. Apr 24 23:57:17.118181 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/08/2026 Apr 24 23:57:17.118190 kernel: Hypervisor detected: Microsoft Hyper-V Apr 24 23:57:17.118197 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Apr 24 23:57:17.118204 kernel: Hyper-V: Host Build 10.0.26102.1277-1-0 Apr 24 23:57:17.118216 kernel: Hyper-V: Nested features: 0x1e0101 Apr 24 23:57:17.118225 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Apr 24 23:57:17.118231 kernel: Hyper-V: Using hypercall for remote TLB flush Apr 24 23:57:17.118241 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Apr 24 23:57:17.118250 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Apr 24 23:57:17.118258 kernel: tsc: Marking TSC unstable due to running on Hyper-V Apr 24 23:57:17.118270 kernel: tsc: Detected 2593.907 MHz processor Apr 24 23:57:17.118277 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 24 23:57:17.118289 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 24 23:57:17.118296 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Apr 24 23:57:17.118309 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 24 23:57:17.118316 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 24 23:57:17.118323 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Apr 24 23:57:17.118330 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Apr 24 23:57:17.118338 kernel: Using GB pages for direct mapping Apr 24 23:57:17.118349 kernel: Secure boot disabled Apr 24 23:57:17.118359 kernel: ACPI: Early table checksum verification disabled Apr 24 23:57:17.118374 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Apr 24 23:57:17.118382 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:57:17.118389 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:57:17.118402 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Apr 24 23:57:17.118409 kernel: ACPI: FACS 0x000000003FFFE000 000040 Apr 24 23:57:17.118417 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:57:17.118429 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:57:17.118439 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:57:17.118448 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:57:17.118458 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:57:17.118465 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 24 23:57:17.118473 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Apr 24 23:57:17.118485 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Apr 24 23:57:17.118492 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Apr 24 23:57:17.118501 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Apr 24 23:57:17.118511 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Apr 24 23:57:17.118521 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Apr 24 23:57:17.118531 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Apr 24 23:57:17.118540 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df] Apr 24 23:57:17.118547 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Apr 24 23:57:17.118557 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 24 23:57:17.118566 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 24 23:57:17.118582 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Apr 24 23:57:17.118593 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Apr 24 23:57:17.118600 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Apr 24 23:57:17.118613 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Apr 24 23:57:17.118621 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Apr 24 23:57:17.118629 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Apr 24 23:57:17.118639 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Apr 24 23:57:17.118648 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Apr 24 23:57:17.118655 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Apr 24 23:57:17.118666 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Apr 24 23:57:17.118675 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Apr 24 23:57:17.118684 kernel: NODE_DATA(0) allocated [mem 0x2bfff8000-0x2bfffdfff] Apr 24 23:57:17.118695 kernel: Zone ranges: Apr 24 23:57:17.118704 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 24 23:57:17.118711 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 24 23:57:17.118722 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Apr 24 23:57:17.118731 kernel: Movable zone start for each node Apr 24 23:57:17.118738 kernel: Early memory node ranges
Apr 24 23:57:17.118749 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 24 23:57:17.118758 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff] Apr 24 23:57:17.118772 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff] Apr 24 23:57:17.118779 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Apr 24 23:57:17.118791 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Apr 24 23:57:17.118801 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Apr 24 23:57:17.118811 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 24 23:57:17.118818 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 24 23:57:17.118826 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Apr 24 23:57:17.118834 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Apr 24 23:57:17.118845 kernel: ACPI: PM-Timer IO Port: 0x408 Apr 24 23:57:17.118855 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Apr 24 23:57:17.118865 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Apr 24 23:57:17.118874 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 24 23:57:17.118882 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 24 23:57:17.118891 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Apr 24 23:57:17.118901 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 24 23:57:17.118909 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Apr 24 23:57:17.118916 kernel: Booting paravirtualized kernel on Hyper-V Apr 24 23:57:17.118924 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 24 23:57:17.118933 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 24 23:57:17.118945 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 24 23:57:17.118953 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 24 23:57:17.118960 kernel: pcpu-alloc: [0] 0 1
Apr 24 23:57:17.118972 kernel: Hyper-V: PV spinlocks enabled Apr 24 23:57:17.118980 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 24 23:57:17.118989 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 24 23:57:17.119001 kernel: random: crng init done Apr 24 23:57:17.119013 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 24 23:57:17.119025 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 24 23:57:17.119033 kernel: Fallback order for Node 0: 0 Apr 24 23:57:17.119045 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321 Apr 24 23:57:17.119056 kernel: Policy zone: Normal Apr 24 23:57:17.119063 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 24 23:57:17.119070 kernel: software IO TLB: area num 2. Apr 24 23:57:17.119083 kernel: Memory: 8061204K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 321764K reserved, 0K cma-reserved) Apr 24 23:57:17.119091 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 24 23:57:17.119111 kernel: ftrace: allocating 37996 entries in 149 pages Apr 24 23:57:17.119123 kernel: ftrace: allocated 149 pages with 4 groups Apr 24 23:57:17.119131 kernel: Dynamic Preempt: voluntary Apr 24 23:57:17.119146 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 24 23:57:17.119155 kernel: rcu: RCU event tracing is enabled. Apr 24 23:57:17.119163 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 24 23:57:17.119176 kernel: Trampoline variant of Tasks RCU enabled. Apr 24 23:57:17.119184 kernel: Rude variant of Tasks RCU enabled. Apr 24 23:57:17.119192 kernel: Tracing variant of Tasks RCU enabled. Apr 24 23:57:17.119207 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 24 23:57:17.119215 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 24 23:57:17.119227 kernel: Using NULL legacy PIC Apr 24 23:57:17.119235 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Apr 24 23:57:17.119248 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 24 23:57:17.119256 kernel: Console: colour dummy device 80x25 Apr 24 23:57:17.119269 kernel: printk: console [tty1] enabled Apr 24 23:57:17.119278 kernel: printk: console [ttyS0] enabled Apr 24 23:57:17.119292 kernel: printk: bootconsole [earlyser0] disabled Apr 24 23:57:17.119305 kernel: ACPI: Core revision 20230628 Apr 24 23:57:17.119318 kernel: Failed to register legacy timer interrupt Apr 24 23:57:17.119332 kernel: APIC: Switch to symmetric I/O mode setup Apr 24 23:57:17.119345 kernel: Hyper-V: enabling crash_kexec_post_notifiers Apr 24 23:57:17.119360 kernel: Hyper-V: Using IPI hypercalls Apr 24 23:57:17.119374 kernel: APIC: send_IPI() replaced with hv_send_ipi() Apr 24 23:57:17.119388 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Apr 24 23:57:17.119403 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Apr 24 23:57:17.119423 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Apr 24 23:57:17.119438 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Apr 24 23:57:17.119454 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Apr 24 23:57:17.119470 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Apr 24 23:57:17.119486 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 24 23:57:17.119502 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 24 23:57:17.119517 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 24 23:57:17.119532 kernel: Spectre V2 : Mitigation: Retpolines Apr 24 23:57:17.119547 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 24 23:57:17.119563 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 24 23:57:17.119600 kernel: RETBleed: Vulnerable Apr 24 23:57:17.119616 kernel: Speculative Store Bypass: Vulnerable Apr 24 23:57:17.119632 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Apr 24 23:57:17.119648 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 24 23:57:17.119663 kernel: active return thunk: its_return_thunk Apr 24 23:57:17.119679 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 24 23:57:17.119692 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 24 23:57:17.119705 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 24 23:57:17.119720 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 24 23:57:17.119733 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 24 23:57:17.119751 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 24 23:57:17.119764 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 24 23:57:17.119776 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 24 23:57:17.119789 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 24 23:57:17.119803 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 24 23:57:17.119818 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 24 23:57:17.119833 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 24 23:57:17.119847 kernel: Freeing SMP alternatives memory: 32K Apr 24 23:57:17.119862 kernel: pid_max: default: 32768 minimum: 301 Apr 24 23:57:17.119877 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 24 23:57:17.119892 kernel: landlock: Up and running. Apr 24 23:57:17.119907 kernel: SELinux: Initializing. Apr 24 23:57:17.119924 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 24 23:57:17.119940 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 24 23:57:17.119955 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Apr 24 23:57:17.119970 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:57:17.119985 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:57:17.119999 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:57:17.120013 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Apr 24 23:57:17.120028 kernel: signal: max sigframe size: 3632 Apr 24 23:57:17.120042 kernel: rcu: Hierarchical SRCU implementation. Apr 24 23:57:17.120059 kernel: rcu: Max phase no-delay instances is 400. Apr 24 23:57:17.120074 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 24 23:57:17.120089 kernel: smp: Bringing up secondary CPUs ... Apr 24 23:57:17.120105 kernel: smpboot: x86: Booting SMP configuration: Apr 24 23:57:17.120120 kernel: .... node #0, CPUs: #1 Apr 24 23:57:17.120135 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Apr 24 23:57:17.120151 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 24 23:57:17.120167 kernel: smp: Brought up 1 node, 2 CPUs Apr 24 23:57:17.120182 kernel: smpboot: Max logical packages: 1 Apr 24 23:57:17.120199 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Apr 24 23:57:17.120213 kernel: devtmpfs: initialized Apr 24 23:57:17.120227 kernel: x86/mm: Memory block size: 128MB Apr 24 23:57:17.120240 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Apr 24 23:57:17.120253 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 24 23:57:17.120268 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 24 23:57:17.120282 kernel: pinctrl core: initialized pinctrl subsystem Apr 24 23:57:17.120295 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 24 23:57:17.120308 kernel: audit: initializing netlink subsys (disabled) Apr 24 23:57:17.120325 kernel: audit: type=2000 audit(1777075035.029:1): state=initialized audit_enabled=0 res=1 Apr 24 23:57:17.120339 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 24 23:57:17.120365 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 24 23:57:17.120378 kernel: cpuidle: using governor menu Apr 24 23:57:17.120392 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 24 23:57:17.120407 kernel: dca service started, version 1.12.1 Apr 24 23:57:17.120422 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff] Apr 24 23:57:17.120448 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Apr 24 23:57:17.120463 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 24 23:57:17.120484 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:57:17.120498 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:57:17.120511 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:57:17.120525 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:57:17.120545 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:57:17.120558 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:57:17.120571 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:57:17.120621 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 23:57:17.120638 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 24 23:57:17.120651 kernel: ACPI: Interpreter enabled
Apr 24 23:57:17.120665 kernel: ACPI: PM: (supports S0 S5)
Apr 24 23:57:17.120677 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 23:57:17.120692 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 23:57:17.120706 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 24 23:57:17.120720 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Apr 24 23:57:17.120734 kernel: iommu: Default domain type: Translated
Apr 24 23:57:17.120748 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 23:57:17.120761 kernel: efivars: Registered efivars operations
Apr 24 23:57:17.120776 kernel: PCI: Using ACPI for IRQ routing
Apr 24 23:57:17.120789 kernel: PCI: System does not support PCI
Apr 24 23:57:17.120802 kernel: vgaarb: loaded
Apr 24 23:57:17.120816 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Apr 24 23:57:17.120830 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:57:17.120843 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:57:17.120858 kernel: pnp: PnP ACPI init
Apr 24 23:57:17.120871 kernel: pnp: PnP ACPI: found 3 devices
Apr 24 23:57:17.120883 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 23:57:17.120898 kernel: NET: Registered PF_INET protocol family
Apr 24 23:57:17.120911 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 24 23:57:17.120923 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 24 23:57:17.120937 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:57:17.120951 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 23:57:17.120964 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 24 23:57:17.120977 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 24 23:57:17.120990 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 24 23:57:17.121004 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 24 23:57:17.121019 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 23:57:17.121032 kernel: NET: Registered PF_XDP protocol family
Apr 24 23:57:17.121046 kernel: PCI: CLS 0 bytes, default 64
Apr 24 23:57:17.121060 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 24 23:57:17.121074 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB)
Apr 24 23:57:17.121088 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 24 23:57:17.121102 kernel: Initialise system trusted keyrings
Apr 24 23:57:17.121116 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 24 23:57:17.121134 kernel: Key type asymmetric registered
Apr 24 23:57:17.121148 kernel: Asymmetric key parser 'x509' registered
Apr 24 23:57:17.121161 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 24 23:57:17.121176 kernel: io scheduler mq-deadline registered
Apr 24 23:57:17.121190 kernel: io scheduler kyber registered
Apr 24 23:57:17.121203 kernel: io scheduler bfq registered
Apr 24 23:57:17.121217 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 23:57:17.121231 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 23:57:17.121245 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 23:57:17.121259 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 24 23:57:17.121277 kernel: i8042: PNP: No PS/2 controller found.
Apr 24 23:57:17.121457 kernel: rtc_cmos 00:02: registered as rtc0
Apr 24 23:57:17.121605 kernel: rtc_cmos 00:02: setting system clock to 2026-04-24T23:57:16 UTC (1777075036)
Apr 24 23:57:17.121729 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Apr 24 23:57:17.121747 kernel: intel_pstate: CPU model not supported
Apr 24 23:57:17.121762 kernel: efifb: probing for efifb
Apr 24 23:57:17.121776 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 24 23:57:17.121795 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 24 23:57:17.121809 kernel: efifb: scrolling: redraw
Apr 24 23:57:17.121823 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 24 23:57:17.121837 kernel: Console: switching to colour frame buffer device 128x48
Apr 24 23:57:17.121851 kernel: fb0: EFI VGA frame buffer device
Apr 24 23:57:17.121865 kernel: pstore: Using crash dump compression: deflate
Apr 24 23:57:17.121879 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 24 23:57:17.121893 kernel: NET: Registered PF_INET6 protocol family
Apr 24 23:57:17.121907 kernel: Segment Routing with IPv6
Apr 24 23:57:17.121925 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 23:57:17.121938 kernel: NET: Registered PF_PACKET protocol family
Apr 24 23:57:17.121953 kernel: Key type dns_resolver registered
Apr 24 23:57:17.121966 kernel: IPI shorthand broadcast: enabled
Apr 24 23:57:17.121980 kernel: sched_clock: Marking stable (928003400, 56799000)->(1248263800, -263461400)
Apr 24 23:57:17.121995 kernel: registered taskstats version 1
Apr 24 23:57:17.122009 kernel: Loading compiled-in X.509 certificates
Apr 24 23:57:17.122023 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124'
Apr 24 23:57:17.122037 kernel: Key type .fscrypt registered
Apr 24 23:57:17.122054 kernel: Key type fscrypt-provisioning registered
Apr 24 23:57:17.122067 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 23:57:17.122081 kernel: ima: Allocated hash algorithm: sha1
Apr 24 23:57:17.122095 kernel: ima: No architecture policies found
Apr 24 23:57:17.122108 kernel: clk: Disabling unused clocks
Apr 24 23:57:17.122123 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 24 23:57:17.122137 kernel: Write protecting the kernel read-only data: 36864k
Apr 24 23:57:17.122151 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 24 23:57:17.122164 kernel: Run /init as init process
Apr 24 23:57:17.122180 kernel: with arguments:
Apr 24 23:57:17.122194 kernel: /init
Apr 24 23:57:17.122208 kernel: with environment:
Apr 24 23:57:17.122220 kernel: HOME=/
Apr 24 23:57:17.122232 kernel: TERM=linux
Apr 24 23:57:17.122249 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:57:17.122265 systemd[1]: Detected virtualization microsoft.
Apr 24 23:57:17.122279 systemd[1]: Detected architecture x86-64.
Apr 24 23:57:17.122296 systemd[1]: Running in initrd.
Apr 24 23:57:17.122310 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:57:17.122323 systemd[1]: Hostname set to .
Apr 24 23:57:17.122337 systemd[1]: Initializing machine ID from random generator.
Apr 24 23:57:17.122352 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:57:17.122365 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:57:17.122379 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:57:17.122394 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:57:17.122412 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:57:17.122427 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:57:17.122444 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:57:17.122461 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:57:17.122476 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:57:17.122491 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:57:17.122506 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:57:17.122524 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:57:17.122540 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:57:17.122556 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:57:17.122570 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:57:17.122602 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:57:17.122617 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:57:17.122632 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:57:17.122663 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:57:17.122678 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:57:17.122695 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:57:17.122710 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:57:17.122725 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:57:17.122738 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:57:17.122753 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:57:17.122767 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:57:17.122781 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:57:17.122796 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:57:17.122813 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:57:17.122855 systemd-journald[177]: Collecting audit messages is disabled.
Apr 24 23:57:17.122887 systemd-journald[177]: Journal started
Apr 24 23:57:17.122923 systemd-journald[177]: Runtime Journal (/run/log/journal/7bf8448f6a4d439fb3dc90748c4e7eae) is 8.0M, max 158.7M, 150.7M free.
Apr 24 23:57:17.129707 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:57:17.156142 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:57:17.152192 systemd-modules-load[178]: Inserted module 'overlay'
Apr 24 23:57:17.152880 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:57:17.159002 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:57:17.169441 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:57:17.175029 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:57:17.191782 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:57:17.208596 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:57:17.209743 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:57:17.222280 kernel: Bridge firewalling registered
Apr 24 23:57:17.221693 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:57:17.224539 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:57:17.228747 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:57:17.236586 systemd-modules-load[178]: Inserted module 'br_netfilter'
Apr 24 23:57:17.236773 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:57:17.253227 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:57:17.257804 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:57:17.272558 dracut-cmdline[203]: dracut-dracut-053
Apr 24 23:57:17.277177 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:57:17.294498 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:57:17.301323 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:57:17.305610 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:57:17.324250 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:57:17.340040 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:57:17.379188 systemd-resolved[247]: Positive Trust Anchors:
Apr 24 23:57:17.379202 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:57:17.379251 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:57:17.410234 systemd-resolved[247]: Defaulting to hostname 'linux'.
Apr 24 23:57:17.420640 kernel: SCSI subsystem initialized
Apr 24 23:57:17.411419 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:57:17.419695 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:57:17.435592 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:57:17.446594 kernel: iscsi: registered transport (tcp)
Apr 24 23:57:17.467477 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:57:17.467540 kernel: QLogic iSCSI HBA Driver
Apr 24 23:57:17.503136 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:57:17.520749 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:57:17.549977 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:57:17.550041 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:57:17.554004 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:57:17.593594 kernel: raid6: avx512x4 gen() 18187 MB/s
Apr 24 23:57:17.613591 kernel: raid6: avx512x2 gen() 18354 MB/s
Apr 24 23:57:17.632584 kernel: raid6: avx512x1 gen() 18166 MB/s
Apr 24 23:57:17.651586 kernel: raid6: avx2x4 gen() 18212 MB/s
Apr 24 23:57:17.671590 kernel: raid6: avx2x2 gen() 18096 MB/s
Apr 24 23:57:17.693816 kernel: raid6: avx2x1 gen() 13630 MB/s
Apr 24 23:57:17.693843 kernel: raid6: using algorithm avx512x2 gen() 18354 MB/s
Apr 24 23:57:17.715324 kernel: raid6: .... xor() 31680 MB/s, rmw enabled
Apr 24 23:57:17.715354 kernel: raid6: using avx512x2 recovery algorithm
Apr 24 23:57:17.737595 kernel: xor: automatically using best checksumming function avx
Apr 24 23:57:17.885608 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:57:17.895362 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:57:17.906756 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:57:17.922106 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Apr 24 23:57:17.926744 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:57:17.937790 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:57:17.952746 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
Apr 24 23:57:17.977993 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:57:17.988695 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:57:18.030719 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:57:18.044725 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 23:57:18.061307 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:57:18.072510 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:57:18.081011 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:57:18.085027 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:57:18.103699 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:57:18.119689 kernel: cryptd: max_cpu_qlen set to 1000
Apr 24 23:57:18.129306 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:57:18.152591 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 24 23:57:18.152650 kernel: AES CTR mode by8 optimization enabled
Apr 24 23:57:18.165590 kernel: hv_vmbus: Vmbus version:5.2
Apr 24 23:57:18.169247 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:57:18.172890 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:57:18.186713 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 24 23:57:18.177268 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:57:18.186733 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:57:18.195334 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:57:18.199474 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:57:18.218596 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 24 23:57:18.222695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:57:18.235209 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 24 23:57:18.236265 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:57:18.236465 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:57:18.252770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:57:18.265589 kernel: hv_vmbus: registering driver hv_netvsc
Apr 24 23:57:18.271763 kernel: hv_vmbus: registering driver hv_storvsc
Apr 24 23:57:18.280473 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 24 23:57:18.280504 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 24 23:57:18.280520 kernel: scsi host1: storvsc_host_t
Apr 24 23:57:18.287606 kernel: scsi host0: storvsc_host_t
Apr 24 23:57:18.294850 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 24 23:57:18.301641 kernel: PTP clock support registered
Apr 24 23:57:18.301675 kernel: hv_vmbus: registering driver hid_hyperv
Apr 24 23:57:18.301628 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:57:19.414444 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Apr 24 23:57:19.414669 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 24 23:57:19.414691 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 24 23:57:19.414870 kernel: hv_utils: Registering HyperV Utility Driver
Apr 24 23:57:19.414893 kernel: hv_vmbus: registering driver hv_utils
Apr 24 23:57:19.414917 kernel: hv_utils: Heartbeat IC version 3.0
Apr 24 23:57:19.414935 kernel: hv_utils: Shutdown IC version 3.2
Apr 24 23:57:19.414952 kernel: hv_utils: TimeSync IC version 4.0
Apr 24 23:57:19.414172 systemd-resolved[247]: Clock change detected. Flushing caches.
Apr 24 23:57:19.419868 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:57:19.443580 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:57:19.457635 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 24 23:57:19.457953 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 24 23:57:19.459439 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 24 23:57:19.471637 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 24 23:57:19.471947 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 24 23:57:19.474166 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 24 23:57:19.478662 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 24 23:57:19.478929 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 24 23:57:19.490935 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:57:19.490983 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#211 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 24 23:57:19.494715 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 24 23:57:19.525343 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#216 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 24 23:57:19.525572 kernel: hv_netvsc 7c1e5236-befc-7c1e-5236-befc7c1e5236 eth0: VF slot 1 added
Apr 24 23:57:19.532420 kernel: hv_vmbus: registering driver hv_pci
Apr 24 23:57:19.537435 kernel: hv_pci 91750f0a-7e65-46f5-a62f-1f643ef0feb9: PCI VMBus probing: Using version 0x10004
Apr 24 23:57:19.547563 kernel: hv_pci 91750f0a-7e65-46f5-a62f-1f643ef0feb9: PCI host bridge to bus 7e65:00
Apr 24 23:57:19.547820 kernel: pci_bus 7e65:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Apr 24 23:57:19.548420 kernel: pci_bus 7e65:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 24 23:57:19.556694 kernel: pci 7e65:00:02.0: [15b3:1016] type 00 class 0x020000
Apr 24 23:57:19.562658 kernel: pci 7e65:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 24 23:57:19.567471 kernel: pci 7e65:00:02.0: enabling Extended Tags
Apr 24 23:57:19.580688 kernel: pci 7e65:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7e65:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Apr 24 23:57:19.588089 kernel: pci_bus 7e65:00: busn_res: [bus 00-ff] end is updated to 00
Apr 24 23:57:19.588348 kernel: pci 7e65:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Apr 24 23:57:19.752297 kernel: mlx5_core 7e65:00:02.0: enabling device (0000 -> 0002)
Apr 24 23:57:19.757427 kernel: mlx5_core 7e65:00:02.0: firmware version: 14.30.5026
Apr 24 23:57:19.924376 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 24 23:57:19.971429 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (466)
Apr 24 23:57:19.986718 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 24 23:57:20.013688 kernel: hv_netvsc 7c1e5236-befc-7c1e-5236-befc7c1e5236 eth0: VF registering: eth1
Apr 24 23:57:20.013884 kernel: mlx5_core 7e65:00:02.0 eth1: joined to eth0
Apr 24 23:57:20.014031 kernel: mlx5_core 7e65:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Apr 24 23:57:20.003988 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 24 23:57:20.027419 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (456)
Apr 24 23:57:20.027472 kernel: mlx5_core 7e65:00:02.0 enP32357s1: renamed from eth1
Apr 24 23:57:20.029127 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 24 23:57:20.048627 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:57:20.074424 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:57:20.084430 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:57:20.092421 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:57:20.431771 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 24 23:57:21.098783 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:57:21.098858 disk-uuid[604]: The operation has completed successfully.
Apr 24 23:57:21.187983 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 23:57:21.188100 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 23:57:21.213633 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 23:57:21.223784 sh[717]: Success
Apr 24 23:57:21.252449 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 24 23:57:21.515967 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 23:57:21.532527 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 23:57:21.539824 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 23:57:21.564501 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681
Apr 24 23:57:21.564574 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:57:21.568770 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 24 23:57:21.571980 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 24 23:57:21.575017 kernel: BTRFS info (device dm-0): using free space tree
Apr 24 23:57:21.809286 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 23:57:21.812771 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 23:57:21.821975 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 23:57:21.829554 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 23:57:21.865522 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:57:21.865583 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:57:21.865603 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:57:21.908423 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:57:21.923421 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:57:21.923509 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 24 23:57:21.930067 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:57:21.938751 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 23:57:21.949585 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 23:57:21.957685 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:57:21.983898 systemd-networkd[901]: lo: Link UP
Apr 24 23:57:21.983910 systemd-networkd[901]: lo: Gained carrier
Apr 24 23:57:21.986190 systemd-networkd[901]: Enumeration completed
Apr 24 23:57:21.986519 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:57:21.987103 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:57:21.987106 systemd-networkd[901]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:57:21.990439 systemd[1]: Reached target network.target - Network.
Apr 24 23:57:22.064419 kernel: mlx5_core 7e65:00:02.0 enP32357s1: Link up
Apr 24 23:57:22.100581 kernel: hv_netvsc 7c1e5236-befc-7c1e-5236-befc7c1e5236 eth0: Data path switched to VF: enP32357s1
Apr 24 23:57:22.100776 systemd-networkd[901]: enP32357s1: Link UP
Apr 24 23:57:22.100906 systemd-networkd[901]: eth0: Link UP
Apr 24 23:57:22.101073 systemd-networkd[901]: eth0: Gained carrier
Apr 24 23:57:22.101083 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:57:22.106612 systemd-networkd[901]: enP32357s1: Gained carrier
Apr 24 23:57:22.144451 systemd-networkd[901]: eth0: DHCPv4 address 10.0.0.19/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 24 23:57:22.861686 ignition[900]: Ignition 2.19.0
Apr 24 23:57:22.861699 ignition[900]: Stage: fetch-offline
Apr 24 23:57:22.861744 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:57:22.861755 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:57:22.866891 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:57:22.861872 ignition[900]: parsed url from cmdline: ""
Apr 24 23:57:22.884525 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 24 23:57:22.861877 ignition[900]: no config URL provided
Apr 24 23:57:22.861884 ignition[900]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:57:22.861895 ignition[900]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:57:22.861901 ignition[900]: failed to fetch config: resource requires networking
Apr 24 23:57:22.864140 ignition[900]: Ignition finished successfully
Apr 24 23:57:22.911235 ignition[909]: Ignition 2.19.0
Apr 24 23:57:22.911247 ignition[909]: Stage: fetch
Apr 24 23:57:22.911459 ignition[909]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:57:22.911472 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:57:22.913226 ignition[909]: parsed url from cmdline: ""
Apr 24 23:57:22.913243 ignition[909]: no config URL provided
Apr 24 23:57:22.913251 ignition[909]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:57:22.913264 ignition[909]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:57:22.913292 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 24 23:57:23.012414 ignition[909]: GET result: OK
Apr 24 23:57:23.012532 ignition[909]: config has been read from IMDS userdata
Apr 24 23:57:23.012564 ignition[909]: parsing config with SHA512: f892170ce766e7d26c6849d5bce3d3881810388644924f8fd056b23df27a8bf085121407a1dddd4eaa7756d63425821288b5f60b859cd4620aec571b908aee79
Apr 24 23:57:23.016987 unknown[909]: fetched base config from "system"
Apr 24 23:57:23.017503 ignition[909]: fetch: fetch complete
Apr 24 23:57:23.016994 unknown[909]: fetched base config from "system"
Apr 24 23:57:23.017510 ignition[909]: fetch: fetch passed
Apr 24 23:57:23.017002 unknown[909]: fetched user config from "azure"
Apr 24 23:57:23.017574 ignition[909]: Ignition finished successfully
Apr 24 23:57:23.019279 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 24 23:57:23.033750 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:57:23.048002 ignition[915]: Ignition 2.19.0
Apr 24 23:57:23.048007 ignition[915]: Stage: kargs
Apr 24 23:57:23.051388 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:57:23.048274 ignition[915]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:57:23.048288 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:57:23.049887 ignition[915]: kargs: kargs passed
Apr 24 23:57:23.049938 ignition[915]: Ignition finished successfully
Apr 24 23:57:23.065581 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 23:57:23.084966 ignition[921]: Ignition 2.19.0
Apr 24 23:57:23.084978 ignition[921]: Stage: disks
Apr 24 23:57:23.088438 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:57:23.085195 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:57:23.094127 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:57:23.085209 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 24 23:57:23.086074 ignition[921]: disks: disks passed
Apr 24 23:57:23.086117 ignition[921]: Ignition finished successfully
Apr 24 23:57:23.112513 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:57:23.116186 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:57:23.126090 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:57:23.129360 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:57:23.144819 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:57:23.165706 systemd-networkd[901]: eth0: Gained IPv6LL Apr 24 23:57:23.202566 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 24 23:57:23.208242 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 24 23:57:23.217635 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 24 23:57:23.309432 kernel: EXT4-fs (sda9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none. Apr 24 23:57:23.309687 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 24 23:57:23.315893 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 24 23:57:23.353498 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 24 23:57:23.369422 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940) Apr 24 23:57:23.379441 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:57:23.379478 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:57:23.379494 kernel: BTRFS info (device sda6): using free space tree Apr 24 23:57:23.381667 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 24 23:57:23.388219 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 24 23:57:23.391992 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 24 23:57:23.409352 kernel: BTRFS info (device sda6): auto enabling async discard Apr 24 23:57:23.392029 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 24 23:57:23.397036 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 24 23:57:23.420830 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 24 23:57:23.428269 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 24 23:57:23.983593 coreos-metadata[955]: Apr 24 23:57:23.983 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 24 23:57:23.990444 coreos-metadata[955]: Apr 24 23:57:23.990 INFO Fetch successful Apr 24 23:57:23.993993 coreos-metadata[955]: Apr 24 23:57:23.990 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 24 23:57:24.003233 coreos-metadata[955]: Apr 24 23:57:24.003 INFO Fetch successful Apr 24 23:57:24.030977 coreos-metadata[955]: Apr 24 23:57:24.030 INFO wrote hostname ci-4081.3.6-n-3087b9d021 to /sysroot/etc/hostname Apr 24 23:57:24.036850 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 24 23:57:24.097627 initrd-setup-root[970]: cut: /sysroot/etc/passwd: No such file or directory Apr 24 23:57:24.133780 initrd-setup-root[977]: cut: /sysroot/etc/group: No such file or directory Apr 24 23:57:24.154423 initrd-setup-root[984]: cut: /sysroot/etc/shadow: No such file or directory Apr 24 23:57:24.159854 initrd-setup-root[991]: cut: /sysroot/etc/gshadow: No such file or directory Apr 24 23:57:24.940310 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 24 23:57:24.954762 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 24 23:57:24.960551 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 24 23:57:24.981043 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 24 23:57:24.985265 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:57:25.006837 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 24 23:57:25.018992 ignition[1064]: INFO : Ignition 2.19.0 Apr 24 23:57:25.018992 ignition[1064]: INFO : Stage: mount Apr 24 23:57:25.023795 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:57:25.023795 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:57:25.023795 ignition[1064]: INFO : mount: mount passed Apr 24 23:57:25.023795 ignition[1064]: INFO : Ignition finished successfully Apr 24 23:57:25.035558 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 24 23:57:25.045538 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 24 23:57:25.064603 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 24 23:57:25.086421 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1074) Apr 24 23:57:25.091420 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:57:25.091471 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:57:25.096959 kernel: BTRFS info (device sda6): using free space tree Apr 24 23:57:25.104430 kernel: BTRFS info (device sda6): auto enabling async discard Apr 24 23:57:25.106291 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 24 23:57:25.132807 ignition[1090]: INFO : Ignition 2.19.0 Apr 24 23:57:25.132807 ignition[1090]: INFO : Stage: files Apr 24 23:57:25.138032 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:57:25.138032 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:57:25.138032 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping Apr 24 23:57:25.148927 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 24 23:57:25.148927 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 24 23:57:25.238275 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 24 23:57:25.242802 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 24 23:57:25.242802 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 24 23:57:25.238747 unknown[1090]: wrote ssh authorized keys file for user: core Apr 24 23:57:25.293397 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 24 23:57:25.299628 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 24 23:57:25.326881 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 24 23:57:25.373227 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 24 23:57:25.379389 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Apr 24 23:57:25.750252 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 24 23:57:27.064069 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 24 23:57:27.064069 ignition[1090]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 24 23:57:27.092415 ignition[1090]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 24 23:57:27.099106 ignition[1090]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 24 23:57:27.099106 ignition[1090]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 24 23:57:27.108973 ignition[1090]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 24 23:57:27.108973 ignition[1090]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 24 23:57:27.108973 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 24 23:57:27.108973 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 24 23:57:27.108973 ignition[1090]: INFO : files: files passed Apr 24 23:57:27.108973 ignition[1090]: INFO : Ignition finished successfully Apr 24 23:57:27.105749 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 24 23:57:27.132629 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 24 23:57:27.144583 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Apr 24 23:57:27.155251 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 24 23:57:27.155362 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 24 23:57:27.172347 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 24 23:57:27.177436 initrd-setup-root-after-ignition[1119]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 24 23:57:27.182242 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 24 23:57:27.192502 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 24 23:57:27.200729 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 24 23:57:27.213835 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 24 23:57:27.238746 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 24 23:57:27.238861 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 24 23:57:27.246091 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 24 23:57:27.255951 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 24 23:57:27.259053 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 24 23:57:27.270615 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 24 23:57:27.284300 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 24 23:57:27.298755 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 24 23:57:27.310746 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:57:27.314744 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 24 23:57:27.325874 systemd[1]: Stopped target timers.target - Timer Units. Apr 24 23:57:27.328826 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 24 23:57:27.328968 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 24 23:57:27.335313 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 24 23:57:27.338743 systemd[1]: Stopped target basic.target - Basic System. Apr 24 23:57:27.344567 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 24 23:57:27.348075 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 24 23:57:27.355316 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 24 23:57:27.359101 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 24 23:57:27.379577 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 24 23:57:27.386983 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 24 23:57:27.390373 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 24 23:57:27.393714 systemd[1]: Stopped target swap.target - Swaps. Apr 24 23:57:27.399383 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 24 23:57:27.399578 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 24 23:57:27.415015 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:57:27.418682 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 23:57:27.418793 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 24 23:57:27.429041 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 24 23:57:27.439735 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 24 23:57:27.439914 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Apr 24 23:57:27.446117 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 24 23:57:27.446271 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 24 23:57:27.460551 systemd[1]: ignition-files.service: Deactivated successfully. Apr 24 23:57:27.460686 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 24 23:57:27.469694 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 24 23:57:27.469868 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 24 23:57:27.489179 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 24 23:57:27.496085 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 24 23:57:27.500521 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 24 23:57:27.500664 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 23:57:27.510496 ignition[1143]: INFO : Ignition 2.19.0 Apr 24 23:57:27.510496 ignition[1143]: INFO : Stage: umount Apr 24 23:57:27.510496 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 23:57:27.510496 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 24 23:57:27.533871 ignition[1143]: INFO : umount: umount passed Apr 24 23:57:27.533871 ignition[1143]: INFO : Ignition finished successfully Apr 24 23:57:27.523622 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 24 23:57:27.523766 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 24 23:57:27.539321 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 24 23:57:27.539462 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 24 23:57:27.556187 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 24 23:57:27.556292 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Apr 24 23:57:27.557783 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 24 23:57:27.557878 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 24 23:57:27.559050 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 24 23:57:27.559090 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 24 23:57:27.559589 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 24 23:57:27.559622 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 24 23:57:27.560266 systemd[1]: Stopped target network.target - Network. Apr 24 23:57:27.560749 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 24 23:57:27.560787 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 24 23:57:27.561287 systemd[1]: Stopped target paths.target - Path Units. Apr 24 23:57:27.561747 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 24 23:57:27.578243 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 24 23:57:27.578339 systemd[1]: Stopped target slices.target - Slice Units. Apr 24 23:57:27.578894 systemd[1]: Stopped target sockets.target - Socket Units. Apr 24 23:57:27.579432 systemd[1]: iscsid.socket: Deactivated successfully. Apr 24 23:57:27.579500 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 24 23:57:27.579941 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 24 23:57:27.579988 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 24 23:57:27.580472 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 24 23:57:27.580521 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 24 23:57:27.580972 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 24 23:57:27.581009 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Apr 24 23:57:27.582308 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 24 23:57:27.582656 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 24 23:57:27.584378 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 24 23:57:27.617198 systemd-networkd[901]: eth0: DHCPv6 lease lost Apr 24 23:57:27.618809 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 24 23:57:27.618930 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 24 23:57:27.623175 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 24 23:57:27.623276 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 24 23:57:27.644008 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 24 23:57:27.650226 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 24 23:57:27.650298 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 24 23:57:27.724143 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:57:27.727998 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 24 23:57:27.728097 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 24 23:57:27.732546 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 24 23:57:27.732705 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 24 23:57:27.756922 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 24 23:57:27.757034 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 24 23:57:27.763700 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 24 23:57:27.763751 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:57:27.763850 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Apr 24 23:57:27.763885 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 24 23:57:27.764339 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 24 23:57:27.764372 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 23:57:27.774672 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 24 23:57:27.774806 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:57:27.781076 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 24 23:57:27.781146 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 24 23:57:27.786994 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 24 23:57:27.787025 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 24 23:57:27.790605 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 24 23:57:27.790655 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 24 23:57:27.796976 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 24 23:57:27.797021 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 24 23:57:27.843436 kernel: hv_netvsc 7c1e5236-befc-7c1e-5236-befc7c1e5236 eth0: Data path switched from VF: enP32357s1 Apr 24 23:57:27.844916 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 24 23:57:27.844987 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 23:57:27.861644 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 24 23:57:27.865329 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 24 23:57:27.868732 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:57:27.875855 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Apr 24 23:57:27.875904 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 23:57:27.886343 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 24 23:57:27.886398 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 23:57:27.901456 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:57:27.901516 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:57:27.908383 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 24 23:57:27.908494 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 24 23:57:27.911737 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 24 23:57:27.911815 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 24 23:57:27.916006 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 24 23:57:27.935361 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 24 23:57:27.947601 systemd[1]: Switching root. Apr 24 23:57:28.039109 systemd-journald[177]: Journal stopped Apr 24 23:57:33.534361 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). 
Apr 24 23:57:33.534417 kernel: SELinux: policy capability network_peer_controls=1 Apr 24 23:57:33.534442 kernel: SELinux: policy capability open_perms=1 Apr 24 23:57:33.534457 kernel: SELinux: policy capability extended_socket_class=1 Apr 24 23:57:33.534470 kernel: SELinux: policy capability always_check_network=0 Apr 24 23:57:33.534484 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 24 23:57:33.534499 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 24 23:57:33.534512 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 24 23:57:33.534529 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 24 23:57:33.534543 kernel: audit: type=1403 audit(1777075050.022:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 24 23:57:33.534560 systemd[1]: Successfully loaded SELinux policy in 147.399ms. Apr 24 23:57:33.534578 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.871ms. Apr 24 23:57:33.534595 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 24 23:57:33.534611 systemd[1]: Detected virtualization microsoft. Apr 24 23:57:33.534631 systemd[1]: Detected architecture x86-64. Apr 24 23:57:33.534647 systemd[1]: Detected first boot. Apr 24 23:57:33.534665 systemd[1]: Hostname set to . Apr 24 23:57:33.534682 systemd[1]: Initializing machine ID from random generator. Apr 24 23:57:33.534701 zram_generator::config[1185]: No configuration found. Apr 24 23:57:33.534723 systemd[1]: Populated /etc with preset unit settings. Apr 24 23:57:33.534740 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 24 23:57:33.534759 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Apr 24 23:57:33.534777 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 24 23:57:33.534795 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 24 23:57:33.534813 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 24 23:57:33.534832 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 24 23:57:33.534854 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 24 23:57:33.534873 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 24 23:57:33.534891 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 24 23:57:33.534909 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 24 23:57:33.534927 systemd[1]: Created slice user.slice - User and Session Slice. Apr 24 23:57:33.534946 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 24 23:57:33.534964 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 24 23:57:33.534983 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 24 23:57:33.535004 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 24 23:57:33.535023 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 24 23:57:33.535041 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 24 23:57:33.535059 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 24 23:57:33.535081 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 23:57:33.535099 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Apr 24 23:57:33.535122 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 24 23:57:33.535142 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 24 23:57:33.535160 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 24 23:57:33.535183 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 24 23:57:33.535202 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 24 23:57:33.535221 systemd[1]: Reached target slices.target - Slice Units. Apr 24 23:57:33.535239 systemd[1]: Reached target swap.target - Swaps. Apr 24 23:57:33.535257 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 24 23:57:33.535275 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 24 23:57:33.535294 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 24 23:57:33.535316 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 24 23:57:33.535336 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 24 23:57:33.535354 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 24 23:57:33.535374 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 24 23:57:33.535393 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 24 23:57:33.535568 systemd[1]: Mounting media.mount - External Media Directory... Apr 24 23:57:33.535588 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:57:33.535607 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 24 23:57:33.535625 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 24 23:57:33.535645 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Apr 24 23:57:33.535664 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 24 23:57:33.535683 systemd[1]: Reached target machines.target - Containers. Apr 24 23:57:33.535701 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 24 23:57:33.535723 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:57:33.535742 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 24 23:57:33.535760 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 24 23:57:33.535779 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:57:33.535797 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 23:57:33.535815 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:57:33.535833 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 24 23:57:33.535852 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:57:33.535873 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 24 23:57:33.535891 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 24 23:57:33.535909 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 24 23:57:33.535928 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 24 23:57:33.535946 systemd[1]: Stopped systemd-fsck-usr.service. Apr 24 23:57:33.535964 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 24 23:57:33.535982 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Apr 24 23:57:33.536000 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 24 23:57:33.536041 systemd-journald[1291]: Collecting audit messages is disabled. Apr 24 23:57:33.536084 systemd-journald[1291]: Journal started Apr 24 23:57:33.536119 systemd-journald[1291]: Runtime Journal (/run/log/journal/107959f228074631b443b374a4fe32b8) is 8.0M, max 158.7M, 150.7M free. Apr 24 23:57:33.546269 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 24 23:57:32.778861 systemd[1]: Queued start job for default target multi-user.target. Apr 24 23:57:32.897938 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 24 23:57:32.898350 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 24 23:57:33.571723 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 24 23:57:33.578423 systemd[1]: verity-setup.service: Deactivated successfully. Apr 24 23:57:33.585191 kernel: fuse: init (API version 7.39) Apr 24 23:57:33.585232 systemd[1]: Stopped verity-setup.service. Apr 24 23:57:33.597697 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:57:33.609125 kernel: loop: module loaded Apr 24 23:57:33.609179 kernel: ACPI: bus type drm_connector registered Apr 24 23:57:33.609199 systemd[1]: Started systemd-journald.service - Journal Service. Apr 24 23:57:33.614005 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 24 23:57:33.617486 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 24 23:57:33.620852 systemd[1]: Mounted media.mount - External Media Directory. Apr 24 23:57:33.623967 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 24 23:57:33.627472 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Apr 24 23:57:33.631248 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 24 23:57:33.634666 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 24 23:57:33.638899 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:57:33.643373 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 24 23:57:33.643589 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 24 23:57:33.647637 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:57:33.647802 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:57:33.651841 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 23:57:33.652003 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 23:57:33.655643 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:57:33.655770 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:57:33.659993 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 24 23:57:33.660154 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 24 23:57:33.663800 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:57:33.663961 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:57:33.667940 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:57:33.672327 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 23:57:33.676908 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 24 23:57:33.694151 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 23:57:33.706996 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 24 23:57:33.719465 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 24 23:57:33.723633 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 24 23:57:33.723679 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:57:33.730964 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 24 23:57:33.743119 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 24 23:57:33.751595 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 24 23:57:33.755010 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:57:33.758232 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 24 23:57:33.764111 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 24 23:57:33.768461 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:57:33.774345 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 24 23:57:33.778332 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:57:33.779218 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:57:33.785803 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 24 23:57:33.792270 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:57:33.799804 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:57:33.810866 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 24 23:57:33.815264 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 24 23:57:33.821224 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 24 23:57:33.825748 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 24 23:57:33.827163 systemd-journald[1291]: Time spent on flushing to /var/log/journal/107959f228074631b443b374a4fe32b8 is 38.822ms for 959 entries.
Apr 24 23:57:33.827163 systemd-journald[1291]: System Journal (/var/log/journal/107959f228074631b443b374a4fe32b8) is 8.0M, max 2.6G, 2.6G free.
Apr 24 23:57:33.893066 systemd-journald[1291]: Received client request to flush runtime journal.
Apr 24 23:57:33.893130 kernel: loop0: detected capacity change from 0 to 219192
Apr 24 23:57:33.835777 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 24 23:57:33.849600 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 24 23:57:33.858574 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 24 23:57:33.893028 udevadm[1331]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 24 23:57:33.894309 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 24 23:57:34.015778 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 24 23:57:34.018678 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 24 23:57:34.025046 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Apr 24 23:57:34.025087 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Apr 24 23:57:34.041429 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 24 23:57:34.043860 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:57:34.058645 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 24 23:57:34.062260 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:57:34.092449 kernel: loop1: detected capacity change from 0 to 31056
Apr 24 23:57:34.176874 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 24 23:57:34.190641 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:57:34.217307 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Apr 24 23:57:34.217661 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Apr 24 23:57:34.223651 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:57:34.458438 kernel: loop2: detected capacity change from 0 to 142488
Apr 24 23:57:34.762213 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 24 23:57:34.775857 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:57:34.799740 systemd-udevd[1348]: Using default interface naming scheme 'v255'.
Apr 24 23:57:35.035194 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:57:35.049584 kernel: loop3: detected capacity change from 0 to 140768
Apr 24 23:57:35.051372 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:57:35.121577 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 24 23:57:35.174849 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 24 23:57:35.205864 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#61 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 24 23:57:35.216315 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 24 23:57:35.236201 kernel: mousedev: PS/2 mouse device common for all mice
Apr 24 23:57:35.310507 kernel: hv_vmbus: registering driver hv_balloon
Apr 24 23:57:35.320516 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Apr 24 23:57:35.337452 kernel: hv_vmbus: registering driver hyperv_fb
Apr 24 23:57:35.341703 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:57:35.356422 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Apr 24 23:57:35.363509 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Apr 24 23:57:35.372306 kernel: Console: switching to colour dummy device 80x25
Apr 24 23:57:35.380728 kernel: Console: switching to colour frame buffer device 128x48
Apr 24 23:57:35.418992 systemd-networkd[1354]: lo: Link UP
Apr 24 23:57:35.420938 systemd-networkd[1354]: lo: Gained carrier
Apr 24 23:57:35.520487 systemd-networkd[1354]: Enumeration completed
Apr 24 23:57:35.521340 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:57:35.531847 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1362)
Apr 24 23:57:35.526105 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:57:35.531376 systemd-networkd[1354]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:57:35.536197 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 24 23:57:35.545664 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:57:35.546655 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:57:35.564580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:57:35.586419 kernel: loop4: detected capacity change from 0 to 219192
Apr 24 23:57:35.615643 kernel: mlx5_core 7e65:00:02.0 enP32357s1: Link up
Apr 24 23:57:35.615979 kernel: loop5: detected capacity change from 0 to 31056
Apr 24 23:57:35.638635 kernel: hv_netvsc 7c1e5236-befc-7c1e-5236-befc7c1e5236 eth0: Data path switched to VF: enP32357s1
Apr 24 23:57:35.638901 kernel: loop6: detected capacity change from 0 to 142488
Apr 24 23:57:35.641962 systemd-networkd[1354]: enP32357s1: Link UP
Apr 24 23:57:35.642929 systemd-networkd[1354]: eth0: Link UP
Apr 24 23:57:35.643019 systemd-networkd[1354]: eth0: Gained carrier
Apr 24 23:57:35.643095 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:57:35.649768 systemd-networkd[1354]: enP32357s1: Gained carrier
Apr 24 23:57:35.669446 kernel: loop7: detected capacity change from 0 to 140768
Apr 24 23:57:35.677433 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Apr 24 23:57:35.709706 (sd-merge)[1415]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Apr 24 23:57:35.710807 (sd-merge)[1415]: Merged extensions into '/usr'.
Apr 24 23:57:35.738762 systemd[1]: Reloading requested from client PID 1321 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 24 23:57:35.738909 systemd[1]: Reloading...
Apr 24 23:57:35.809500 systemd-networkd[1354]: eth0: DHCPv4 address 10.0.0.19/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 24 23:57:35.857447 zram_generator::config[1470]: No configuration found.
Apr 24 23:57:36.022346 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:57:36.104538 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 24 23:57:36.109483 systemd[1]: Reloading finished in 367 ms.
Apr 24 23:57:36.140244 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 24 23:57:36.145001 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:57:36.149911 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 24 23:57:36.170567 systemd[1]: Starting ensure-sysext.service...
Apr 24 23:57:36.176595 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 24 23:57:36.186571 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 24 23:57:36.191763 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:57:36.198151 systemd[1]: Reloading requested from client PID 1533 ('systemctl') (unit ensure-sysext.service)...
Apr 24 23:57:36.198270 systemd[1]: Reloading...
Apr 24 23:57:36.240520 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 24 23:57:36.241447 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 24 23:57:36.242792 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 24 23:57:36.243223 systemd-tmpfiles[1536]: ACLs are not supported, ignoring.
Apr 24 23:57:36.243315 systemd-tmpfiles[1536]: ACLs are not supported, ignoring.
Apr 24 23:57:36.247769 lvm[1534]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 24 23:57:36.288671 systemd-tmpfiles[1536]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:57:36.288686 systemd-tmpfiles[1536]: Skipping /boot
Apr 24 23:57:36.295534 zram_generator::config[1565]: No configuration found.
Apr 24 23:57:36.309691 systemd-tmpfiles[1536]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:57:36.309828 systemd-tmpfiles[1536]: Skipping /boot
Apr 24 23:57:36.450175 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:57:36.528858 systemd[1]: Reloading finished in 329 ms.
Apr 24 23:57:36.553916 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 24 23:57:36.558582 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 24 23:57:36.562934 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:57:36.573280 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:57:36.581680 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 24 23:57:36.607565 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 24 23:57:36.612825 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 24 23:57:36.618695 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 24 23:57:36.628789 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:57:36.634540 lvm[1639]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 24 23:57:36.641708 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 24 23:57:36.649496 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:57:36.649767 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:57:36.657745 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:57:36.663713 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:57:36.675164 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:57:36.680885 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:57:36.681060 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:57:36.684474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:57:36.684654 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:57:36.692842 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:57:36.698298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:57:36.698535 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:57:36.707044 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 24 23:57:36.712393 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:57:36.712693 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:57:36.731386 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:57:36.732837 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:57:36.738497 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:57:36.744709 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:57:36.755000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:57:36.765708 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:57:36.769220 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:57:36.769491 systemd[1]: Reached target time-set.target - System Time Set.
Apr 24 23:57:36.777162 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:57:36.778857 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 24 23:57:36.783877 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:57:36.784082 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:57:36.787777 augenrules[1662]: No rules
Apr 24 23:57:36.789641 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 24 23:57:36.793869 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 23:57:36.794059 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 23:57:36.797715 systemd-networkd[1354]: eth0: Gained IPv6LL
Apr 24 23:57:36.799478 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:57:36.799649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:57:36.805546 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 24 23:57:36.810059 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:57:36.810956 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:57:36.815017 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 24 23:57:36.822269 systemd[1]: Finished ensure-sysext.service.
Apr 24 23:57:36.833692 systemd-resolved[1641]: Positive Trust Anchors:
Apr 24 23:57:36.833714 systemd-resolved[1641]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:57:36.833767 systemd-resolved[1641]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:57:36.836197 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:57:36.836272 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:57:36.863383 systemd-resolved[1641]: Using system hostname 'ci-4081.3.6-n-3087b9d021'.
Apr 24 23:57:36.865553 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:57:36.869436 systemd[1]: Reached target network.target - Network.
Apr 24 23:57:36.872254 systemd[1]: Reached target network-online.target - Network is Online.
Apr 24 23:57:36.875622 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:57:37.289316 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 24 23:57:37.293933 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 24 23:57:39.617513 ldconfig[1316]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 24 23:57:39.630683 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 24 23:57:39.639598 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 24 23:57:39.665678 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 24 23:57:39.669642 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:57:39.673308 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 24 23:57:39.677205 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 24 23:57:39.681312 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 24 23:57:39.684642 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 24 23:57:39.692191 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 24 23:57:39.695970 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 24 23:57:39.696001 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:57:39.698986 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:57:39.702823 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 24 23:57:39.707729 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 24 23:57:39.715488 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 24 23:57:39.719360 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 24 23:57:39.723344 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:57:39.726491 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:57:39.729636 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 24 23:57:39.729672 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 24 23:57:39.731935 systemd[1]: Starting chronyd.service - NTP client/server...
Apr 24 23:57:39.736519 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 24 23:57:39.750587 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 24 23:57:39.758663 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 24 23:57:39.767588 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 24 23:57:39.775307 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 24 23:57:39.778747 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 24 23:57:39.778798 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Apr 24 23:57:39.779750 jq[1689]: false
Apr 24 23:57:39.782948 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Apr 24 23:57:39.786914 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Apr 24 23:57:39.789438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:57:39.797355 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 24 23:57:39.806596 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 24 23:57:39.814559 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 24 23:57:39.822175 (chronyd)[1683]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Apr 24 23:57:39.823555 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 24 23:57:39.839796 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 24 23:57:39.856262 KVP[1691]: KVP starting; pid is:1691
Apr 24 23:57:39.856608 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 24 23:57:39.858497 chronyd[1706]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Apr 24 23:57:39.864808 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 24 23:57:39.865321 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 24 23:57:39.869607 systemd[1]: Starting update-engine.service - Update Engine...
Apr 24 23:57:39.876535 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 24 23:57:39.886675 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 24 23:57:39.888639 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 24 23:57:39.894607 kernel: hv_utils: KVP IC version 4.0
Apr 24 23:57:39.894834 systemd[1]: motdgen.service: Deactivated successfully.
Apr 24 23:57:39.895046 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 24 23:57:39.896431 KVP[1691]: KVP LIC Version: 3.1
Apr 24 23:57:39.899297 chronyd[1706]: Timezone right/UTC failed leap second check, ignoring
Apr 24 23:57:39.899881 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 24 23:57:39.899685 chronyd[1706]: Loaded seccomp filter (level 2)
Apr 24 23:57:39.900096 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 24 23:57:39.910362 systemd[1]: Started chronyd.service - NTP client/server.
Apr 24 23:57:39.919168 extend-filesystems[1690]: Found loop4
Apr 24 23:57:39.919168 extend-filesystems[1690]: Found loop5
Apr 24 23:57:39.919168 extend-filesystems[1690]: Found loop6
Apr 24 23:57:39.919168 extend-filesystems[1690]: Found loop7
Apr 24 23:57:39.919168 extend-filesystems[1690]: Found sda
Apr 24 23:57:39.919168 extend-filesystems[1690]: Found sda1
Apr 24 23:57:39.919168 extend-filesystems[1690]: Found sda2
Apr 24 23:57:39.919168 extend-filesystems[1690]: Found sda3
Apr 24 23:57:39.919168 extend-filesystems[1690]: Found usr
Apr 24 23:57:39.919168 extend-filesystems[1690]: Found sda4
Apr 24 23:57:39.919168 extend-filesystems[1690]: Found sda6
Apr 24 23:57:39.919168 extend-filesystems[1690]: Found sda7
Apr 24 23:57:39.919168 extend-filesystems[1690]: Found sda9
Apr 24 23:57:40.059260 extend-filesystems[1690]: Checking size of /dev/sda9
Apr 24 23:57:40.059260 extend-filesystems[1690]: Old size kept for /dev/sda9
Apr 24 23:57:40.059260 extend-filesystems[1690]: Found sr0
Apr 24 23:57:40.066236 update_engine[1710]: I20260424 23:57:39.961043 1710 main.cc:92] Flatcar Update Engine starting
Apr 24 23:57:40.066236 update_engine[1710]: I20260424 23:57:39.992740 1710 update_check_scheduler.cc:74] Next update check in 6m24s
Apr 24 23:57:39.933285 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 24 23:57:39.929496 dbus-daemon[1686]: [system] SELinux support is enabled
Apr 24 23:57:40.068418 tar[1717]: linux-amd64/LICENSE
Apr 24 23:57:40.068418 tar[1717]: linux-amd64/helm
Apr 24 23:57:39.949808 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 24 23:57:40.068800 jq[1712]: true
Apr 24 23:57:39.949878 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 24 23:57:39.958017 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 24 23:57:40.069131 jq[1735]: true
Apr 24 23:57:39.958047 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 24 23:57:39.977941 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 24 23:57:39.989854 (ntainerd)[1719]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 24 23:57:39.997168 systemd[1]: Started update-engine.service - Update Engine.
Apr 24 23:57:40.036561 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 24 23:57:40.037841 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 24 23:57:40.055847 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 24 23:57:40.098894 coreos-metadata[1685]: Apr 24 23:57:40.097 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 24 23:57:40.100678 coreos-metadata[1685]: Apr 24 23:57:40.100 INFO Fetch successful
Apr 24 23:57:40.100678 coreos-metadata[1685]: Apr 24 23:57:40.100 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Apr 24 23:57:40.106036 coreos-metadata[1685]: Apr 24 23:57:40.106 INFO Fetch successful
Apr 24 23:57:40.106130 coreos-metadata[1685]: Apr 24 23:57:40.106 INFO Fetching http://168.63.129.16/machine/4cf3f6fc-1c86-4cf3-bde2-6aa4fc2d4c1b/f78c4924%2D440f%2D4159%2Da343%2D8df96a604d37.%5Fci%2D4081.3.6%2Dn%2D3087b9d021?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Apr 24 23:57:40.110788 coreos-metadata[1685]: Apr 24 23:57:40.110 INFO Fetch successful
Apr 24 23:57:40.111970 coreos-metadata[1685]: Apr 24 23:57:40.110 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Apr 24 23:57:40.125471 coreos-metadata[1685]: Apr 24 23:57:40.121 INFO Fetch successful
Apr 24 23:57:40.154063 systemd-logind[1702]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 24 23:57:40.158532 systemd-logind[1702]: New seat seat0.
Apr 24 23:57:40.161849 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 24 23:57:40.189222 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 24 23:57:40.199336 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 24 23:57:40.269428 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1744)
Apr 24 23:57:40.329986 bash[1769]: Updated "/home/core/.ssh/authorized_keys"
Apr 24 23:57:40.333646 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 24 23:57:40.343882 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 24 23:57:40.508324 locksmithd[1741]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 24 23:57:40.605058 sshd_keygen[1736]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 24 23:57:40.659301 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 24 23:57:40.676807 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 24 23:57:40.689632 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Apr 24 23:57:40.702612 systemd[1]: issuegen.service: Deactivated successfully. Apr 24 23:57:40.702868 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 24 23:57:40.714167 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 24 23:57:40.732389 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Apr 24 23:57:40.757219 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 24 23:57:40.771440 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 24 23:57:40.782725 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 24 23:57:40.787098 systemd[1]: Reached target getty.target - Login Prompts. Apr 24 23:57:41.101089 containerd[1719]: time="2026-04-24T23:57:41.100940300Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 24 23:57:41.137014 tar[1717]: linux-amd64/README.md Apr 24 23:57:41.154532 containerd[1719]: time="2026-04-24T23:57:41.154485600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:57:41.156674 containerd[1719]: time="2026-04-24T23:57:41.156631500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:57:41.156786 containerd[1719]: time="2026-04-24T23:57:41.156770400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 24 23:57:41.156871 containerd[1719]: time="2026-04-24T23:57:41.156857100Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 24 23:57:41.157092 containerd[1719]: time="2026-04-24T23:57:41.157075000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 24 23:57:41.157178 containerd[1719]: time="2026-04-24T23:57:41.157163500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 24 23:57:41.157309 containerd[1719]: time="2026-04-24T23:57:41.157291400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:57:41.157378 containerd[1719]: time="2026-04-24T23:57:41.157365000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:57:41.158272 containerd[1719]: time="2026-04-24T23:57:41.158070700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:57:41.158272 containerd[1719]: time="2026-04-24T23:57:41.158097700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Apr 24 23:57:41.158272 containerd[1719]: time="2026-04-24T23:57:41.158117700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:57:41.158272 containerd[1719]: time="2026-04-24T23:57:41.158132800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 24 23:57:41.159469 containerd[1719]: time="2026-04-24T23:57:41.158540100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:57:41.159469 containerd[1719]: time="2026-04-24T23:57:41.158803900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:57:41.159771 containerd[1719]: time="2026-04-24T23:57:41.159743500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:57:41.160265 containerd[1719]: time="2026-04-24T23:57:41.160232000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 24 23:57:41.160381 containerd[1719]: time="2026-04-24T23:57:41.160355800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 24 23:57:41.162491 containerd[1719]: time="2026-04-24T23:57:41.160441400Z" level=info msg="metadata content store policy set" policy=shared Apr 24 23:57:41.165885 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 24 23:57:41.180971 containerd[1719]: time="2026-04-24T23:57:41.180935400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Apr 24 23:57:41.181053 containerd[1719]: time="2026-04-24T23:57:41.181027900Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 24 23:57:41.181093 containerd[1719]: time="2026-04-24T23:57:41.181057100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 24 23:57:41.181130 containerd[1719]: time="2026-04-24T23:57:41.181117900Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 24 23:57:41.181182 containerd[1719]: time="2026-04-24T23:57:41.181142900Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 24 23:57:41.181331 containerd[1719]: time="2026-04-24T23:57:41.181288700Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 24 23:57:41.181692 containerd[1719]: time="2026-04-24T23:57:41.181660400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 24 23:57:41.181859 containerd[1719]: time="2026-04-24T23:57:41.181797800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 24 23:57:41.181859 containerd[1719]: time="2026-04-24T23:57:41.181829300Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 24 23:57:41.181947 containerd[1719]: time="2026-04-24T23:57:41.181849400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 24 23:57:41.181947 containerd[1719]: time="2026-04-24T23:57:41.181900000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Apr 24 23:57:41.181947 containerd[1719]: time="2026-04-24T23:57:41.181920400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 24 23:57:41.181947 containerd[1719]: time="2026-04-24T23:57:41.181938200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 24 23:57:41.182100 containerd[1719]: time="2026-04-24T23:57:41.181960000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 24 23:57:41.182100 containerd[1719]: time="2026-04-24T23:57:41.181981600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 24 23:57:41.182100 containerd[1719]: time="2026-04-24T23:57:41.182000500Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 24 23:57:41.182100 containerd[1719]: time="2026-04-24T23:57:41.182019100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 24 23:57:41.182100 containerd[1719]: time="2026-04-24T23:57:41.182047800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 24 23:57:41.182100 containerd[1719]: time="2026-04-24T23:57:41.182078200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 24 23:57:41.182326 containerd[1719]: time="2026-04-24T23:57:41.182123700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 24 23:57:41.182326 containerd[1719]: time="2026-04-24T23:57:41.182143700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Apr 24 23:57:41.182326 containerd[1719]: time="2026-04-24T23:57:41.182163700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 24 23:57:41.182326 containerd[1719]: time="2026-04-24T23:57:41.182181100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 24 23:57:41.182326 containerd[1719]: time="2026-04-24T23:57:41.182200100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 24 23:57:41.182326 containerd[1719]: time="2026-04-24T23:57:41.182217800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 24 23:57:41.182326 containerd[1719]: time="2026-04-24T23:57:41.182237500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 24 23:57:41.182326 containerd[1719]: time="2026-04-24T23:57:41.182256400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 24 23:57:41.182326 containerd[1719]: time="2026-04-24T23:57:41.182278900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 24 23:57:41.182326 containerd[1719]: time="2026-04-24T23:57:41.182296200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 24 23:57:41.182326 containerd[1719]: time="2026-04-24T23:57:41.182313200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 24 23:57:41.182719 containerd[1719]: time="2026-04-24T23:57:41.182331100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 24 23:57:41.182719 containerd[1719]: time="2026-04-24T23:57:41.182356600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Apr 24 23:57:41.182719 containerd[1719]: time="2026-04-24T23:57:41.182387500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 24 23:57:41.182719 containerd[1719]: time="2026-04-24T23:57:41.182430100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 24 23:57:41.182719 containerd[1719]: time="2026-04-24T23:57:41.182448100Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 24 23:57:41.182719 containerd[1719]: time="2026-04-24T23:57:41.182528200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 24 23:57:41.182719 containerd[1719]: time="2026-04-24T23:57:41.182551600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 24 23:57:41.182719 containerd[1719]: time="2026-04-24T23:57:41.182630100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 24 23:57:41.182719 containerd[1719]: time="2026-04-24T23:57:41.182649000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 24 23:57:41.182719 containerd[1719]: time="2026-04-24T23:57:41.182663100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 24 23:57:41.182719 containerd[1719]: time="2026-04-24T23:57:41.182680200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 24 23:57:41.182719 containerd[1719]: time="2026-04-24T23:57:41.182698200Z" level=info msg="NRI interface is disabled by configuration." 
Apr 24 23:57:41.182719 containerd[1719]: time="2026-04-24T23:57:41.182714300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 24 23:57:41.183217 containerd[1719]: time="2026-04-24T23:57:41.183122700Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} 
MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 24 23:57:41.183516 containerd[1719]: time="2026-04-24T23:57:41.183245300Z" level=info msg="Connect containerd service" Apr 24 23:57:41.183516 containerd[1719]: time="2026-04-24T23:57:41.183304100Z" level=info msg="using legacy CRI server" Apr 24 23:57:41.183516 containerd[1719]: time="2026-04-24T23:57:41.183325000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 24 23:57:41.183721 containerd[1719]: time="2026-04-24T23:57:41.183548000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 24 23:57:41.184322 containerd[1719]: time="2026-04-24T23:57:41.184209500Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 23:57:41.184416 containerd[1719]: time="2026-04-24T23:57:41.184364700Z" level=info msg="Start subscribing containerd event" Apr 24 23:57:41.184456 containerd[1719]: time="2026-04-24T23:57:41.184440900Z" level=info msg="Start recovering state" Apr 24 23:57:41.184535 containerd[1719]: 
time="2026-04-24T23:57:41.184513100Z" level=info msg="Start event monitor" Apr 24 23:57:41.184815 containerd[1719]: time="2026-04-24T23:57:41.184538900Z" level=info msg="Start snapshots syncer" Apr 24 23:57:41.184815 containerd[1719]: time="2026-04-24T23:57:41.184551900Z" level=info msg="Start cni network conf syncer for default" Apr 24 23:57:41.184815 containerd[1719]: time="2026-04-24T23:57:41.184563000Z" level=info msg="Start streaming server" Apr 24 23:57:41.185710 containerd[1719]: time="2026-04-24T23:57:41.185079400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 24 23:57:41.185710 containerd[1719]: time="2026-04-24T23:57:41.185181600Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 24 23:57:41.185710 containerd[1719]: time="2026-04-24T23:57:41.185315800Z" level=info msg="containerd successfully booted in 0.087188s" Apr 24 23:57:41.185421 systemd[1]: Started containerd.service - containerd container runtime. Apr 24 23:57:41.696309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:57:41.702754 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 24 23:57:41.704258 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:57:41.706338 systemd[1]: Startup finished in 1.077s (kernel) + 12.084s (initrd) + 11.829s (userspace) = 24.990s. Apr 24 23:57:42.067138 login[1830]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 24 23:57:42.070727 login[1831]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 24 23:57:42.087283 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 24 23:57:42.087559 systemd-logind[1702]: New session 2 of user core. Apr 24 23:57:42.097717 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Apr 24 23:57:42.104952 systemd-logind[1702]: New session 1 of user core. Apr 24 23:57:42.117834 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 24 23:57:42.128753 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 24 23:57:42.135037 (systemd)[1858]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 24 23:57:42.354103 systemd[1858]: Queued start job for default target default.target. Apr 24 23:57:42.358969 systemd[1858]: Created slice app.slice - User Application Slice. Apr 24 23:57:42.359006 systemd[1858]: Reached target paths.target - Paths. Apr 24 23:57:42.359025 systemd[1858]: Reached target timers.target - Timers. Apr 24 23:57:42.362372 systemd[1858]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 24 23:57:42.377243 systemd[1858]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 24 23:57:42.378126 systemd[1858]: Reached target sockets.target - Sockets. Apr 24 23:57:42.378147 systemd[1858]: Reached target basic.target - Basic System. Apr 24 23:57:42.378369 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 24 23:57:42.379797 systemd[1858]: Reached target default.target - Main User Target. Apr 24 23:57:42.379851 systemd[1858]: Startup finished in 233ms. Apr 24 23:57:42.386580 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 24 23:57:42.388714 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 24 23:57:42.417156 kubelet[1847]: E0424 23:57:42.417110 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:57:42.419570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:57:42.419794 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:57:42.581698 waagent[1827]: 2026-04-24T23:57:42.581326Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Apr 24 23:57:42.624939 waagent[1827]: 2026-04-24T23:57:42.581903Z INFO Daemon Daemon OS: flatcar 4081.3.6 Apr 24 23:57:42.624939 waagent[1827]: 2026-04-24T23:57:42.583004Z INFO Daemon Daemon Python: 3.11.9 Apr 24 23:57:42.624939 waagent[1827]: 2026-04-24T23:57:42.584201Z INFO Daemon Daemon Run daemon Apr 24 23:57:42.624939 waagent[1827]: 2026-04-24T23:57:42.585125Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Apr 24 23:57:42.624939 waagent[1827]: 2026-04-24T23:57:42.586061Z INFO Daemon Daemon Using waagent for provisioning Apr 24 23:57:42.624939 waagent[1827]: 2026-04-24T23:57:42.587290Z INFO Daemon Daemon Activate resource disk Apr 24 23:57:42.624939 waagent[1827]: 2026-04-24T23:57:42.588204Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 24 23:57:42.624939 waagent[1827]: 2026-04-24T23:57:42.593090Z INFO Daemon Daemon Found device: None Apr 24 23:57:42.624939 waagent[1827]: 2026-04-24T23:57:42.594032Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 24 23:57:42.624939 waagent[1827]: 2026-04-24T23:57:42.595161Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] 
unable to detect disk topology, duration=0 Apr 24 23:57:42.624939 waagent[1827]: 2026-04-24T23:57:42.597592Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 24 23:57:42.624939 waagent[1827]: 2026-04-24T23:57:42.597809Z INFO Daemon Daemon Running default provisioning handler Apr 24 23:57:42.629006 waagent[1827]: 2026-04-24T23:57:42.628591Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Apr 24 23:57:42.636597 waagent[1827]: 2026-04-24T23:57:42.636547Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 24 23:57:42.647439 waagent[1827]: 2026-04-24T23:57:42.636731Z INFO Daemon Daemon cloud-init is enabled: False Apr 24 23:57:42.647439 waagent[1827]: 2026-04-24T23:57:42.637249Z INFO Daemon Daemon Copying ovf-env.xml Apr 24 23:57:42.723432 waagent[1827]: 2026-04-24T23:57:42.718650Z INFO Daemon Daemon Successfully mounted dvd Apr 24 23:57:42.755458 waagent[1827]: 2026-04-24T23:57:42.748788Z INFO Daemon Daemon Detect protocol endpoint Apr 24 23:57:42.755458 waagent[1827]: 2026-04-24T23:57:42.749111Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 24 23:57:42.755458 waagent[1827]: 2026-04-24T23:57:42.750365Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Apr 24 23:57:42.755458 waagent[1827]: 2026-04-24T23:57:42.750893Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 24 23:57:42.755458 waagent[1827]: 2026-04-24T23:57:42.751534Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 24 23:57:42.755458 waagent[1827]: 2026-04-24T23:57:42.752373Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 24 23:57:42.766871 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Apr 24 23:57:42.788702 waagent[1827]: 2026-04-24T23:57:42.788649Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 24 23:57:42.797839 waagent[1827]: 2026-04-24T23:57:42.789074Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 24 23:57:42.797839 waagent[1827]: 2026-04-24T23:57:42.790001Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 24 23:57:42.981771 waagent[1827]: 2026-04-24T23:57:42.981596Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 24 23:57:42.989687 waagent[1827]: 2026-04-24T23:57:42.981961Z INFO Daemon Daemon Forcing an update of the goal state. Apr 24 23:57:42.991820 waagent[1827]: 2026-04-24T23:57:42.991758Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 24 23:57:43.008937 waagent[1827]: 2026-04-24T23:57:43.008881Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.181 Apr 24 23:57:43.030307 waagent[1827]: 2026-04-24T23:57:43.009541Z INFO Daemon Apr 24 23:57:43.030307 waagent[1827]: 2026-04-24T23:57:43.010173Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 30a5d0a5-3af8-4c63-972b-b8032e2eb6d8 eTag: 6082056389066466084 source: Fabric] Apr 24 23:57:43.030307 waagent[1827]: 2026-04-24T23:57:43.011481Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Apr 24 23:57:43.030307 waagent[1827]: 2026-04-24T23:57:43.012291Z INFO Daemon Apr 24 23:57:43.030307 waagent[1827]: 2026-04-24T23:57:43.013354Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 24 23:57:43.030307 waagent[1827]: 2026-04-24T23:57:43.017148Z INFO Daemon Daemon Downloading artifacts profile blob Apr 24 23:57:43.171999 waagent[1827]: 2026-04-24T23:57:43.171911Z INFO Daemon Downloaded certificate {'thumbprint': '5AE7E30CB2204D9A9E89328990C197B66FB61770', 'hasPrivateKey': True} Apr 24 23:57:43.178349 waagent[1827]: 2026-04-24T23:57:43.178283Z INFO Daemon Fetch goal state completed Apr 24 23:57:43.227580 waagent[1827]: 2026-04-24T23:57:43.227517Z INFO Daemon Daemon Starting provisioning Apr 24 23:57:43.230707 waagent[1827]: 2026-04-24T23:57:43.230561Z INFO Daemon Daemon Handle ovf-env.xml. Apr 24 23:57:43.236376 waagent[1827]: 2026-04-24T23:57:43.230762Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-3087b9d021] Apr 24 23:57:43.259327 waagent[1827]: 2026-04-24T23:57:43.259263Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-3087b9d021] Apr 24 23:57:43.267942 waagent[1827]: 2026-04-24T23:57:43.259695Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 24 23:57:43.267942 waagent[1827]: 2026-04-24T23:57:43.260786Z INFO Daemon Daemon Primary interface is [eth0] Apr 24 23:57:43.285655 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:57:43.285663 systemd-networkd[1354]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 24 23:57:43.285712 systemd-networkd[1354]: eth0: DHCP lease lost Apr 24 23:57:43.287093 waagent[1827]: 2026-04-24T23:57:43.287005Z INFO Daemon Daemon Create user account if not exists Apr 24 23:57:43.290315 waagent[1827]: 2026-04-24T23:57:43.290233Z INFO Daemon Daemon User core already exists, skip useradd Apr 24 23:57:43.305679 waagent[1827]: 2026-04-24T23:57:43.290462Z INFO Daemon Daemon Configure sudoer Apr 24 23:57:43.305679 waagent[1827]: 2026-04-24T23:57:43.291774Z INFO Daemon Daemon Configure sshd Apr 24 23:57:43.305679 waagent[1827]: 2026-04-24T23:57:43.292664Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 24 23:57:43.305679 waagent[1827]: 2026-04-24T23:57:43.293436Z INFO Daemon Daemon Deploy ssh public key. Apr 24 23:57:43.305550 systemd-networkd[1354]: eth0: DHCPv6 lease lost Apr 24 23:57:43.347519 systemd-networkd[1354]: eth0: DHCPv4 address 10.0.0.19/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 24 23:57:52.670604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 24 23:57:52.676624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:57:52.832297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:57:52.837108 (kubelet)[1919]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:57:52.875828 kubelet[1919]: E0424 23:57:52.875773 1919 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:57:52.879502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:57:52.879719 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:58:02.906787 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 24 23:58:02.914960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:58:03.020990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:58:03.033731 (kubelet)[1935]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:58:03.070286 kubelet[1935]: E0424 23:58:03.070210 1935 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:58:03.072828 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:58:03.073046 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:58:03.691960 chronyd[1706]: Selected source PHC0 Apr 24 23:58:13.157277 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Apr 24 23:58:13.164963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:58:13.281768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:58:13.293248 (kubelet)[1950]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:58:13.330323 kubelet[1950]: E0424 23:58:13.330268 1950 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:58:13.332786 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:58:13.333010 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:58:13.357152 waagent[1827]: 2026-04-24T23:58:13.357092Z INFO Daemon Daemon Provisioning complete Apr 24 23:58:13.369448 waagent[1827]: 2026-04-24T23:58:13.369360Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 24 23:58:13.377705 waagent[1827]: 2026-04-24T23:58:13.369684Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Apr 24 23:58:13.377705 waagent[1827]: 2026-04-24T23:58:13.370841Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Apr 24 23:58:13.496267 waagent[1957]: 2026-04-24T23:58:13.496100Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Apr 24 23:58:13.496703 waagent[1957]: 2026-04-24T23:58:13.496264Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Apr 24 23:58:13.496703 waagent[1957]: 2026-04-24T23:58:13.496354Z INFO ExtHandler ExtHandler Python: 3.11.9 Apr 24 23:58:14.051511 waagent[1957]: 2026-04-24T23:58:14.051382Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 24 23:58:14.051775 waagent[1957]: 2026-04-24T23:58:14.051717Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 24 23:58:14.051885 waagent[1957]: 2026-04-24T23:58:14.051839Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 24 23:58:14.059485 waagent[1957]: 2026-04-24T23:58:14.059395Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 24 23:58:14.069367 waagent[1957]: 2026-04-24T23:58:14.069306Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.181 Apr 24 23:58:14.069869 waagent[1957]: 2026-04-24T23:58:14.069809Z INFO ExtHandler Apr 24 23:58:14.069960 waagent[1957]: 2026-04-24T23:58:14.069910Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a72520da-092c-4f39-b2e9-e5ab57e98855 eTag: 6082056389066466084 source: Fabric] Apr 24 23:58:14.070278 waagent[1957]: 2026-04-24T23:58:14.070228Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Apr 24 23:58:14.070916 waagent[1957]: 2026-04-24T23:58:14.070860Z INFO ExtHandler Apr 24 23:58:14.070992 waagent[1957]: 2026-04-24T23:58:14.070950Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 24 23:58:14.074129 waagent[1957]: 2026-04-24T23:58:14.074085Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 24 23:58:14.132213 waagent[1957]: 2026-04-24T23:58:14.132146Z INFO ExtHandler Downloaded certificate {'thumbprint': '5AE7E30CB2204D9A9E89328990C197B66FB61770', 'hasPrivateKey': True} Apr 24 23:58:14.132726 waagent[1957]: 2026-04-24T23:58:14.132669Z INFO ExtHandler Fetch goal state completed Apr 24 23:58:14.146888 waagent[1957]: 2026-04-24T23:58:14.146823Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1957 Apr 24 23:58:14.147050 waagent[1957]: 2026-04-24T23:58:14.147000Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Apr 24 23:58:14.148662 waagent[1957]: 2026-04-24T23:58:14.148602Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Apr 24 23:58:14.149037 waagent[1957]: 2026-04-24T23:58:14.148985Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 24 23:58:14.181917 waagent[1957]: 2026-04-24T23:58:14.181871Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 24 23:58:14.182128 waagent[1957]: 2026-04-24T23:58:14.182079Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 24 23:58:14.189184 waagent[1957]: 2026-04-24T23:58:14.189144Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Apr 24 23:58:14.196216 systemd[1]: Reloading requested from client PID 1970 ('systemctl') (unit waagent.service)... Apr 24 23:58:14.196237 systemd[1]: Reloading... 
Apr 24 23:58:14.278451 zram_generator::config[2001]: No configuration found. Apr 24 23:58:14.411788 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:58:14.491854 systemd[1]: Reloading finished in 295 ms. Apr 24 23:58:14.521133 waagent[1957]: 2026-04-24T23:58:14.520706Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Apr 24 23:58:14.529067 systemd[1]: Reloading requested from client PID 2064 ('systemctl') (unit waagent.service)... Apr 24 23:58:14.529084 systemd[1]: Reloading... Apr 24 23:58:14.615546 zram_generator::config[2098]: No configuration found. Apr 24 23:58:14.729691 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:58:14.807154 systemd[1]: Reloading finished in 277 ms. Apr 24 23:58:14.837452 waagent[1957]: 2026-04-24T23:58:14.836624Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 24 23:58:14.837452 waagent[1957]: 2026-04-24T23:58:14.836818Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 24 23:58:15.306891 waagent[1957]: 2026-04-24T23:58:15.306785Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Apr 24 23:58:15.307511 waagent[1957]: 2026-04-24T23:58:15.307436Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Apr 24 23:58:15.308337 waagent[1957]: 2026-04-24T23:58:15.308259Z INFO ExtHandler ExtHandler Starting env monitor service. 
Apr 24 23:58:15.308873 waagent[1957]: 2026-04-24T23:58:15.308795Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 24 23:58:15.309169 waagent[1957]: 2026-04-24T23:58:15.309115Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 24 23:58:15.309384 waagent[1957]: 2026-04-24T23:58:15.309332Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 24 23:58:15.309796 waagent[1957]: 2026-04-24T23:58:15.309713Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 24 23:58:15.309866 waagent[1957]: 2026-04-24T23:58:15.309798Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 24 23:58:15.310176 waagent[1957]: 2026-04-24T23:58:15.310129Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 24 23:58:15.310321 waagent[1957]: 2026-04-24T23:58:15.310255Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 24 23:58:15.310525 waagent[1957]: 2026-04-24T23:58:15.310481Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 24 23:58:15.310619 waagent[1957]: 2026-04-24T23:58:15.310535Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Apr 24 23:58:15.311080 waagent[1957]: 2026-04-24T23:58:15.311018Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 24 23:58:15.311416 waagent[1957]: 2026-04-24T23:58:15.311328Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Apr 24 23:58:15.311888 waagent[1957]: 2026-04-24T23:58:15.311809Z INFO EnvHandler ExtHandler Configure routes Apr 24 23:58:15.312543 waagent[1957]: 2026-04-24T23:58:15.312190Z INFO EnvHandler ExtHandler Gateway:None Apr 24 23:58:15.312880 waagent[1957]: 2026-04-24T23:58:15.312832Z INFO EnvHandler ExtHandler Routes:None Apr 24 23:58:15.313639 waagent[1957]: 2026-04-24T23:58:15.313588Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 24 23:58:15.313639 waagent[1957]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 24 23:58:15.313639 waagent[1957]: eth0 00000000 0100000A 0003 0 0 1024 00000000 0 0 0 Apr 24 23:58:15.313639 waagent[1957]: eth0 0000000A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 24 23:58:15.313639 waagent[1957]: eth0 0100000A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 24 23:58:15.313639 waagent[1957]: eth0 10813FA8 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 24 23:58:15.313639 waagent[1957]: eth0 FEA9FEA9 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 24 23:58:15.320301 waagent[1957]: 2026-04-24T23:58:15.320207Z INFO ExtHandler ExtHandler Apr 24 23:58:15.323714 waagent[1957]: 2026-04-24T23:58:15.323656Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: fb764318-b4d6-45f9-b03e-5daf63429b89 correlation 924ecf79-a48c-4e7d-b3ae-ad938f8c5997 created: 2026-04-24T23:56:50.493988Z] Apr 24 23:58:15.326439 waagent[1957]: 2026-04-24T23:58:15.325644Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Apr 24 23:58:15.327281 waagent[1957]: 2026-04-24T23:58:15.327206Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 7 ms] Apr 24 23:58:15.362498 waagent[1957]: 2026-04-24T23:58:15.361507Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 5EFFA913-DC0A-4DDE-B8E4-9A959DBDBFE8;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Apr 24 23:58:15.368500 waagent[1957]: 2026-04-24T23:58:15.368438Z INFO MonitorHandler ExtHandler Network interfaces: Apr 24 23:58:15.368500 waagent[1957]: Executing ['ip', '-a', '-o', 'link']: Apr 24 23:58:15.368500 waagent[1957]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 24 23:58:15.368500 waagent[1957]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:36:be:fc brd ff:ff:ff:ff:ff:ff Apr 24 23:58:15.368500 waagent[1957]: 3: enP32357s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:36:be:fc brd ff:ff:ff:ff:ff:ff\ altname enP32357p0s2 Apr 24 23:58:15.368500 waagent[1957]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 24 23:58:15.368500 waagent[1957]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 24 23:58:15.368500 waagent[1957]: 2: eth0 inet 10.0.0.19/24 metric 1024 brd 10.0.0.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 24 23:58:15.368500 waagent[1957]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 24 23:58:15.368500 waagent[1957]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Apr 24 23:58:15.368500 waagent[1957]: 2: eth0 inet6 fe80::7e1e:52ff:fe36:befc/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Apr 24 23:58:15.419999 waagent[1957]: 2026-04-24T23:58:15.419928Z INFO EnvHandler ExtHandler Successfully 
added Azure fabric firewall rules. Current Firewall rules: Apr 24 23:58:15.419999 waagent[1957]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:58:15.419999 waagent[1957]: pkts bytes target prot opt in out source destination Apr 24 23:58:15.419999 waagent[1957]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:58:15.419999 waagent[1957]: pkts bytes target prot opt in out source destination Apr 24 23:58:15.419999 waagent[1957]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:58:15.419999 waagent[1957]: pkts bytes target prot opt in out source destination Apr 24 23:58:15.419999 waagent[1957]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 24 23:58:15.419999 waagent[1957]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 24 23:58:15.419999 waagent[1957]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 24 23:58:15.423344 waagent[1957]: 2026-04-24T23:58:15.423286Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 24 23:58:15.423344 waagent[1957]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:58:15.423344 waagent[1957]: pkts bytes target prot opt in out source destination Apr 24 23:58:15.423344 waagent[1957]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:58:15.423344 waagent[1957]: pkts bytes target prot opt in out source destination Apr 24 23:58:15.423344 waagent[1957]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 24 23:58:15.423344 waagent[1957]: pkts bytes target prot opt in out source destination Apr 24 23:58:15.423344 waagent[1957]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 24 23:58:15.423344 waagent[1957]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 24 23:58:15.423344 waagent[1957]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 24 23:58:15.423745 waagent[1957]: 2026-04-24T23:58:15.423629Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Apr 24 23:58:23.406810 
systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 24 23:58:23.414610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:58:23.433470 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Apr 24 23:58:23.842061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:58:23.846448 (kubelet)[2194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:58:24.171709 kubelet[2194]: E0424 23:58:24.171629 2194 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:58:24.174179 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:58:24.174417 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:58:25.757772 update_engine[1710]: I20260424 23:58:25.757586 1710 update_attempter.cc:509] Updating boot flags... Apr 24 23:58:25.825451 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2213) Apr 24 23:58:25.925545 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2217) Apr 24 23:58:34.407008 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 24 23:58:34.412636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:58:34.873319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:58:34.877593 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:58:34.915912 kubelet[2275]: E0424 23:58:34.915848 2275 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:58:34.918127 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:58:34.918357 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:58:37.329988 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 24 23:58:37.337233 systemd[1]: Started sshd@0-10.0.0.19:22-4.175.71.9:50154.service - OpenSSH per-connection server daemon (4.175.71.9:50154). Apr 24 23:58:38.428925 sshd[2283]: Accepted publickey for core from 4.175.71.9 port 50154 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:58:38.430521 sshd[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:58:38.434660 systemd-logind[1702]: New session 3 of user core. Apr 24 23:58:38.441817 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 24 23:58:38.561421 systemd[1]: Started sshd@1-10.0.0.19:22-4.175.71.9:50166.service - OpenSSH per-connection server daemon (4.175.71.9:50166). Apr 24 23:58:38.674715 sshd[2288]: Accepted publickey for core from 4.175.71.9 port 50166 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:58:38.676128 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:58:38.680686 systemd-logind[1702]: New session 4 of user core. Apr 24 23:58:38.691573 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 24 23:58:38.785184 sshd[2288]: pam_unix(sshd:session): session closed for user core Apr 24 23:58:38.788890 systemd[1]: sshd@1-10.0.0.19:22-4.175.71.9:50166.service: Deactivated successfully. Apr 24 23:58:38.790627 systemd[1]: session-4.scope: Deactivated successfully. Apr 24 23:58:38.791580 systemd-logind[1702]: Session 4 logged out. Waiting for processes to exit. Apr 24 23:58:38.792689 systemd-logind[1702]: Removed session 4. Apr 24 23:58:38.808095 systemd[1]: Started sshd@2-10.0.0.19:22-4.175.71.9:50182.service - OpenSSH per-connection server daemon (4.175.71.9:50182). Apr 24 23:58:38.921604 sshd[2295]: Accepted publickey for core from 4.175.71.9 port 50182 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:58:38.923036 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:58:38.927966 systemd-logind[1702]: New session 5 of user core. Apr 24 23:58:38.933822 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 24 23:58:39.023479 sshd[2295]: pam_unix(sshd:session): session closed for user core Apr 24 23:58:39.027417 systemd[1]: sshd@2-10.0.0.19:22-4.175.71.9:50182.service: Deactivated successfully. Apr 24 23:58:39.029260 systemd[1]: session-5.scope: Deactivated successfully. Apr 24 23:58:39.029955 systemd-logind[1702]: Session 5 logged out. Waiting for processes to exit. Apr 24 23:58:39.030862 systemd-logind[1702]: Removed session 5. Apr 24 23:58:39.045158 systemd[1]: Started sshd@3-10.0.0.19:22-4.175.71.9:50190.service - OpenSSH per-connection server daemon (4.175.71.9:50190). Apr 24 23:58:39.158447 sshd[2302]: Accepted publickey for core from 4.175.71.9 port 50190 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:58:39.159004 sshd[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:58:39.163980 systemd-logind[1702]: New session 6 of user core. 
Apr 24 23:58:39.172857 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 24 23:58:39.267817 sshd[2302]: pam_unix(sshd:session): session closed for user core Apr 24 23:58:39.270890 systemd[1]: sshd@3-10.0.0.19:22-4.175.71.9:50190.service: Deactivated successfully. Apr 24 23:58:39.273019 systemd[1]: session-6.scope: Deactivated successfully. Apr 24 23:58:39.274519 systemd-logind[1702]: Session 6 logged out. Waiting for processes to exit. Apr 24 23:58:39.275715 systemd-logind[1702]: Removed session 6. Apr 24 23:58:39.289947 systemd[1]: Started sshd@4-10.0.0.19:22-4.175.71.9:50198.service - OpenSSH per-connection server daemon (4.175.71.9:50198). Apr 24 23:58:39.406220 sshd[2309]: Accepted publickey for core from 4.175.71.9 port 50198 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:58:39.407687 sshd[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:58:39.412711 systemd-logind[1702]: New session 7 of user core. Apr 24 23:58:39.415849 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 24 23:58:40.561548 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 24 23:58:40.561925 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:58:40.589430 sudo[2312]: pam_unix(sudo:session): session closed for user root Apr 24 23:58:40.604754 sshd[2309]: pam_unix(sshd:session): session closed for user core Apr 24 23:58:40.608020 systemd[1]: sshd@4-10.0.0.19:22-4.175.71.9:50198.service: Deactivated successfully. Apr 24 23:58:40.610046 systemd[1]: session-7.scope: Deactivated successfully. Apr 24 23:58:40.611580 systemd-logind[1702]: Session 7 logged out. Waiting for processes to exit. Apr 24 23:58:40.612817 systemd-logind[1702]: Removed session 7. Apr 24 23:58:40.629078 systemd[1]: Started sshd@5-10.0.0.19:22-4.175.71.9:50208.service - OpenSSH per-connection server daemon (4.175.71.9:50208). 
Apr 24 23:58:40.746876 sshd[2317]: Accepted publickey for core from 4.175.71.9 port 50208 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:58:40.748104 sshd[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:58:40.752895 systemd-logind[1702]: New session 8 of user core. Apr 24 23:58:40.760837 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 24 23:58:40.840505 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 24 23:58:40.840852 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:58:40.843973 sudo[2321]: pam_unix(sudo:session): session closed for user root Apr 24 23:58:40.848914 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 24 23:58:40.849252 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:58:40.860684 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 24 23:58:40.863140 auditctl[2324]: No rules Apr 24 23:58:40.863501 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 23:58:40.863700 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 24 23:58:40.866225 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:58:40.892100 augenrules[2342]: No rules Apr 24 23:58:40.893548 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:58:40.894740 sudo[2320]: pam_unix(sudo:session): session closed for user root Apr 24 23:58:40.910170 sshd[2317]: pam_unix(sshd:session): session closed for user core Apr 24 23:58:40.914340 systemd-logind[1702]: Session 8 logged out. Waiting for processes to exit. Apr 24 23:58:40.914738 systemd[1]: sshd@5-10.0.0.19:22-4.175.71.9:50208.service: Deactivated successfully. 
Apr 24 23:58:40.916643 systemd[1]: session-8.scope: Deactivated successfully. Apr 24 23:58:40.917489 systemd-logind[1702]: Removed session 8. Apr 24 23:58:40.932094 systemd[1]: Started sshd@6-10.0.0.19:22-4.175.71.9:50212.service - OpenSSH per-connection server daemon (4.175.71.9:50212). Apr 24 23:58:41.048148 sshd[2350]: Accepted publickey for core from 4.175.71.9 port 50212 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 24 23:58:41.049560 sshd[2350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:58:41.053470 systemd-logind[1702]: New session 9 of user core. Apr 24 23:58:41.060715 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 24 23:58:41.141228 sudo[2353]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 24 23:58:41.141606 sudo[2353]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 23:58:42.571059 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 24 23:58:42.571141 (dockerd)[2369]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 24 23:58:43.719429 dockerd[2369]: time="2026-04-24T23:58:43.718216155Z" level=info msg="Starting up" Apr 24 23:58:44.111969 dockerd[2369]: time="2026-04-24T23:58:44.111703457Z" level=info msg="Loading containers: start." Apr 24 23:58:44.256433 kernel: Initializing XFRM netlink socket Apr 24 23:58:44.413874 systemd-networkd[1354]: docker0: Link UP Apr 24 23:58:44.444617 dockerd[2369]: time="2026-04-24T23:58:44.444581900Z" level=info msg="Loading containers: done." 
Apr 24 23:58:44.502791 dockerd[2369]: time="2026-04-24T23:58:44.502744772Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 24 23:58:44.503018 dockerd[2369]: time="2026-04-24T23:58:44.502864873Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 24 23:58:44.503018 dockerd[2369]: time="2026-04-24T23:58:44.502990375Z" level=info msg="Daemon has completed initialization" Apr 24 23:58:44.562468 dockerd[2369]: time="2026-04-24T23:58:44.561895955Z" level=info msg="API listen on /run/docker.sock" Apr 24 23:58:44.562007 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 24 23:58:45.156710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 24 23:58:45.162653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:58:45.179755 containerd[1719]: time="2026-04-24T23:58:45.179586086Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 24 23:58:45.314910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:58:45.319348 (kubelet)[2513]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:58:45.357950 kubelet[2513]: E0424 23:58:45.357904 2513 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:58:45.359577 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:58:45.359764 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:58:46.441827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount262990160.mount: Deactivated successfully. Apr 24 23:58:47.965960 containerd[1719]: time="2026-04-24T23:58:47.965903656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:47.968551 containerd[1719]: time="2026-04-24T23:58:47.968364484Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27100522" Apr 24 23:58:47.971928 containerd[1719]: time="2026-04-24T23:58:47.971847124Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:47.977089 containerd[1719]: time="2026-04-24T23:58:47.976327876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:47.977812 containerd[1719]: time="2026-04-24T23:58:47.977772993Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id 
\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 2.798134506s" Apr 24 23:58:47.977919 containerd[1719]: time="2026-04-24T23:58:47.977819193Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 24 23:58:47.978451 containerd[1719]: time="2026-04-24T23:58:47.978416000Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 24 23:58:49.595447 containerd[1719]: time="2026-04-24T23:58:49.595378869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:49.597893 containerd[1719]: time="2026-04-24T23:58:49.597579194Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252746" Apr 24 23:58:49.600799 containerd[1719]: time="2026-04-24T23:58:49.600767831Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:49.606143 containerd[1719]: time="2026-04-24T23:58:49.605339784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:49.606911 containerd[1719]: time="2026-04-24T23:58:49.606873801Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 1.628418901s" Apr 24 23:58:49.606984 containerd[1719]: time="2026-04-24T23:58:49.606917202Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 24 23:58:49.607675 containerd[1719]: time="2026-04-24T23:58:49.607636010Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 24 23:58:50.734592 containerd[1719]: time="2026-04-24T23:58:50.734535432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:50.737723 containerd[1719]: time="2026-04-24T23:58:50.737037064Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810899" Apr 24 23:58:50.741218 containerd[1719]: time="2026-04-24T23:58:50.740797113Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:50.746721 containerd[1719]: time="2026-04-24T23:58:50.746667089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:50.748431 containerd[1719]: time="2026-04-24T23:58:50.747777003Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.140101493s" Apr 24 23:58:50.748431 
containerd[1719]: time="2026-04-24T23:58:50.747816204Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 24 23:58:50.748757 containerd[1719]: time="2026-04-24T23:58:50.748728515Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 24 23:58:51.938319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1453613824.mount: Deactivated successfully. Apr 24 23:58:52.521594 containerd[1719]: time="2026-04-24T23:58:52.521531709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:52.569480 containerd[1719]: time="2026-04-24T23:58:52.569409874Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972962" Apr 24 23:58:52.572338 containerd[1719]: time="2026-04-24T23:58:52.572270826Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:52.615572 containerd[1719]: time="2026-04-24T23:58:52.615509107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:58:52.616550 containerd[1719]: time="2026-04-24T23:58:52.616294521Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 1.867526605s" Apr 24 23:58:52.616550 containerd[1719]: time="2026-04-24T23:58:52.616337822Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 24 23:58:52.617219 containerd[1719]: time="2026-04-24T23:58:52.617169637Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 24 23:58:54.074049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1901574980.mount: Deactivated successfully. Apr 24 23:58:55.406755 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 24 23:58:55.411659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:58:55.710219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:58:55.714574 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:58:55.747954 kubelet[2606]: E0424 23:58:55.747906 2606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:58:55.750167 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:58:55.750441 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:59:05.906811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 24 23:59:05.912647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:59:06.027885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:59:06.032132 (kubelet)[2645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:59:06.068513 kubelet[2645]: E0424 23:59:06.068438 2645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:59:06.070787 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:59:06.071024 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:59:10.783755 containerd[1719]: time="2026-04-24T23:59:10.783700095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:10.786047 containerd[1719]: time="2026-04-24T23:59:10.785838223Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Apr 24 23:59:10.789076 containerd[1719]: time="2026-04-24T23:59:10.789008565Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:10.793800 containerd[1719]: time="2026-04-24T23:59:10.793666126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:10.796306 containerd[1719]: time="2026-04-24T23:59:10.795498650Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", 
repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 18.178274311s" Apr 24 23:59:10.796306 containerd[1719]: time="2026-04-24T23:59:10.795538850Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 24 23:59:10.796965 containerd[1719]: time="2026-04-24T23:59:10.796926868Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 24 23:59:11.826778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3828818775.mount: Deactivated successfully. Apr 24 23:59:12.020282 containerd[1719]: time="2026-04-24T23:59:12.020219102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:12.023952 containerd[1719]: time="2026-04-24T23:59:12.023755048Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Apr 24 23:59:12.066991 containerd[1719]: time="2026-04-24T23:59:12.066663711Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:12.114925 containerd[1719]: time="2026-04-24T23:59:12.114775441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:12.116246 containerd[1719]: time="2026-04-24T23:59:12.115794554Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" 
in 1.318826985s" Apr 24 23:59:12.116246 containerd[1719]: time="2026-04-24T23:59:12.115835855Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 24 23:59:12.116422 containerd[1719]: time="2026-04-24T23:59:12.116384762Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 24 23:59:13.629153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1468177467.mount: Deactivated successfully. Apr 24 23:59:16.156781 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 24 23:59:16.163992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:59:16.285000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:59:16.289722 (kubelet)[2700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:59:16.326591 kubelet[2700]: E0424 23:59:16.326503 2700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:59:16.328863 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:59:16.329083 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 24 23:59:26.013248 containerd[1719]: time="2026-04-24T23:59:26.013180617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:26.059782 containerd[1719]: time="2026-04-24T23:59:26.059683812Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874825" Apr 24 23:59:26.063031 containerd[1719]: time="2026-04-24T23:59:26.062972654Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:26.116231 containerd[1719]: time="2026-04-24T23:59:26.115377125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:26.117306 containerd[1719]: time="2026-04-24T23:59:26.116972846Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 14.000541783s" Apr 24 23:59:26.117306 containerd[1719]: time="2026-04-24T23:59:26.117008146Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 24 23:59:26.371563 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 24 23:59:26.377716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:59:26.524639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:59:26.529820 (kubelet)[2784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:59:26.574428 kubelet[2784]: E0424 23:59:26.573715 2784 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:59:26.577020 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:59:26.577188 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:59:29.572794 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:59:29.578677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:59:29.610804 systemd[1]: Reloading requested from client PID 2799 ('systemctl') (unit session-9.scope)... Apr 24 23:59:29.610820 systemd[1]: Reloading... Apr 24 23:59:29.750451 zram_generator::config[2839]: No configuration found. Apr 24 23:59:29.872281 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:59:29.953245 systemd[1]: Reloading finished in 341 ms. Apr 24 23:59:30.024781 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 24 23:59:30.024892 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 24 23:59:30.025169 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:59:30.030741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:59:31.178870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:59:31.185118 (kubelet)[2908]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:59:31.229785 kubelet[2908]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 23:59:31.230191 kubelet[2908]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:59:31.230296 kubelet[2908]: I0424 23:59:31.230263 2908 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:59:31.836978 kubelet[2908]: I0424 23:59:31.836939 2908 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 24 23:59:31.837245 kubelet[2908]: I0424 23:59:31.837156 2908 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:59:32.471448 kubelet[2908]: I0424 23:59:32.470791 2908 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 24 23:59:32.471448 kubelet[2908]: I0424 23:59:32.470849 2908 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 24 23:59:32.471448 kubelet[2908]: I0424 23:59:32.471377 2908 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:59:35.229434 kubelet[2908]: E0424 23:59:35.229165 2908 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 23:59:35.230182 kubelet[2908]: I0424 23:59:35.230150 2908 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:59:35.234961 kubelet[2908]: E0424 23:59:35.234655 2908 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:59:35.234961 kubelet[2908]: I0424 23:59:35.234715 2908 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 24 23:59:35.238316 kubelet[2908]: I0424 23:59:35.238272 2908 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 24 23:59:35.239378 kubelet[2908]: I0424 23:59:35.239334 2908 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:59:35.239573 kubelet[2908]: I0424 23:59:35.239378 2908 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-3087b9d021","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 23:59:35.239740 kubelet[2908]: I0424 23:59:35.239581 2908 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 
23:59:35.239740 kubelet[2908]: I0424 23:59:35.239595 2908 container_manager_linux.go:306] "Creating device plugin manager" Apr 24 23:59:35.239740 kubelet[2908]: I0424 23:59:35.239706 2908 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 24 23:59:35.267901 kubelet[2908]: I0424 23:59:35.267876 2908 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:59:35.268101 kubelet[2908]: I0424 23:59:35.268086 2908 kubelet.go:475] "Attempting to sync node with API server" Apr 24 23:59:35.268187 kubelet[2908]: I0424 23:59:35.268114 2908 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:59:35.268187 kubelet[2908]: I0424 23:59:35.268150 2908 kubelet.go:387] "Adding apiserver pod source" Apr 24 23:59:35.268187 kubelet[2908]: I0424 23:59:35.268171 2908 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:59:35.273368 kubelet[2908]: E0424 23:59:35.273105 2908 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:59:35.273731 kubelet[2908]: I0424 23:59:35.273631 2908 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:59:35.274573 kubelet[2908]: I0424 23:59:35.274471 2908 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:59:35.274573 kubelet[2908]: I0424 23:59:35.274524 2908 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 24 23:59:35.274870 kubelet[2908]: W0424 23:59:35.274738 
2908 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 24 23:59:35.276065 kubelet[2908]: E0424 23:59:35.275769 2908 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-3087b9d021&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:59:35.278259 kubelet[2908]: I0424 23:59:35.278226 2908 server.go:1262] "Started kubelet" Apr 24 23:59:35.280805 kubelet[2908]: I0424 23:59:35.280303 2908 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 23:59:35.281923 kubelet[2908]: I0424 23:59:35.281695 2908 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:59:35.286242 kubelet[2908]: I0424 23:59:35.285931 2908 server.go:310] "Adding debug handlers to kubelet server" Apr 24 23:59:35.287724 kubelet[2908]: E0424 23:59:35.286142 2908 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-3087b9d021.18a97074e19eec35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-3087b9d021,UID:ci-4081.3.6-n-3087b9d021,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-3087b9d021,},FirstTimestamp:2026-04-24 23:59:35.278177333 +0000 UTC m=+4.088748438,LastTimestamp:2026-04-24 23:59:35.278177333 +0000 UTC m=+4.088748438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-3087b9d021,}" Apr 24 23:59:35.293577 
kubelet[2908]: I0424 23:59:35.293473 2908 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:59:35.293577 kubelet[2908]: I0424 23:59:35.293565 2908 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 24 23:59:35.293881 kubelet[2908]: I0424 23:59:35.293783 2908 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:59:35.294196 kubelet[2908]: I0424 23:59:35.294093 2908 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:59:35.296065 kubelet[2908]: I0424 23:59:35.296036 2908 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 24 23:59:35.296274 kubelet[2908]: E0424 23:59:35.296249 2908 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-3087b9d021\" not found" Apr 24 23:59:35.298147 kubelet[2908]: I0424 23:59:35.298122 2908 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 24 23:59:35.298229 kubelet[2908]: I0424 23:59:35.298184 2908 reconciler.go:29] "Reconciler: start to sync state" Apr 24 23:59:35.299600 kubelet[2908]: E0424 23:59:35.298886 2908 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:59:35.299600 kubelet[2908]: E0424 23:59:35.298986 2908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-3087b9d021?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="200ms" Apr 24 23:59:35.301492 
kubelet[2908]: E0424 23:59:35.301465 2908 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:59:35.303242 kubelet[2908]: I0424 23:59:35.303222 2908 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:59:35.303364 kubelet[2908]: I0424 23:59:35.303352 2908 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:59:35.303648 kubelet[2908]: I0424 23:59:35.303627 2908 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:59:35.337783 kubelet[2908]: I0424 23:59:35.337732 2908 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 24 23:59:35.340684 kubelet[2908]: I0424 23:59:35.340654 2908 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 24 23:59:35.340784 kubelet[2908]: I0424 23:59:35.340690 2908 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 24 23:59:35.340784 kubelet[2908]: I0424 23:59:35.340717 2908 kubelet.go:2428] "Starting kubelet main sync loop" Apr 24 23:59:35.341509 kubelet[2908]: E0424 23:59:35.340923 2908 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:59:35.342959 kubelet[2908]: E0424 23:59:35.342926 2908 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:59:35.377219 kubelet[2908]: I0424 23:59:35.377182 2908 cpu_manager.go:221] "Starting CPU manager" policy="none" 
Apr 24 23:59:35.377219 kubelet[2908]: I0424 23:59:35.377218 2908 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:59:35.377377 kubelet[2908]: I0424 23:59:35.377239 2908 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:59:35.382538 kubelet[2908]: I0424 23:59:35.382507 2908 policy_none.go:49] "None policy: Start" Apr 24 23:59:35.382538 kubelet[2908]: I0424 23:59:35.382531 2908 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 24 23:59:35.382674 kubelet[2908]: I0424 23:59:35.382548 2908 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 24 23:59:35.386708 kubelet[2908]: I0424 23:59:35.386684 2908 policy_none.go:47] "Start" Apr 24 23:59:35.391513 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 24 23:59:35.397351 kubelet[2908]: E0424 23:59:35.397310 2908 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-3087b9d021\" not found" Apr 24 23:59:35.399912 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 24 23:59:35.403397 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 24 23:59:35.415136 kubelet[2908]: E0424 23:59:35.415113 2908 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:59:35.415940 kubelet[2908]: I0424 23:59:35.415328 2908 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:59:35.415940 kubelet[2908]: I0424 23:59:35.415349 2908 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:59:35.415940 kubelet[2908]: I0424 23:59:35.415703 2908 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:59:35.417724 kubelet[2908]: E0424 23:59:35.417704 2908 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:59:35.417870 kubelet[2908]: E0424 23:59:35.417856 2908 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-3087b9d021\" not found" Apr 24 23:59:35.458159 systemd[1]: Created slice kubepods-burstable-pod06163201dca1cb5337deff7d0e623aac.slice - libcontainer container kubepods-burstable-pod06163201dca1cb5337deff7d0e623aac.slice. Apr 24 23:59:35.468118 kubelet[2908]: E0424 23:59:35.468083 2908 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-3087b9d021\" not found" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:35.472338 systemd[1]: Created slice kubepods-burstable-pod5f0e9ee66d919a2e8ab7a1171b97ac60.slice - libcontainer container kubepods-burstable-pod5f0e9ee66d919a2e8ab7a1171b97ac60.slice. 
Apr 24 23:59:35.480884 kubelet[2908]: E0424 23:59:35.480554 2908 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-3087b9d021\" not found" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:35.484751 systemd[1]: Created slice kubepods-burstable-pod5cabb0160f16c489ea29538b45990858.slice - libcontainer container kubepods-burstable-pod5cabb0160f16c489ea29538b45990858.slice. Apr 24 23:59:35.486645 kubelet[2908]: E0424 23:59:35.486620 2908 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-3087b9d021\" not found" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:35.500041 kubelet[2908]: I0424 23:59:35.499861 2908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f0e9ee66d919a2e8ab7a1171b97ac60-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-3087b9d021\" (UID: \"5f0e9ee66d919a2e8ab7a1171b97ac60\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:35.500041 kubelet[2908]: I0424 23:59:35.499910 2908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5cabb0160f16c489ea29538b45990858-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-3087b9d021\" (UID: \"5cabb0160f16c489ea29538b45990858\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:35.500041 kubelet[2908]: I0424 23:59:35.499934 2908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5cabb0160f16c489ea29538b45990858-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-3087b9d021\" (UID: \"5cabb0160f16c489ea29538b45990858\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-3087b9d021" 
Apr 24 23:59:35.500041 kubelet[2908]: I0424 23:59:35.499956 2908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/06163201dca1cb5337deff7d0e623aac-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-3087b9d021\" (UID: \"06163201dca1cb5337deff7d0e623aac\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:35.500041 kubelet[2908]: I0424 23:59:35.499978 2908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5cabb0160f16c489ea29538b45990858-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-3087b9d021\" (UID: \"5cabb0160f16c489ea29538b45990858\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:35.500706 kubelet[2908]: I0424 23:59:35.500529 2908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5cabb0160f16c489ea29538b45990858-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-3087b9d021\" (UID: \"5cabb0160f16c489ea29538b45990858\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:35.500706 kubelet[2908]: I0424 23:59:35.500569 2908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5cabb0160f16c489ea29538b45990858-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-3087b9d021\" (UID: \"5cabb0160f16c489ea29538b45990858\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:35.500706 kubelet[2908]: I0424 23:59:35.500594 2908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f0e9ee66d919a2e8ab7a1171b97ac60-ca-certs\") pod 
\"kube-apiserver-ci-4081.3.6-n-3087b9d021\" (UID: \"5f0e9ee66d919a2e8ab7a1171b97ac60\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:35.500706 kubelet[2908]: E0424 23:59:35.500565 2908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-3087b9d021?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="400ms" Apr 24 23:59:35.500706 kubelet[2908]: I0424 23:59:35.500651 2908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f0e9ee66d919a2e8ab7a1171b97ac60-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-3087b9d021\" (UID: \"5f0e9ee66d919a2e8ab7a1171b97ac60\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:35.518105 kubelet[2908]: I0424 23:59:35.518083 2908 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:35.518502 kubelet[2908]: E0424 23:59:35.518450 2908 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:35.721209 kubelet[2908]: I0424 23:59:35.721173 2908 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:35.721633 kubelet[2908]: E0424 23:59:35.721578 2908 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:35.775343 containerd[1719]: time="2026-04-24T23:59:35.775221641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-3087b9d021,Uid:06163201dca1cb5337deff7d0e623aac,Namespace:kube-system,Attempt:0,}" Apr 
24 23:59:35.824085 containerd[1719]: time="2026-04-24T23:59:35.824040200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-3087b9d021,Uid:5f0e9ee66d919a2e8ab7a1171b97ac60,Namespace:kube-system,Attempt:0,}" Apr 24 23:59:35.868028 containerd[1719]: time="2026-04-24T23:59:35.867957392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-3087b9d021,Uid:5cabb0160f16c489ea29538b45990858,Namespace:kube-system,Attempt:0,}" Apr 24 23:59:35.901079 kubelet[2908]: E0424 23:59:35.901019 2908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-3087b9d021?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="800ms" Apr 24 23:59:36.124572 kubelet[2908]: I0424 23:59:36.124454 2908 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:36.124917 kubelet[2908]: E0424 23:59:36.124875 2908 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:36.206334 kubelet[2908]: E0424 23:59:36.206274 2908 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-3087b9d021&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:59:36.348876 kubelet[2908]: E0424 23:59:36.348802 2908 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:59:36.459231 kubelet[2908]: E0424 23:59:36.459187 2908 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:59:36.487594 kubelet[2908]: E0424 23:59:36.487552 2908 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:59:36.702551 kubelet[2908]: E0424 23:59:36.702393 2908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-3087b9d021?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="1.6s" Apr 24 23:59:36.928065 kubelet[2908]: I0424 23:59:36.927638 2908 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:36.928065 kubelet[2908]: E0424 23:59:36.928011 2908 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:37.409383 kubelet[2908]: E0424 23:59:37.409343 2908 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 23:59:38.096640 kubelet[2908]: E0424 23:59:38.096596 2908 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-3087b9d021&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:59:38.303419 kubelet[2908]: E0424 23:59:38.303364 2908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-3087b9d021?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="3.2s" Apr 24 23:59:38.530716 kubelet[2908]: I0424 23:59:38.530683 2908 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:38.531224 kubelet[2908]: E0424 23:59:38.531123 2908 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:38.885266 kubelet[2908]: E0424 23:59:38.885143 2908 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:59:38.980038 kubelet[2908]: E0424 23:59:38.979988 2908 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Apr 24 23:59:39.399554 kubelet[2908]: E0424 23:59:39.399495 2908 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:59:39.427614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount872279870.mount: Deactivated successfully. Apr 24 23:59:39.666383 containerd[1719]: time="2026-04-24T23:59:39.666314455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:59:39.762093 containerd[1719]: time="2026-04-24T23:59:39.761756443Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:59:39.808398 containerd[1719]: time="2026-04-24T23:59:39.808357972Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:59:39.811408 containerd[1719]: time="2026-04-24T23:59:39.811356412Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:59:39.861846 containerd[1719]: time="2026-04-24T23:59:39.861800785Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:59:39.918938 containerd[1719]: time="2026-04-24T23:59:39.918453522Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, 
bytes read=312064" Apr 24 23:59:39.922582 containerd[1719]: time="2026-04-24T23:59:39.922514675Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:59:39.965466 containerd[1719]: time="2026-04-24T23:59:39.965152729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:59:39.966598 containerd[1719]: time="2026-04-24T23:59:39.966079241Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.14194744s" Apr 24 23:59:39.968003 containerd[1719]: time="2026-04-24T23:59:39.967885665Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.099845072s" Apr 24 23:59:40.065435 containerd[1719]: time="2026-04-24T23:59:40.064399220Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.289083278s" Apr 24 23:59:41.288025 kubelet[2908]: E0424 23:59:41.287919 2908 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: 
connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-3087b9d021.18a97074e19eec35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-3087b9d021,UID:ci-4081.3.6-n-3087b9d021,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-3087b9d021,},FirstTimestamp:2026-04-24 23:59:35.278177333 +0000 UTC m=+4.088748438,LastTimestamp:2026-04-24 23:59:35.278177333 +0000 UTC m=+4.088748438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-3087b9d021,}" Apr 24 23:59:41.440845 containerd[1719]: time="2026-04-24T23:59:41.440736124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:59:41.440845 containerd[1719]: time="2026-04-24T23:59:41.440801325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:59:41.441425 containerd[1719]: time="2026-04-24T23:59:41.440824125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:59:41.441425 containerd[1719]: time="2026-04-24T23:59:41.440916326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:59:41.442026 containerd[1719]: time="2026-04-24T23:59:41.441740137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:59:41.442026 containerd[1719]: time="2026-04-24T23:59:41.441785938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:59:41.442026 containerd[1719]: time="2026-04-24T23:59:41.441811238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:59:41.442026 containerd[1719]: time="2026-04-24T23:59:41.441940040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:59:41.447159 containerd[1719]: time="2026-04-24T23:59:41.445111481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:59:41.447159 containerd[1719]: time="2026-04-24T23:59:41.445156981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:59:41.447159 containerd[1719]: time="2026-04-24T23:59:41.445171582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:59:41.447159 containerd[1719]: time="2026-04-24T23:59:41.445248183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:59:41.500656 systemd[1]: Started cri-containerd-8f77ec8c027d13ce1af5aebe1c72ec499f9159a45ae003867e8fa17accb1bfc4.scope - libcontainer container 8f77ec8c027d13ce1af5aebe1c72ec499f9159a45ae003867e8fa17accb1bfc4. 
Apr 24 23:59:41.510200 kubelet[2908]: E0424 23:59:41.506219 2908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-3087b9d021?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="6.4s" Apr 24 23:59:41.513951 systemd[1]: Started cri-containerd-9ff605302a0e52772e623d34f67f1f22598c2ace6c5da153c1d4a75a90378947.scope - libcontainer container 9ff605302a0e52772e623d34f67f1f22598c2ace6c5da153c1d4a75a90378947. Apr 24 23:59:41.517305 systemd[1]: Started cri-containerd-f11bab4498885e17a44736d1880930bd9c5a32794c0349acb3c1a9ed561bb804.scope - libcontainer container f11bab4498885e17a44736d1880930bd9c5a32794c0349acb3c1a9ed561bb804. Apr 24 23:59:41.578820 containerd[1719]: time="2026-04-24T23:59:41.578689418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-3087b9d021,Uid:5cabb0160f16c489ea29538b45990858,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ff605302a0e52772e623d34f67f1f22598c2ace6c5da153c1d4a75a90378947\"" Apr 24 23:59:41.601192 containerd[1719]: time="2026-04-24T23:59:41.600979708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-3087b9d021,Uid:5f0e9ee66d919a2e8ab7a1171b97ac60,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f77ec8c027d13ce1af5aebe1c72ec499f9159a45ae003867e8fa17accb1bfc4\"" Apr 24 23:59:41.604425 containerd[1719]: time="2026-04-24T23:59:41.604180550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-3087b9d021,Uid:06163201dca1cb5337deff7d0e623aac,Namespace:kube-system,Attempt:0,} returns sandbox id \"f11bab4498885e17a44736d1880930bd9c5a32794c0349acb3c1a9ed561bb804\"" Apr 24 23:59:41.652293 kubelet[2908]: E0424 23:59:41.652254 2908 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing 
request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 23:59:41.734014 kubelet[2908]: I0424 23:59:41.733987 2908 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:41.734317 kubelet[2908]: E0424 23:59:41.734290 2908 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:42.710527 kubelet[2908]: E0424 23:59:42.710480 2908 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:59:43.513859 kubelet[2908]: E0424 23:59:43.066291 2908 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-3087b9d021&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:59:43.516237 containerd[1719]: time="2026-04-24T23:59:43.516191322Z" level=info msg="CreateContainer within sandbox \"f11bab4498885e17a44736d1880930bd9c5a32794c0349acb3c1a9ed561bb804\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 23:59:43.672020 containerd[1719]: time="2026-04-24T23:59:43.671661944Z" level=info msg="CreateContainer within sandbox \"8f77ec8c027d13ce1af5aebe1c72ec499f9159a45ae003867e8fa17accb1bfc4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 23:59:43.721493 
containerd[1719]: time="2026-04-24T23:59:43.721282490Z" level=info msg="CreateContainer within sandbox \"9ff605302a0e52772e623d34f67f1f22598c2ace6c5da153c1d4a75a90378947\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 23:59:44.166077 kubelet[2908]: E0424 23:59:44.166026 2908 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:59:44.171476 kubelet[2908]: E0424 23:59:44.171433 2908 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:59:44.425748 containerd[1719]: time="2026-04-24T23:59:44.425381449Z" level=info msg="CreateContainer within sandbox \"f11bab4498885e17a44736d1880930bd9c5a32794c0349acb3c1a9ed561bb804\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6b8bfcbf98970f82e4f71c7c6b453826962b4120bebc69a8c720513b21194fcf\"" Apr 24 23:59:44.426280 containerd[1719]: time="2026-04-24T23:59:44.426242660Z" level=info msg="StartContainer for \"6b8bfcbf98970f82e4f71c7c6b453826962b4120bebc69a8c720513b21194fcf\"" Apr 24 23:59:44.455569 systemd[1]: Started cri-containerd-6b8bfcbf98970f82e4f71c7c6b453826962b4120bebc69a8c720513b21194fcf.scope - libcontainer container 6b8bfcbf98970f82e4f71c7c6b453826962b4120bebc69a8c720513b21194fcf. 
Apr 24 23:59:44.558773 containerd[1719]: time="2026-04-24T23:59:44.558458580Z" level=info msg="CreateContainer within sandbox \"8f77ec8c027d13ce1af5aebe1c72ec499f9159a45ae003867e8fa17accb1bfc4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"be621f372cb816e3e3818fb3aef7601d89e098fcc68616ad75db8ff26e47f6de\"" Apr 24 23:59:44.558773 containerd[1719]: time="2026-04-24T23:59:44.558577181Z" level=info msg="StartContainer for \"6b8bfcbf98970f82e4f71c7c6b453826962b4120bebc69a8c720513b21194fcf\" returns successfully" Apr 24 23:59:44.560270 containerd[1719]: time="2026-04-24T23:59:44.560015900Z" level=info msg="StartContainer for \"be621f372cb816e3e3818fb3aef7601d89e098fcc68616ad75db8ff26e47f6de\"" Apr 24 23:59:44.563618 containerd[1719]: time="2026-04-24T23:59:44.563584446Z" level=info msg="CreateContainer within sandbox \"9ff605302a0e52772e623d34f67f1f22598c2ace6c5da153c1d4a75a90378947\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a57f909e5f4fc35f5a5c6490ec5d6c1efee4a4d5b3432f84711dae0984e5672c\"" Apr 24 23:59:44.564053 containerd[1719]: time="2026-04-24T23:59:44.564024352Z" level=info msg="StartContainer for \"a57f909e5f4fc35f5a5c6490ec5d6c1efee4a4d5b3432f84711dae0984e5672c\"" Apr 24 23:59:44.610082 systemd[1]: Started cri-containerd-be621f372cb816e3e3818fb3aef7601d89e098fcc68616ad75db8ff26e47f6de.scope - libcontainer container be621f372cb816e3e3818fb3aef7601d89e098fcc68616ad75db8ff26e47f6de. Apr 24 23:59:44.617565 systemd[1]: Started cri-containerd-a57f909e5f4fc35f5a5c6490ec5d6c1efee4a4d5b3432f84711dae0984e5672c.scope - libcontainer container a57f909e5f4fc35f5a5c6490ec5d6c1efee4a4d5b3432f84711dae0984e5672c. 
Apr 24 23:59:44.693375 containerd[1719]: time="2026-04-24T23:59:44.693258833Z" level=info msg="StartContainer for \"be621f372cb816e3e3818fb3aef7601d89e098fcc68616ad75db8ff26e47f6de\" returns successfully" Apr 24 23:59:44.729946 containerd[1719]: time="2026-04-24T23:59:44.729899610Z" level=info msg="StartContainer for \"a57f909e5f4fc35f5a5c6490ec5d6c1efee4a4d5b3432f84711dae0984e5672c\" returns successfully" Apr 24 23:59:45.371788 kubelet[2908]: E0424 23:59:45.371751 2908 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-3087b9d021\" not found" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:45.374430 kubelet[2908]: E0424 23:59:45.372590 2908 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-3087b9d021\" not found" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:45.376729 kubelet[2908]: E0424 23:59:45.376702 2908 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-3087b9d021\" not found" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:45.418544 kubelet[2908]: E0424 23:59:45.418444 2908 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-3087b9d021\" not found" Apr 24 23:59:46.348342 kubelet[2908]: E0424 23:59:46.348297 2908 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.6-n-3087b9d021" not found Apr 24 23:59:46.379445 kubelet[2908]: E0424 23:59:46.379153 2908 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-3087b9d021\" not found" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:46.379445 kubelet[2908]: E0424 23:59:46.379160 2908 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"ci-4081.3.6-n-3087b9d021\" not found" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:46.379935 kubelet[2908]: E0424 23:59:46.379585 2908 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-3087b9d021\" not found" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:46.762586 kubelet[2908]: E0424 23:59:46.762541 2908 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.6-n-3087b9d021" not found Apr 24 23:59:47.362965 kubelet[2908]: E0424 23:59:47.362921 2908 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.6-n-3087b9d021" not found Apr 24 23:59:47.385483 kubelet[2908]: E0424 23:59:47.383491 2908 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-3087b9d021\" not found" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:47.385483 kubelet[2908]: E0424 23:59:47.383589 2908 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-3087b9d021\" not found" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:47.882325 kubelet[2908]: E0424 23:59:47.882293 2908 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-3087b9d021\" not found" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:47.913563 kubelet[2908]: E0424 23:59:47.913521 2908 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-3087b9d021\" not found" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:48.137525 kubelet[2908]: I0424 23:59:48.137386 2908 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:48.150309 kubelet[2908]: I0424 23:59:48.150099 
2908 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:48.150309 kubelet[2908]: E0424 23:59:48.150141 2908 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-3087b9d021\": node \"ci-4081.3.6-n-3087b9d021\" not found" Apr 24 23:59:48.160646 kubelet[2908]: E0424 23:59:48.160603 2908 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-3087b9d021\" not found" Apr 24 23:59:48.261470 kubelet[2908]: E0424 23:59:48.261421 2908 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-3087b9d021\" not found" Apr 24 23:59:48.339389 systemd[1]: Reloading requested from client PID 3191 ('systemctl') (unit session-9.scope)... Apr 24 23:59:48.339450 systemd[1]: Reloading... Apr 24 23:59:48.362456 kubelet[2908]: E0424 23:59:48.362371 2908 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-3087b9d021\" not found" Apr 24 23:59:48.433438 zram_generator::config[3230]: No configuration found. Apr 24 23:59:48.464275 kubelet[2908]: E0424 23:59:48.463686 2908 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-3087b9d021\" not found" Apr 24 23:59:48.564450 kubelet[2908]: E0424 23:59:48.564390 2908 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-3087b9d021\" not found" Apr 24 23:59:48.567253 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:59:48.665293 kubelet[2908]: E0424 23:59:48.665249 2908 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-3087b9d021\" not found" Apr 24 23:59:48.680129 systemd[1]: Reloading finished in 340 ms. 
Apr 24 23:59:48.723377 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:59:48.725433 kubelet[2908]: I0424 23:59:48.724207 2908 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:59:48.743022 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 23:59:48.743399 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:59:48.743488 systemd[1]: kubelet.service: Consumed 1.127s CPU time, 124.7M memory peak, 0B memory swap peak. Apr 24 23:59:48.748755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:59:48.869945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:59:48.881505 (kubelet)[3298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:59:48.924958 kubelet[3298]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 23:59:48.924958 kubelet[3298]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 23:59:48.925396 kubelet[3298]: I0424 23:59:48.925017 3298 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:59:48.930586 kubelet[3298]: I0424 23:59:48.930556 3298 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 24 23:59:48.930586 kubelet[3298]: I0424 23:59:48.930579 3298 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:59:48.930743 kubelet[3298]: I0424 23:59:48.930603 3298 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 24 23:59:48.930743 kubelet[3298]: I0424 23:59:48.930614 3298 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 23:59:48.931129 kubelet[3298]: I0424 23:59:48.931088 3298 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:59:48.934010 kubelet[3298]: I0424 23:59:48.933758 3298 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 23:59:48.936032 kubelet[3298]: I0424 23:59:48.936006 3298 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:59:48.939009 kubelet[3298]: E0424 23:59:48.938979 3298 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:59:48.939098 kubelet[3298]: I0424 23:59:48.939023 3298 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 24 23:59:48.942540 kubelet[3298]: I0424 23:59:48.942507 3298 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 24 23:59:48.942769 kubelet[3298]: I0424 23:59:48.942716 3298 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:59:48.942928 kubelet[3298]: I0424 23:59:48.942774 3298 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-3087b9d021","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 23:59:48.943072 kubelet[3298]: I0424 23:59:48.942929 3298 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 
23:59:48.943072 kubelet[3298]: I0424 23:59:48.942943 3298 container_manager_linux.go:306] "Creating device plugin manager" Apr 24 23:59:48.943072 kubelet[3298]: I0424 23:59:48.942971 3298 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 24 23:59:48.943192 kubelet[3298]: I0424 23:59:48.943169 3298 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:59:48.945897 kubelet[3298]: I0424 23:59:48.943327 3298 kubelet.go:475] "Attempting to sync node with API server" Apr 24 23:59:48.945897 kubelet[3298]: I0424 23:59:48.943367 3298 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:59:48.945897 kubelet[3298]: I0424 23:59:48.943393 3298 kubelet.go:387] "Adding apiserver pod source" Apr 24 23:59:48.945897 kubelet[3298]: I0424 23:59:48.943415 3298 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:59:48.946127 kubelet[3298]: I0424 23:59:48.945398 3298 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:59:48.946894 kubelet[3298]: I0424 23:59:48.946876 3298 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:59:48.947008 kubelet[3298]: I0424 23:59:48.946999 3298 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 24 23:59:48.950006 kubelet[3298]: I0424 23:59:48.949989 3298 server.go:1262] "Started kubelet" Apr 24 23:59:48.952392 kubelet[3298]: I0424 23:59:48.952379 3298 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 23:59:48.961433 kubelet[3298]: I0424 23:59:48.960939 3298 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:59:48.966846 kubelet[3298]: I0424 23:59:48.966806 3298 ratelimit.go:56] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:59:48.966932 kubelet[3298]: I0424 23:59:48.966899 3298 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 24 23:59:48.967218 kubelet[3298]: I0424 23:59:48.967196 3298 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:59:48.978491 kubelet[3298]: I0424 23:59:48.978024 3298 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:59:48.981126 kubelet[3298]: I0424 23:59:48.981022 3298 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 24 23:59:48.981344 kubelet[3298]: E0424 23:59:48.981301 3298 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-3087b9d021\" not found" Apr 24 23:59:48.981756 kubelet[3298]: I0424 23:59:48.981626 3298 server.go:310] "Adding debug handlers to kubelet server" Apr 24 23:59:48.981756 kubelet[3298]: I0424 23:59:48.981625 3298 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 24 23:59:48.983854 kubelet[3298]: I0424 23:59:48.983539 3298 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 24 23:59:48.983854 kubelet[3298]: I0424 23:59:48.983568 3298 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 24 23:59:48.983854 kubelet[3298]: I0424 23:59:48.983596 3298 kubelet.go:2428] "Starting kubelet main sync loop" Apr 24 23:59:48.983854 kubelet[3298]: E0424 23:59:48.983642 3298 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:59:48.984143 kubelet[3298]: I0424 23:59:48.984126 3298 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 24 23:59:48.984284 kubelet[3298]: I0424 23:59:48.984271 3298 reconciler.go:29] "Reconciler: start to sync state" Apr 24 23:59:48.995622 kubelet[3298]: I0424 23:59:48.995596 3298 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:59:48.995744 kubelet[3298]: I0424 23:59:48.995717 3298 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:59:48.999944 kubelet[3298]: I0424 23:59:48.998265 3298 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:59:49.005095 kubelet[3298]: E0424 23:59:49.005072 3298 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:59:49.052194 kubelet[3298]: I0424 23:59:49.051878 3298 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:59:49.052194 kubelet[3298]: I0424 23:59:49.051898 3298 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:59:49.052194 kubelet[3298]: I0424 23:59:49.051931 3298 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:59:49.052194 kubelet[3298]: I0424 23:59:49.052065 3298 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 24 23:59:49.052194 kubelet[3298]: I0424 23:59:49.052077 3298 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 24 23:59:49.052194 kubelet[3298]: I0424 23:59:49.052095 3298 policy_none.go:49] "None policy: Start" Apr 24 23:59:49.052194 kubelet[3298]: I0424 23:59:49.052107 3298 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 24 23:59:49.052194 kubelet[3298]: I0424 23:59:49.052119 3298 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 24 23:59:49.052642 kubelet[3298]: I0424 23:59:49.052215 3298 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 24 23:59:49.052642 kubelet[3298]: I0424 23:59:49.052226 3298 policy_none.go:47] "Start" Apr 24 23:59:49.057735 kubelet[3298]: E0424 23:59:49.057715 3298 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:59:49.058797 kubelet[3298]: I0424 23:59:49.058025 3298 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:59:49.058797 kubelet[3298]: I0424 23:59:49.058041 3298 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:59:49.058797 kubelet[3298]: I0424 23:59:49.058266 3298 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:59:49.060491 kubelet[3298]: 
E0424 23:59:49.059878 3298 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:59:49.086095 kubelet[3298]: I0424 23:59:49.085061 3298 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:49.086095 kubelet[3298]: I0424 23:59:49.085222 3298 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:49.086095 kubelet[3298]: I0424 23:59:49.085877 3298 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:49.097893 kubelet[3298]: I0424 23:59:49.097869 3298 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 23:59:49.102626 kubelet[3298]: I0424 23:59:49.102603 3298 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 23:59:49.103045 kubelet[3298]: I0424 23:59:49.102845 3298 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 23:59:49.161012 kubelet[3298]: I0424 23:59:49.160702 3298 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:49.177830 kubelet[3298]: I0424 23:59:49.177800 3298 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:49.178071 kubelet[3298]: I0424 23:59:49.178056 3298 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-3087b9d021" Apr 24 23:59:49.184645 kubelet[3298]: I0424 23:59:49.184483 3298 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/06163201dca1cb5337deff7d0e623aac-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-3087b9d021\" (UID: \"06163201dca1cb5337deff7d0e623aac\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:49.184645 kubelet[3298]: I0424 23:59:49.184520 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f0e9ee66d919a2e8ab7a1171b97ac60-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-3087b9d021\" (UID: \"5f0e9ee66d919a2e8ab7a1171b97ac60\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:49.184645 kubelet[3298]: I0424 23:59:49.184570 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f0e9ee66d919a2e8ab7a1171b97ac60-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-3087b9d021\" (UID: \"5f0e9ee66d919a2e8ab7a1171b97ac60\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:49.184645 kubelet[3298]: I0424 23:59:49.184622 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f0e9ee66d919a2e8ab7a1171b97ac60-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-3087b9d021\" (UID: \"5f0e9ee66d919a2e8ab7a1171b97ac60\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:49.184896 kubelet[3298]: I0424 23:59:49.184655 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5cabb0160f16c489ea29538b45990858-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-3087b9d021\" (UID: \"5cabb0160f16c489ea29538b45990858\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:49.184896 kubelet[3298]: I0424 23:59:49.184714 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5cabb0160f16c489ea29538b45990858-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-3087b9d021\" (UID: \"5cabb0160f16c489ea29538b45990858\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:49.285723 kubelet[3298]: I0424 23:59:49.285434 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5cabb0160f16c489ea29538b45990858-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-3087b9d021\" (UID: \"5cabb0160f16c489ea29538b45990858\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:49.285723 kubelet[3298]: I0424 23:59:49.285485 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5cabb0160f16c489ea29538b45990858-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-3087b9d021\" (UID: \"5cabb0160f16c489ea29538b45990858\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:49.285723 kubelet[3298]: I0424 23:59:49.285509 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5cabb0160f16c489ea29538b45990858-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-3087b9d021\" (UID: \"5cabb0160f16c489ea29538b45990858\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:49.946079 kubelet[3298]: I0424 23:59:49.945639 3298 apiserver.go:52] "Watching apiserver" Apr 24 23:59:49.984592 kubelet[3298]: I0424 23:59:49.984552 3298 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 24 23:59:50.032623 kubelet[3298]: I0424 23:59:50.030601 3298 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:50.032623 kubelet[3298]: I0424 23:59:50.030995 3298 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:50.046890 kubelet[3298]: I0424 23:59:50.046864 3298 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 23:59:50.047110 kubelet[3298]: E0424 23:59:50.047084 3298 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-3087b9d021\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:50.049329 kubelet[3298]: I0424 23:59:50.049309 3298 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 24 23:59:50.049611 kubelet[3298]: E0424 23:59:50.049492 3298 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-3087b9d021\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-3087b9d021" Apr 24 23:59:50.075802 kubelet[3298]: I0424 23:59:50.075199 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-3087b9d021" podStartSLOduration=1.075180623 podStartE2EDuration="1.075180623s" podCreationTimestamp="2026-04-24 23:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:59:50.06263656 +0000 UTC m=+1.176226138" watchObservedRunningTime="2026-04-24 23:59:50.075180623 +0000 UTC m=+1.188770201" Apr 24 
23:59:50.076365 kubelet[3298]: I0424 23:59:50.076219 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-3087b9d021" podStartSLOduration=1.076205936 podStartE2EDuration="1.076205936s" podCreationTimestamp="2026-04-24 23:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:59:50.074582915 +0000 UTC m=+1.188172593" watchObservedRunningTime="2026-04-24 23:59:50.076205936 +0000 UTC m=+1.189795614" Apr 24 23:59:53.222553 kubelet[3298]: I0424 23:59:53.222518 3298 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 24 23:59:53.223507 kubelet[3298]: I0424 23:59:53.223152 3298 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 23:59:53.223610 containerd[1719]: time="2026-04-24T23:59:53.222946700Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 24 23:59:54.154225 kubelet[3298]: I0424 23:59:54.152890 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-3087b9d021" podStartSLOduration=5.152869547 podStartE2EDuration="5.152869547s" podCreationTimestamp="2026-04-24 23:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:59:50.084738847 +0000 UTC m=+1.198328525" watchObservedRunningTime="2026-04-24 23:59:54.152869547 +0000 UTC m=+5.266459225" Apr 24 23:59:54.174907 systemd[1]: Created slice kubepods-besteffort-pod805bc1d8_de9c_4f71_9ddc_392e3fbd5c1b.slice - libcontainer container kubepods-besteffort-pod805bc1d8_de9c_4f71_9ddc_392e3fbd5c1b.slice. 
Apr 24 23:59:54.215720 kubelet[3298]: I0424 23:59:54.215521 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/805bc1d8-de9c-4f71-9ddc-392e3fbd5c1b-kube-proxy\") pod \"kube-proxy-7c4jg\" (UID: \"805bc1d8-de9c-4f71-9ddc-392e3fbd5c1b\") " pod="kube-system/kube-proxy-7c4jg" Apr 24 23:59:54.215720 kubelet[3298]: I0424 23:59:54.215569 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/805bc1d8-de9c-4f71-9ddc-392e3fbd5c1b-lib-modules\") pod \"kube-proxy-7c4jg\" (UID: \"805bc1d8-de9c-4f71-9ddc-392e3fbd5c1b\") " pod="kube-system/kube-proxy-7c4jg" Apr 24 23:59:54.215720 kubelet[3298]: I0424 23:59:54.215597 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/805bc1d8-de9c-4f71-9ddc-392e3fbd5c1b-xtables-lock\") pod \"kube-proxy-7c4jg\" (UID: \"805bc1d8-de9c-4f71-9ddc-392e3fbd5c1b\") " pod="kube-system/kube-proxy-7c4jg" Apr 24 23:59:54.215720 kubelet[3298]: I0424 23:59:54.215619 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz6pn\" (UniqueName: \"kubernetes.io/projected/805bc1d8-de9c-4f71-9ddc-392e3fbd5c1b-kube-api-access-xz6pn\") pod \"kube-proxy-7c4jg\" (UID: \"805bc1d8-de9c-4f71-9ddc-392e3fbd5c1b\") " pod="kube-system/kube-proxy-7c4jg" Apr 24 23:59:54.348197 systemd[1]: Created slice kubepods-besteffort-podd951c44b_7c08_4efb_8bf4_6e9213039d05.slice - libcontainer container kubepods-besteffort-podd951c44b_7c08_4efb_8bf4_6e9213039d05.slice. 
Apr 24 23:59:54.417500 kubelet[3298]: I0424 23:59:54.417427 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x77dq\" (UniqueName: \"kubernetes.io/projected/d951c44b-7c08-4efb-8bf4-6e9213039d05-kube-api-access-x77dq\") pod \"tigera-operator-5588576f44-j7fqw\" (UID: \"d951c44b-7c08-4efb-8bf4-6e9213039d05\") " pod="tigera-operator/tigera-operator-5588576f44-j7fqw" Apr 24 23:59:54.417500 kubelet[3298]: I0424 23:59:54.417490 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d951c44b-7c08-4efb-8bf4-6e9213039d05-var-lib-calico\") pod \"tigera-operator-5588576f44-j7fqw\" (UID: \"d951c44b-7c08-4efb-8bf4-6e9213039d05\") " pod="tigera-operator/tigera-operator-5588576f44-j7fqw" Apr 24 23:59:54.488838 containerd[1719]: time="2026-04-24T23:59:54.488420493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7c4jg,Uid:805bc1d8-de9c-4f71-9ddc-392e3fbd5c1b,Namespace:kube-system,Attempt:0,}" Apr 24 23:59:54.537844 containerd[1719]: time="2026-04-24T23:59:54.537631131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:59:54.537844 containerd[1719]: time="2026-04-24T23:59:54.537688732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:59:54.537844 containerd[1719]: time="2026-04-24T23:59:54.537709732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:59:54.538194 containerd[1719]: time="2026-04-24T23:59:54.537915335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:59:54.566701 systemd[1]: Started cri-containerd-954db3f50692508705e6531e85621fb1538812ba10228843738fbd8a144a018c.scope - libcontainer container 954db3f50692508705e6531e85621fb1538812ba10228843738fbd8a144a018c. Apr 24 23:59:54.589439 containerd[1719]: time="2026-04-24T23:59:54.589364601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7c4jg,Uid:805bc1d8-de9c-4f71-9ddc-392e3fbd5c1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"954db3f50692508705e6531e85621fb1538812ba10228843738fbd8a144a018c\"" Apr 24 23:59:54.599512 containerd[1719]: time="2026-04-24T23:59:54.599470032Z" level=info msg="CreateContainer within sandbox \"954db3f50692508705e6531e85621fb1538812ba10228843738fbd8a144a018c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 24 23:59:54.634374 containerd[1719]: time="2026-04-24T23:59:54.634336484Z" level=info msg="CreateContainer within sandbox \"954db3f50692508705e6531e85621fb1538812ba10228843738fbd8a144a018c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"04ad4b850cc8fc9f94f3369cf4a7554329b4b8edb5db84242e6b5d81f01bdc94\"" Apr 24 23:59:54.634916 containerd[1719]: time="2026-04-24T23:59:54.634884491Z" level=info msg="StartContainer for \"04ad4b850cc8fc9f94f3369cf4a7554329b4b8edb5db84242e6b5d81f01bdc94\"" Apr 24 23:59:54.659376 containerd[1719]: time="2026-04-24T23:59:54.658959103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-j7fqw,Uid:d951c44b-7c08-4efb-8bf4-6e9213039d05,Namespace:tigera-operator,Attempt:0,}" Apr 24 23:59:54.662608 systemd[1]: Started cri-containerd-04ad4b850cc8fc9f94f3369cf4a7554329b4b8edb5db84242e6b5d81f01bdc94.scope - libcontainer container 04ad4b850cc8fc9f94f3369cf4a7554329b4b8edb5db84242e6b5d81f01bdc94. 
Apr 24 23:59:54.700417 containerd[1719]: time="2026-04-24T23:59:54.700281438Z" level=info msg="StartContainer for \"04ad4b850cc8fc9f94f3369cf4a7554329b4b8edb5db84242e6b5d81f01bdc94\" returns successfully" Apr 24 23:59:54.718436 containerd[1719]: time="2026-04-24T23:59:54.717118356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:59:54.718754 containerd[1719]: time="2026-04-24T23:59:54.718674176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:59:54.718932 containerd[1719]: time="2026-04-24T23:59:54.718740377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:59:54.719839 containerd[1719]: time="2026-04-24T23:59:54.719779991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:59:54.741789 systemd[1]: Started cri-containerd-5f6cfe0fdf9f0aa9b96585dc26a64349e37066d060cf23e4ae272a9522e86c56.scope - libcontainer container 5f6cfe0fdf9f0aa9b96585dc26a64349e37066d060cf23e4ae272a9522e86c56. Apr 24 23:59:54.797064 containerd[1719]: time="2026-04-24T23:59:54.796697787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-j7fqw,Uid:d951c44b-7c08-4efb-8bf4-6e9213039d05,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5f6cfe0fdf9f0aa9b96585dc26a64349e37066d060cf23e4ae272a9522e86c56\"" Apr 24 23:59:54.799472 containerd[1719]: time="2026-04-24T23:59:54.798875515Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 24 23:59:55.982449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount297254252.mount: Deactivated successfully. 
Apr 24 23:59:57.516941 containerd[1719]: time="2026-04-24T23:59:57.516891062Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:57.519384 containerd[1719]: time="2026-04-24T23:59:57.519227293Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 24 23:59:57.522504 containerd[1719]: time="2026-04-24T23:59:57.522448835Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:57.527731 containerd[1719]: time="2026-04-24T23:59:57.527208298Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:57.528494 containerd[1719]: time="2026-04-24T23:59:57.528307812Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.729393896s" Apr 24 23:59:57.528494 containerd[1719]: time="2026-04-24T23:59:57.528344813Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 24 23:59:57.536196 containerd[1719]: time="2026-04-24T23:59:57.536150515Z" level=info msg="CreateContainer within sandbox \"5f6cfe0fdf9f0aa9b96585dc26a64349e37066d060cf23e4ae272a9522e86c56\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 24 23:59:57.567002 containerd[1719]: time="2026-04-24T23:59:57.566858019Z" level=info msg="CreateContainer within sandbox 
\"5f6cfe0fdf9f0aa9b96585dc26a64349e37066d060cf23e4ae272a9522e86c56\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ab1599cd060543525ffddd663ff63e02275641a653e61ae15e4f66ebb79b02be\"" Apr 24 23:59:57.568892 containerd[1719]: time="2026-04-24T23:59:57.567479728Z" level=info msg="StartContainer for \"ab1599cd060543525ffddd663ff63e02275641a653e61ae15e4f66ebb79b02be\"" Apr 24 23:59:57.602543 systemd[1]: Started cri-containerd-ab1599cd060543525ffddd663ff63e02275641a653e61ae15e4f66ebb79b02be.scope - libcontainer container ab1599cd060543525ffddd663ff63e02275641a653e61ae15e4f66ebb79b02be. Apr 24 23:59:57.629972 containerd[1719]: time="2026-04-24T23:59:57.629928849Z" level=info msg="StartContainer for \"ab1599cd060543525ffddd663ff63e02275641a653e61ae15e4f66ebb79b02be\" returns successfully" Apr 24 23:59:58.065426 kubelet[3298]: I0424 23:59:58.064849 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7c4jg" podStartSLOduration=4.064830772 podStartE2EDuration="4.064830772s" podCreationTimestamp="2026-04-24 23:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:59:55.058183074 +0000 UTC m=+6.171772752" watchObservedRunningTime="2026-04-24 23:59:58.064830772 +0000 UTC m=+9.178420450" Apr 24 23:59:59.266209 kubelet[3298]: I0424 23:59:59.266133 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-j7fqw" podStartSLOduration=2.535154062 podStartE2EDuration="5.266109079s" podCreationTimestamp="2026-04-24 23:59:54 +0000 UTC" firstStartedPulling="2026-04-24 23:59:54.798352008 +0000 UTC m=+5.911941686" lastFinishedPulling="2026-04-24 23:59:57.529306925 +0000 UTC m=+8.642896703" observedRunningTime="2026-04-24 23:59:58.064824772 +0000 UTC m=+9.178414350" watchObservedRunningTime="2026-04-24 23:59:59.266109079 +0000 UTC m=+10.379698757" 
Apr 25 00:00:00.860744 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Apr 25 00:00:00.879863 systemd[1]: logrotate.service: Deactivated successfully. Apr 25 00:00:04.365629 sudo[2353]: pam_unix(sudo:session): session closed for user root Apr 25 00:00:04.383620 sshd[2350]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:04.390070 systemd-logind[1702]: Session 9 logged out. Waiting for processes to exit. Apr 25 00:00:04.391310 systemd[1]: sshd@6-10.0.0.19:22-4.175.71.9:50212.service: Deactivated successfully. Apr 25 00:00:04.395254 systemd[1]: session-9.scope: Deactivated successfully. Apr 25 00:00:04.395715 systemd[1]: session-9.scope: Consumed 5.624s CPU time, 155.5M memory peak, 0B memory swap peak. Apr 25 00:00:04.397290 systemd-logind[1702]: Removed session 9. Apr 25 00:00:06.530344 systemd[1]: Created slice kubepods-besteffort-podff3f58b8_8f58_428b_b455_db84fb216309.slice - libcontainer container kubepods-besteffort-podff3f58b8_8f58_428b_b455_db84fb216309.slice. 
Apr 25 00:00:06.590681 kubelet[3298]: I0425 00:00:06.590550 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mr5x\" (UniqueName: \"kubernetes.io/projected/ff3f58b8-8f58-428b-b455-db84fb216309-kube-api-access-7mr5x\") pod \"calico-typha-599bf55556-zxn5c\" (UID: \"ff3f58b8-8f58-428b-b455-db84fb216309\") " pod="calico-system/calico-typha-599bf55556-zxn5c" Apr 25 00:00:06.590681 kubelet[3298]: I0425 00:00:06.590602 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ff3f58b8-8f58-428b-b455-db84fb216309-typha-certs\") pod \"calico-typha-599bf55556-zxn5c\" (UID: \"ff3f58b8-8f58-428b-b455-db84fb216309\") " pod="calico-system/calico-typha-599bf55556-zxn5c" Apr 25 00:00:06.590681 kubelet[3298]: I0425 00:00:06.590629 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff3f58b8-8f58-428b-b455-db84fb216309-tigera-ca-bundle\") pod \"calico-typha-599bf55556-zxn5c\" (UID: \"ff3f58b8-8f58-428b-b455-db84fb216309\") " pod="calico-system/calico-typha-599bf55556-zxn5c" Apr 25 00:00:06.692679 kubelet[3298]: I0425 00:00:06.691338 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/756392d2-f722-4953-a561-22e250df0869-flexvol-driver-host\") pod \"calico-node-6qjvt\" (UID: \"756392d2-f722-4953-a561-22e250df0869\") " pod="calico-system/calico-node-6qjvt" Apr 25 00:00:06.692679 kubelet[3298]: I0425 00:00:06.691388 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2scj\" (UniqueName: \"kubernetes.io/projected/756392d2-f722-4953-a561-22e250df0869-kube-api-access-j2scj\") pod \"calico-node-6qjvt\" (UID: \"756392d2-f722-4953-a561-22e250df0869\") 
" pod="calico-system/calico-node-6qjvt" Apr 25 00:00:06.692679 kubelet[3298]: I0425 00:00:06.691429 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/756392d2-f722-4953-a561-22e250df0869-cni-log-dir\") pod \"calico-node-6qjvt\" (UID: \"756392d2-f722-4953-a561-22e250df0869\") " pod="calico-system/calico-node-6qjvt" Apr 25 00:00:06.692679 kubelet[3298]: I0425 00:00:06.691473 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/756392d2-f722-4953-a561-22e250df0869-var-run-calico\") pod \"calico-node-6qjvt\" (UID: \"756392d2-f722-4953-a561-22e250df0869\") " pod="calico-system/calico-node-6qjvt" Apr 25 00:00:06.692679 kubelet[3298]: I0425 00:00:06.691495 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/756392d2-f722-4953-a561-22e250df0869-cni-bin-dir\") pod \"calico-node-6qjvt\" (UID: \"756392d2-f722-4953-a561-22e250df0869\") " pod="calico-system/calico-node-6qjvt" Apr 25 00:00:06.692986 kubelet[3298]: I0425 00:00:06.691516 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/756392d2-f722-4953-a561-22e250df0869-lib-modules\") pod \"calico-node-6qjvt\" (UID: \"756392d2-f722-4953-a561-22e250df0869\") " pod="calico-system/calico-node-6qjvt" Apr 25 00:00:06.692986 kubelet[3298]: I0425 00:00:06.691537 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/756392d2-f722-4953-a561-22e250df0869-node-certs\") pod \"calico-node-6qjvt\" (UID: \"756392d2-f722-4953-a561-22e250df0869\") " pod="calico-system/calico-node-6qjvt" Apr 25 00:00:06.692986 kubelet[3298]: I0425 
00:00:06.691557 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/756392d2-f722-4953-a561-22e250df0869-cni-net-dir\") pod \"calico-node-6qjvt\" (UID: \"756392d2-f722-4953-a561-22e250df0869\") " pod="calico-system/calico-node-6qjvt" Apr 25 00:00:06.692986 kubelet[3298]: I0425 00:00:06.691579 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/756392d2-f722-4953-a561-22e250df0869-nodeproc\") pod \"calico-node-6qjvt\" (UID: \"756392d2-f722-4953-a561-22e250df0869\") " pod="calico-system/calico-node-6qjvt" Apr 25 00:00:06.692986 kubelet[3298]: I0425 00:00:06.691603 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/756392d2-f722-4953-a561-22e250df0869-sys-fs\") pod \"calico-node-6qjvt\" (UID: \"756392d2-f722-4953-a561-22e250df0869\") " pod="calico-system/calico-node-6qjvt" Apr 25 00:00:06.692986 kubelet[3298]: I0425 00:00:06.691623 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/756392d2-f722-4953-a561-22e250df0869-policysync\") pod \"calico-node-6qjvt\" (UID: \"756392d2-f722-4953-a561-22e250df0869\") " pod="calico-system/calico-node-6qjvt" Apr 25 00:00:06.693218 kubelet[3298]: I0425 00:00:06.691693 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/756392d2-f722-4953-a561-22e250df0869-bpffs\") pod \"calico-node-6qjvt\" (UID: \"756392d2-f722-4953-a561-22e250df0869\") " pod="calico-system/calico-node-6qjvt" Apr 25 00:00:06.693218 kubelet[3298]: I0425 00:00:06.691720 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/756392d2-f722-4953-a561-22e250df0869-tigera-ca-bundle\") pod \"calico-node-6qjvt\" (UID: \"756392d2-f722-4953-a561-22e250df0869\") " pod="calico-system/calico-node-6qjvt" Apr 25 00:00:06.693218 kubelet[3298]: I0425 00:00:06.691742 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/756392d2-f722-4953-a561-22e250df0869-xtables-lock\") pod \"calico-node-6qjvt\" (UID: \"756392d2-f722-4953-a561-22e250df0869\") " pod="calico-system/calico-node-6qjvt" Apr 25 00:00:06.693218 kubelet[3298]: I0425 00:00:06.691779 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/756392d2-f722-4953-a561-22e250df0869-var-lib-calico\") pod \"calico-node-6qjvt\" (UID: \"756392d2-f722-4953-a561-22e250df0869\") " pod="calico-system/calico-node-6qjvt" Apr 25 00:00:06.716898 systemd[1]: Created slice kubepods-besteffort-pod756392d2_f722_4953_a561_22e250df0869.slice - libcontainer container kubepods-besteffort-pod756392d2_f722_4953_a561_22e250df0869.slice. Apr 25 00:00:06.797687 kubelet[3298]: E0425 00:00:06.797532 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.797687 kubelet[3298]: W0425 00:00:06.797561 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.798521 kubelet[3298]: E0425 00:00:06.797934 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.803565 kubelet[3298]: E0425 00:00:06.803322 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.803565 kubelet[3298]: W0425 00:00:06.803346 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.803565 kubelet[3298]: E0425 00:00:06.803369 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.805447 kubelet[3298]: E0425 00:00:06.805142 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.805447 kubelet[3298]: W0425 00:00:06.805159 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.805447 kubelet[3298]: E0425 00:00:06.805185 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.806600 kubelet[3298]: E0425 00:00:06.806078 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.806600 kubelet[3298]: W0425 00:00:06.806094 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.806600 kubelet[3298]: E0425 00:00:06.806110 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.810441 kubelet[3298]: E0425 00:00:06.807977 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.810441 kubelet[3298]: W0425 00:00:06.807994 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.810441 kubelet[3298]: E0425 00:00:06.808007 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.810441 kubelet[3298]: E0425 00:00:06.808205 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.810441 kubelet[3298]: W0425 00:00:06.808215 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.810441 kubelet[3298]: E0425 00:00:06.808228 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.810441 kubelet[3298]: E0425 00:00:06.808478 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.810441 kubelet[3298]: W0425 00:00:06.808489 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.810441 kubelet[3298]: E0425 00:00:06.808501 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.810441 kubelet[3298]: E0425 00:00:06.808683 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.810955 kubelet[3298]: W0425 00:00:06.808694 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.810955 kubelet[3298]: E0425 00:00:06.808706 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.810955 kubelet[3298]: E0425 00:00:06.808991 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.810955 kubelet[3298]: W0425 00:00:06.809004 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.810955 kubelet[3298]: E0425 00:00:06.809017 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.810955 kubelet[3298]: E0425 00:00:06.809262 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.810955 kubelet[3298]: W0425 00:00:06.809273 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.810955 kubelet[3298]: E0425 00:00:06.809286 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.816672 kubelet[3298]: E0425 00:00:06.816573 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:00:06.837559 kubelet[3298]: E0425 00:00:06.836741 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.837559 kubelet[3298]: W0425 00:00:06.836764 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.837559 kubelet[3298]: E0425 00:00:06.836783 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.845909 containerd[1719]: time="2026-04-25T00:00:06.845562657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599bf55556-zxn5c,Uid:ff3f58b8-8f58-428b-b455-db84fb216309,Namespace:calico-system,Attempt:0,}" Apr 25 00:00:06.893159 kubelet[3298]: E0425 00:00:06.891755 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.893159 kubelet[3298]: W0425 00:00:06.891784 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.893159 kubelet[3298]: E0425 00:00:06.891831 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.893159 kubelet[3298]: E0425 00:00:06.892073 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.893159 kubelet[3298]: W0425 00:00:06.892086 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.893159 kubelet[3298]: E0425 00:00:06.892098 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.897005 kubelet[3298]: E0425 00:00:06.893646 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.897005 kubelet[3298]: W0425 00:00:06.893666 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.897005 kubelet[3298]: E0425 00:00:06.893705 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.897005 kubelet[3298]: E0425 00:00:06.896601 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.897005 kubelet[3298]: W0425 00:00:06.896617 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.897005 kubelet[3298]: E0425 00:00:06.896634 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.897005 kubelet[3298]: E0425 00:00:06.896926 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.897005 kubelet[3298]: W0425 00:00:06.896938 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.897005 kubelet[3298]: E0425 00:00:06.896952 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.897925 kubelet[3298]: E0425 00:00:06.897714 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.897925 kubelet[3298]: W0425 00:00:06.897733 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.897925 kubelet[3298]: E0425 00:00:06.897748 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.898228 kubelet[3298]: E0425 00:00:06.898149 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.898228 kubelet[3298]: W0425 00:00:06.898163 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.898228 kubelet[3298]: E0425 00:00:06.898177 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.898587 kubelet[3298]: E0425 00:00:06.898571 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.898724 kubelet[3298]: W0425 00:00:06.898673 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.898724 kubelet[3298]: E0425 00:00:06.898694 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.899472 kubelet[3298]: E0425 00:00:06.899455 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.899664 kubelet[3298]: W0425 00:00:06.899560 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.899664 kubelet[3298]: E0425 00:00:06.899579 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.900028 kubelet[3298]: E0425 00:00:06.899912 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.900028 kubelet[3298]: W0425 00:00:06.899926 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.900028 kubelet[3298]: E0425 00:00:06.899940 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.901568 kubelet[3298]: E0425 00:00:06.901484 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.901568 kubelet[3298]: W0425 00:00:06.901501 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.901568 kubelet[3298]: E0425 00:00:06.901515 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.903634 kubelet[3298]: E0425 00:00:06.903457 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.903634 kubelet[3298]: W0425 00:00:06.903475 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.903634 kubelet[3298]: E0425 00:00:06.903489 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.903981 kubelet[3298]: E0425 00:00:06.903853 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.903981 kubelet[3298]: W0425 00:00:06.903866 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.903981 kubelet[3298]: E0425 00:00:06.903880 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.905086 kubelet[3298]: E0425 00:00:06.904933 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.905086 kubelet[3298]: W0425 00:00:06.904950 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.905086 kubelet[3298]: E0425 00:00:06.904964 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.905486 kubelet[3298]: E0425 00:00:06.905188 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.905486 kubelet[3298]: W0425 00:00:06.905200 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.905486 kubelet[3298]: E0425 00:00:06.905213 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.905800 kubelet[3298]: E0425 00:00:06.905656 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.905800 kubelet[3298]: W0425 00:00:06.905670 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.905800 kubelet[3298]: E0425 00:00:06.905683 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.906634 kubelet[3298]: E0425 00:00:06.906618 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.906731 kubelet[3298]: W0425 00:00:06.906718 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.906818 kubelet[3298]: E0425 00:00:06.906805 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.907119 kubelet[3298]: E0425 00:00:06.907104 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.907349 kubelet[3298]: W0425 00:00:06.907317 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.907437 kubelet[3298]: E0425 00:00:06.907356 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.907786 kubelet[3298]: E0425 00:00:06.907751 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.907786 kubelet[3298]: W0425 00:00:06.907770 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.907907 kubelet[3298]: E0425 00:00:06.907805 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.908125 kubelet[3298]: E0425 00:00:06.908103 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.908188 kubelet[3298]: W0425 00:00:06.908143 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.908188 kubelet[3298]: E0425 00:00:06.908160 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.908633 kubelet[3298]: E0425 00:00:06.908609 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.908633 kubelet[3298]: W0425 00:00:06.908630 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.908736 kubelet[3298]: E0425 00:00:06.908645 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.908940 kubelet[3298]: I0425 00:00:06.908781 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d30beaa3-b48e-4165-93c6-fa00a976739a-kubelet-dir\") pod \"csi-node-driver-tz7wk\" (UID: \"d30beaa3-b48e-4165-93c6-fa00a976739a\") " pod="calico-system/csi-node-driver-tz7wk" Apr 25 00:00:06.909128 kubelet[3298]: E0425 00:00:06.909082 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.909128 kubelet[3298]: W0425 00:00:06.909103 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.909128 kubelet[3298]: E0425 00:00:06.909127 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.909603 kubelet[3298]: E0425 00:00:06.909427 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.909603 kubelet[3298]: W0425 00:00:06.909447 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.909603 kubelet[3298]: E0425 00:00:06.909461 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.910604 kubelet[3298]: E0425 00:00:06.909765 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.910604 kubelet[3298]: W0425 00:00:06.909779 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.910604 kubelet[3298]: E0425 00:00:06.909794 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.910604 kubelet[3298]: I0425 00:00:06.909824 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8tch\" (UniqueName: \"kubernetes.io/projected/d30beaa3-b48e-4165-93c6-fa00a976739a-kube-api-access-k8tch\") pod \"csi-node-driver-tz7wk\" (UID: \"d30beaa3-b48e-4165-93c6-fa00a976739a\") " pod="calico-system/csi-node-driver-tz7wk" Apr 25 00:00:06.910604 kubelet[3298]: E0425 00:00:06.910075 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.910604 kubelet[3298]: W0425 00:00:06.910087 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.910604 kubelet[3298]: E0425 00:00:06.910099 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.910604 kubelet[3298]: I0425 00:00:06.910131 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d30beaa3-b48e-4165-93c6-fa00a976739a-varrun\") pod \"csi-node-driver-tz7wk\" (UID: \"d30beaa3-b48e-4165-93c6-fa00a976739a\") " pod="calico-system/csi-node-driver-tz7wk" Apr 25 00:00:06.910604 kubelet[3298]: E0425 00:00:06.910392 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.910976 kubelet[3298]: W0425 00:00:06.910419 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.910976 kubelet[3298]: E0425 00:00:06.910552 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.910976 kubelet[3298]: I0425 00:00:06.910620 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d30beaa3-b48e-4165-93c6-fa00a976739a-registration-dir\") pod \"csi-node-driver-tz7wk\" (UID: \"d30beaa3-b48e-4165-93c6-fa00a976739a\") " pod="calico-system/csi-node-driver-tz7wk" Apr 25 00:00:06.910976 kubelet[3298]: E0425 00:00:06.910888 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.910976 kubelet[3298]: W0425 00:00:06.910901 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.910976 kubelet[3298]: E0425 00:00:06.910916 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.910976 kubelet[3298]: I0425 00:00:06.910947 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d30beaa3-b48e-4165-93c6-fa00a976739a-socket-dir\") pod \"csi-node-driver-tz7wk\" (UID: \"d30beaa3-b48e-4165-93c6-fa00a976739a\") " pod="calico-system/csi-node-driver-tz7wk" Apr 25 00:00:06.911284 kubelet[3298]: E0425 00:00:06.911212 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.911284 kubelet[3298]: W0425 00:00:06.911226 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.911284 kubelet[3298]: E0425 00:00:06.911241 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.911502 kubelet[3298]: E0425 00:00:06.911480 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.911502 kubelet[3298]: W0425 00:00:06.911498 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.911891 kubelet[3298]: E0425 00:00:06.911512 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.912100 containerd[1719]: time="2026-04-25T00:00:06.911755227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:00:06.912248 containerd[1719]: time="2026-04-25T00:00:06.912218433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:00:06.912359 containerd[1719]: time="2026-04-25T00:00:06.912332434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:00:06.912850 containerd[1719]: time="2026-04-25T00:00:06.912684839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:00:06.913692 kubelet[3298]: E0425 00:00:06.913662 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.913692 kubelet[3298]: W0425 00:00:06.913682 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.913824 kubelet[3298]: E0425 00:00:06.913699 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.915428 kubelet[3298]: E0425 00:00:06.914005 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.915428 kubelet[3298]: W0425 00:00:06.914021 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.915428 kubelet[3298]: E0425 00:00:06.914035 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.915428 kubelet[3298]: E0425 00:00:06.914271 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.915428 kubelet[3298]: W0425 00:00:06.914282 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.915428 kubelet[3298]: E0425 00:00:06.914294 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.915428 kubelet[3298]: E0425 00:00:06.915118 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.915428 kubelet[3298]: W0425 00:00:06.915131 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.915428 kubelet[3298]: E0425 00:00:06.915144 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.916100 kubelet[3298]: E0425 00:00:06.916079 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.916100 kubelet[3298]: W0425 00:00:06.916099 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.916231 kubelet[3298]: E0425 00:00:06.916114 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:06.916377 kubelet[3298]: E0425 00:00:06.916359 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:06.916377 kubelet[3298]: W0425 00:00:06.916377 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:06.916480 kubelet[3298]: E0425 00:00:06.916391 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:06.941606 systemd[1]: Started cri-containerd-ca211c4265e1c0f54737a0d64caad06ee39c106b0cda2d7a50faa7b4203d5e1d.scope - libcontainer container ca211c4265e1c0f54737a0d64caad06ee39c106b0cda2d7a50faa7b4203d5e1d. Apr 25 00:00:06.990547 containerd[1719]: time="2026-04-25T00:00:06.990508261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599bf55556-zxn5c,Uid:ff3f58b8-8f58-428b-b455-db84fb216309,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca211c4265e1c0f54737a0d64caad06ee39c106b0cda2d7a50faa7b4203d5e1d\"" Apr 25 00:00:06.992677 containerd[1719]: time="2026-04-25T00:00:06.992588189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 25 00:00:07.012193 kubelet[3298]: E0425 00:00:07.012168 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.012365 kubelet[3298]: W0425 00:00:07.012295 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.012365 kubelet[3298]: E0425 00:00:07.012317 3298 plugins.go:697] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:07.012712 kubelet[3298]: E0425 00:00:07.012689 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.012712 kubelet[3298]: W0425 00:00:07.012707 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.012972 kubelet[3298]: E0425 00:00:07.012724 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:07.012972 kubelet[3298]: E0425 00:00:07.012954 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.012972 kubelet[3298]: W0425 00:00:07.012967 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.013101 kubelet[3298]: E0425 00:00:07.012980 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:07.013253 kubelet[3298]: E0425 00:00:07.013232 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.013253 kubelet[3298]: W0425 00:00:07.013252 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.013376 kubelet[3298]: E0425 00:00:07.013267 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:07.013647 kubelet[3298]: E0425 00:00:07.013631 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.013779 kubelet[3298]: W0425 00:00:07.013722 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.013779 kubelet[3298]: E0425 00:00:07.013738 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:07.014010 kubelet[3298]: E0425 00:00:07.013985 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.014010 kubelet[3298]: W0425 00:00:07.014003 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.014209 kubelet[3298]: E0425 00:00:07.014016 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:07.014376 kubelet[3298]: E0425 00:00:07.014358 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.014376 kubelet[3298]: W0425 00:00:07.014374 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.014798 kubelet[3298]: E0425 00:00:07.014426 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:07.014798 kubelet[3298]: E0425 00:00:07.014695 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.014924 kubelet[3298]: W0425 00:00:07.014707 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.014986 kubelet[3298]: E0425 00:00:07.014932 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:07.015334 kubelet[3298]: E0425 00:00:07.015315 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.015334 kubelet[3298]: W0425 00:00:07.015331 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.015485 kubelet[3298]: E0425 00:00:07.015346 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:07.015692 kubelet[3298]: E0425 00:00:07.015673 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.015692 kubelet[3298]: W0425 00:00:07.015689 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.015802 kubelet[3298]: E0425 00:00:07.015704 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:07.016036 kubelet[3298]: E0425 00:00:07.015995 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.016118 kubelet[3298]: W0425 00:00:07.016035 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.016118 kubelet[3298]: E0425 00:00:07.016051 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:07.016544 kubelet[3298]: E0425 00:00:07.016526 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.016544 kubelet[3298]: W0425 00:00:07.016541 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.016815 kubelet[3298]: E0425 00:00:07.016556 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:07.016898 kubelet[3298]: E0425 00:00:07.016840 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.016898 kubelet[3298]: W0425 00:00:07.016852 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.016898 kubelet[3298]: E0425 00:00:07.016866 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:07.017182 kubelet[3298]: E0425 00:00:07.017162 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.017182 kubelet[3298]: W0425 00:00:07.017178 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.017318 kubelet[3298]: E0425 00:00:07.017192 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:07.017507 kubelet[3298]: E0425 00:00:07.017487 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.017507 kubelet[3298]: W0425 00:00:07.017503 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.017627 kubelet[3298]: E0425 00:00:07.017518 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:07.018025 kubelet[3298]: E0425 00:00:07.018004 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.018025 kubelet[3298]: W0425 00:00:07.018020 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.018148 kubelet[3298]: E0425 00:00:07.018034 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:07.018286 kubelet[3298]: E0425 00:00:07.018267 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.018355 kubelet[3298]: W0425 00:00:07.018287 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.018355 kubelet[3298]: E0425 00:00:07.018303 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:07.018612 kubelet[3298]: E0425 00:00:07.018572 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.018612 kubelet[3298]: W0425 00:00:07.018589 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.018612 kubelet[3298]: E0425 00:00:07.018603 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:07.018907 kubelet[3298]: E0425 00:00:07.018889 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.018907 kubelet[3298]: W0425 00:00:07.018905 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.019071 kubelet[3298]: E0425 00:00:07.018919 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:07.019223 kubelet[3298]: E0425 00:00:07.019202 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.019223 kubelet[3298]: W0425 00:00:07.019216 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.019339 kubelet[3298]: E0425 00:00:07.019230 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:07.019638 kubelet[3298]: E0425 00:00:07.019619 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.019638 kubelet[3298]: W0425 00:00:07.019634 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.019762 kubelet[3298]: E0425 00:00:07.019648 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:07.019909 kubelet[3298]: E0425 00:00:07.019894 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.019909 kubelet[3298]: W0425 00:00:07.019907 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.020008 kubelet[3298]: E0425 00:00:07.019920 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:07.020169 kubelet[3298]: E0425 00:00:07.020154 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.020169 kubelet[3298]: W0425 00:00:07.020167 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.020265 kubelet[3298]: E0425 00:00:07.020180 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:07.020411 kubelet[3298]: E0425 00:00:07.020389 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.020484 kubelet[3298]: W0425 00:00:07.020426 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.020484 kubelet[3298]: E0425 00:00:07.020440 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:07.020818 kubelet[3298]: E0425 00:00:07.020648 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.020818 kubelet[3298]: W0425 00:00:07.020659 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.020818 kubelet[3298]: E0425 00:00:07.020670 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:07.031192 kubelet[3298]: E0425 00:00:07.031158 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:07.031192 kubelet[3298]: W0425 00:00:07.031178 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:07.031192 kubelet[3298]: E0425 00:00:07.031193 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:07.031975 containerd[1719]: time="2026-04-25T00:00:07.031936306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6qjvt,Uid:756392d2-f722-4953-a561-22e250df0869,Namespace:calico-system,Attempt:0,}" Apr 25 00:00:07.076884 containerd[1719]: time="2026-04-25T00:00:07.075492078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:00:07.076884 containerd[1719]: time="2026-04-25T00:00:07.075556879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:00:07.076884 containerd[1719]: time="2026-04-25T00:00:07.075575579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:00:07.076884 containerd[1719]: time="2026-04-25T00:00:07.075663580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:00:07.102587 systemd[1]: Started cri-containerd-01853a8454f61c1ccb435dee97537b12c61711c6188339aa2a6235b172414f95.scope - libcontainer container 01853a8454f61c1ccb435dee97537b12c61711c6188339aa2a6235b172414f95. Apr 25 00:00:07.127069 containerd[1719]: time="2026-04-25T00:00:07.127020055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6qjvt,Uid:756392d2-f722-4953-a561-22e250df0869,Namespace:calico-system,Attempt:0,} returns sandbox id \"01853a8454f61c1ccb435dee97537b12c61711c6188339aa2a6235b172414f95\"" Apr 25 00:00:08.985282 kubelet[3298]: E0425 00:00:08.984443 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:00:10.984995 kubelet[3298]: E0425 00:00:10.984597 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:00:12.985876 kubelet[3298]: E0425 00:00:12.984455 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:00:13.375319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1195158712.mount: Deactivated successfully. 
Apr 25 00:00:14.038370 containerd[1719]: time="2026-04-25T00:00:14.038317983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:00:14.041181 containerd[1719]: time="2026-04-25T00:00:14.041023418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 25 00:00:14.045123 containerd[1719]: time="2026-04-25T00:00:14.045056071Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:00:14.049741 containerd[1719]: time="2026-04-25T00:00:14.049633331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:00:14.051186 containerd[1719]: time="2026-04-25T00:00:14.050532043Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 7.057903553s" Apr 25 00:00:14.051186 containerd[1719]: time="2026-04-25T00:00:14.050570043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 25 00:00:14.051845 containerd[1719]: time="2026-04-25T00:00:14.051805259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 25 00:00:14.074933 containerd[1719]: time="2026-04-25T00:00:14.074896162Z" level=info msg="CreateContainer within sandbox \"ca211c4265e1c0f54737a0d64caad06ee39c106b0cda2d7a50faa7b4203d5e1d\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 25 00:00:14.115638 containerd[1719]: time="2026-04-25T00:00:14.115540894Z" level=info msg="CreateContainer within sandbox \"ca211c4265e1c0f54737a0d64caad06ee39c106b0cda2d7a50faa7b4203d5e1d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"83db3b210a31aca5b0c648a6e9d7160fddc4e69e5cec6e48a38fb798b3cf8faa\"" Apr 25 00:00:14.116224 containerd[1719]: time="2026-04-25T00:00:14.116199303Z" level=info msg="StartContainer for \"83db3b210a31aca5b0c648a6e9d7160fddc4e69e5cec6e48a38fb798b3cf8faa\"" Apr 25 00:00:14.148550 systemd[1]: Started cri-containerd-83db3b210a31aca5b0c648a6e9d7160fddc4e69e5cec6e48a38fb798b3cf8faa.scope - libcontainer container 83db3b210a31aca5b0c648a6e9d7160fddc4e69e5cec6e48a38fb798b3cf8faa. Apr 25 00:00:14.197117 containerd[1719]: time="2026-04-25T00:00:14.197004961Z" level=info msg="StartContainer for \"83db3b210a31aca5b0c648a6e9d7160fddc4e69e5cec6e48a38fb798b3cf8faa\" returns successfully" Apr 25 00:00:14.985333 kubelet[3298]: E0425 00:00:14.984925 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:00:15.114936 kubelet[3298]: I0425 00:00:15.114532 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-599bf55556-zxn5c" podStartSLOduration=2.0545341 podStartE2EDuration="9.113879572s" podCreationTimestamp="2026-04-25 00:00:06 +0000 UTC" firstStartedPulling="2026-04-25 00:00:06.992142483 +0000 UTC m=+18.105732061" lastFinishedPulling="2026-04-25 00:00:14.051487955 +0000 UTC m=+25.165077533" observedRunningTime="2026-04-25 00:00:15.113250564 +0000 UTC m=+26.226840242" watchObservedRunningTime="2026-04-25 00:00:15.113879572 +0000 UTC 
m=+26.227469150" Apr 25 00:00:15.167807 kubelet[3298]: E0425 00:00:15.167768 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:15.167807 kubelet[3298]: W0425 00:00:15.167797 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:15.168080 kubelet[3298]: E0425 00:00:15.167823 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:16.100731 kubelet[3298]: I0425 00:00:16.100694 3298 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Error: unexpected end of JSON input" Apr 25 00:00:16.187010 kubelet[3298]: E0425 00:00:16.186991 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:16.187010 kubelet[3298]: W0425 00:00:16.187008 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:16.187115 kubelet[3298]: E0425 00:00:16.187022 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:16.187460 kubelet[3298]: E0425 00:00:16.187398 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:16.187460 kubelet[3298]: W0425 00:00:16.187428 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:16.187460 kubelet[3298]: E0425 00:00:16.187441 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:16.187702 kubelet[3298]: E0425 00:00:16.187685 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:16.187702 kubelet[3298]: W0425 00:00:16.187701 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:16.187810 kubelet[3298]: E0425 00:00:16.187715 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:16.187956 kubelet[3298]: E0425 00:00:16.187936 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:16.187956 kubelet[3298]: W0425 00:00:16.187953 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:16.188075 kubelet[3298]: E0425 00:00:16.187967 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:16.188237 kubelet[3298]: E0425 00:00:16.188219 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:16.188237 kubelet[3298]: W0425 00:00:16.188234 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:16.188349 kubelet[3298]: E0425 00:00:16.188250 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:16.188594 kubelet[3298]: E0425 00:00:16.188579 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:16.188693 kubelet[3298]: W0425 00:00:16.188672 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:16.188803 kubelet[3298]: E0425 00:00:16.188695 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:16.188927 kubelet[3298]: E0425 00:00:16.188911 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:16.188927 kubelet[3298]: W0425 00:00:16.188926 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:16.189040 kubelet[3298]: E0425 00:00:16.188939 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:16.189206 kubelet[3298]: E0425 00:00:16.189189 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:16.189206 kubelet[3298]: W0425 00:00:16.189204 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:16.189335 kubelet[3298]: E0425 00:00:16.189218 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:16.189787 kubelet[3298]: E0425 00:00:16.189765 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:16.189787 kubelet[3298]: W0425 00:00:16.189782 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:16.189904 kubelet[3298]: E0425 00:00:16.189855 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:16.190114 kubelet[3298]: E0425 00:00:16.190097 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:16.190114 kubelet[3298]: W0425 00:00:16.190112 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:16.190198 kubelet[3298]: E0425 00:00:16.190125 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" 
Apr 25 00:00:16.984798 kubelet[3298]: E0425 00:00:16.984239 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" 
Apr 25 00:00:18.985904 kubelet[3298]: E0425 00:00:18.984772 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" 
Apr 25 00:00:20.984513 kubelet[3298]: E0425 00:00:20.984381 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" 
Apr 25 00:00:22.984465 kubelet[3298]: E0425 00:00:22.984038 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" 
Apr 25 00:00:24.984838 kubelet[3298]: E0425 00:00:24.984791 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" 
Apr 25 00:00:26.985455 kubelet[3298]: E0425 00:00:26.984219 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" 
Apr 25 00:00:28.986372 kubelet[3298]: E0425 00:00:28.985474 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" 
Apr 25 00:00:30.874259 kubelet[3298]: I0425 00:00:30.873781 3298 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" 
Apr 25 00:00:30.882689 kubelet[3298]: E0425 00:00:30.882510 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Apr 25 00:00:30.882689 kubelet[3298]: W0425 00:00:30.882555 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Apr 25 00:00:30.882689 kubelet[3298]: E0425 00:00:30.882582 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:30.883009 kubelet[3298]: E0425 00:00:30.882936 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.883009 kubelet[3298]: W0425 00:00:30.882951 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.883009 kubelet[3298]: E0425 00:00:30.882967 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:30.883284 kubelet[3298]: E0425 00:00:30.883218 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.883284 kubelet[3298]: W0425 00:00:30.883230 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.883284 kubelet[3298]: E0425 00:00:30.883258 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:30.883598 kubelet[3298]: E0425 00:00:30.883574 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.883598 kubelet[3298]: W0425 00:00:30.883588 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.883773 kubelet[3298]: E0425 00:00:30.883604 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:30.883862 kubelet[3298]: E0425 00:00:30.883843 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.883862 kubelet[3298]: W0425 00:00:30.883856 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.883974 kubelet[3298]: E0425 00:00:30.883869 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:30.884087 kubelet[3298]: E0425 00:00:30.884069 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.884087 kubelet[3298]: W0425 00:00:30.884085 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.884239 kubelet[3298]: E0425 00:00:30.884099 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:30.884310 kubelet[3298]: E0425 00:00:30.884296 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.884310 kubelet[3298]: W0425 00:00:30.884307 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.884465 kubelet[3298]: E0425 00:00:30.884320 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:30.884543 kubelet[3298]: E0425 00:00:30.884529 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.884543 kubelet[3298]: W0425 00:00:30.884539 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.884698 kubelet[3298]: E0425 00:00:30.884552 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:30.884791 kubelet[3298]: E0425 00:00:30.884774 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.884791 kubelet[3298]: W0425 00:00:30.884788 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.884939 kubelet[3298]: E0425 00:00:30.884801 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:30.885020 kubelet[3298]: E0425 00:00:30.884992 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.885020 kubelet[3298]: W0425 00:00:30.885003 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.885020 kubelet[3298]: E0425 00:00:30.885015 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:30.885256 kubelet[3298]: E0425 00:00:30.885235 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.885256 kubelet[3298]: W0425 00:00:30.885252 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.885431 kubelet[3298]: E0425 00:00:30.885266 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:30.885524 kubelet[3298]: E0425 00:00:30.885492 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.885524 kubelet[3298]: W0425 00:00:30.885503 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.885524 kubelet[3298]: E0425 00:00:30.885515 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:30.885777 kubelet[3298]: E0425 00:00:30.885725 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.885777 kubelet[3298]: W0425 00:00:30.885747 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.885777 kubelet[3298]: E0425 00:00:30.885760 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:30.885994 kubelet[3298]: E0425 00:00:30.885960 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.885994 kubelet[3298]: W0425 00:00:30.885970 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.885994 kubelet[3298]: E0425 00:00:30.885982 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:30.886228 kubelet[3298]: E0425 00:00:30.886184 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.886228 kubelet[3298]: W0425 00:00:30.886193 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.886228 kubelet[3298]: E0425 00:00:30.886205 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:30.886544 kubelet[3298]: E0425 00:00:30.886525 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.886544 kubelet[3298]: W0425 00:00:30.886541 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.886677 kubelet[3298]: E0425 00:00:30.886555 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:30.886857 kubelet[3298]: E0425 00:00:30.886838 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.886924 kubelet[3298]: W0425 00:00:30.886860 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.886924 kubelet[3298]: E0425 00:00:30.886875 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:30.887161 kubelet[3298]: E0425 00:00:30.887139 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.887161 kubelet[3298]: W0425 00:00:30.887154 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.887278 kubelet[3298]: E0425 00:00:30.887168 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:30.887494 kubelet[3298]: E0425 00:00:30.887476 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.887494 kubelet[3298]: W0425 00:00:30.887491 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.887609 kubelet[3298]: E0425 00:00:30.887504 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:30.887763 kubelet[3298]: E0425 00:00:30.887740 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.887763 kubelet[3298]: W0425 00:00:30.887756 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.887896 kubelet[3298]: E0425 00:00:30.887769 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:30.888003 kubelet[3298]: E0425 00:00:30.887988 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.888003 kubelet[3298]: W0425 00:00:30.888002 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.888152 kubelet[3298]: E0425 00:00:30.888014 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:30.888255 kubelet[3298]: E0425 00:00:30.888240 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.888321 kubelet[3298]: W0425 00:00:30.888254 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.888321 kubelet[3298]: E0425 00:00:30.888283 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:30.888618 kubelet[3298]: E0425 00:00:30.888520 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.888618 kubelet[3298]: W0425 00:00:30.888532 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.888618 kubelet[3298]: E0425 00:00:30.888545 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 00:00:30.888795 kubelet[3298]: E0425 00:00:30.888778 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.888795 kubelet[3298]: W0425 00:00:30.888789 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.888873 kubelet[3298]: E0425 00:00:30.888802 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 00:00:30.889157 kubelet[3298]: E0425 00:00:30.889138 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 00:00:30.889157 kubelet[3298]: W0425 00:00:30.889153 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 00:00:30.889273 kubelet[3298]: E0425 00:00:30.889167 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 25 00:00:30.890555 kubelet[3298]: E0425 00:00:30.890535 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:30.890555 kubelet[3298]: W0425 00:00:30.890551 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:30.890683 kubelet[3298]: E0425 00:00:30.890566 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:30.890955 kubelet[3298]: E0425 00:00:30.890832 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:30.890955 kubelet[3298]: W0425 00:00:30.890847 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:30.890955 kubelet[3298]: E0425 00:00:30.890861 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:30.892523 kubelet[3298]: E0425 00:00:30.891672 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:30.892523 kubelet[3298]: W0425 00:00:30.891686 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:30.892523 kubelet[3298]: E0425 00:00:30.891699 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:30.892523 kubelet[3298]: E0425 00:00:30.892167 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:30.892523 kubelet[3298]: W0425 00:00:30.892181 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:30.892523 kubelet[3298]: E0425 00:00:30.892198 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:30.892790 kubelet[3298]: E0425 00:00:30.892646 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:30.892790 kubelet[3298]: W0425 00:00:30.892659 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:30.892790 kubelet[3298]: E0425 00:00:30.892673 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:30.894055 kubelet[3298]: E0425 00:00:30.893597 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:30.894055 kubelet[3298]: W0425 00:00:30.893612 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:30.894055 kubelet[3298]: E0425 00:00:30.893626 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:30.894055 kubelet[3298]: E0425 00:00:30.893837 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:30.894055 kubelet[3298]: W0425 00:00:30.893848 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:30.894055 kubelet[3298]: E0425 00:00:30.893860 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:30.894352 kubelet[3298]: E0425 00:00:30.894294 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:30.894352 kubelet[3298]: W0425 00:00:30.894308 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:30.894352 kubelet[3298]: E0425 00:00:30.894322 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 25 00:00:30.985290 kubelet[3298]: E0425 00:00:30.984148 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a"
Apr 25 00:00:31.188207 kubelet[3298]: E0425 00:00:31.188180 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.188207 kubelet[3298]: W0425 00:00:31.188201 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.188457 kubelet[3298]: E0425 00:00:31.188226 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.188522 kubelet[3298]: E0425 00:00:31.188505 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.188522 kubelet[3298]: W0425 00:00:31.188517 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.188601 kubelet[3298]: E0425 00:00:31.188536 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.188811 kubelet[3298]: E0425 00:00:31.188793 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.188811 kubelet[3298]: W0425 00:00:31.188806 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.188961 kubelet[3298]: E0425 00:00:31.188822 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.189071 kubelet[3298]: E0425 00:00:31.189047 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.189071 kubelet[3298]: W0425 00:00:31.189060 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.189167 kubelet[3298]: E0425 00:00:31.189073 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.190260 kubelet[3298]: E0425 00:00:31.189425 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.190260 kubelet[3298]: W0425 00:00:31.189442 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.190260 kubelet[3298]: E0425 00:00:31.189457 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.190260 kubelet[3298]: E0425 00:00:31.189690 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.190260 kubelet[3298]: W0425 00:00:31.189700 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.190260 kubelet[3298]: E0425 00:00:31.189713 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.190260 kubelet[3298]: E0425 00:00:31.189925 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.190260 kubelet[3298]: W0425 00:00:31.189935 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.190260 kubelet[3298]: E0425 00:00:31.189947 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.190260 kubelet[3298]: E0425 00:00:31.190175 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.191218 kubelet[3298]: W0425 00:00:31.190185 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.191218 kubelet[3298]: E0425 00:00:31.190200 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.191218 kubelet[3298]: E0425 00:00:31.190527 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.191218 kubelet[3298]: W0425 00:00:31.190539 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.191218 kubelet[3298]: E0425 00:00:31.190551 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.191218 kubelet[3298]: E0425 00:00:31.190820 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.191218 kubelet[3298]: W0425 00:00:31.190832 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.191218 kubelet[3298]: E0425 00:00:31.190859 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.191218 kubelet[3298]: E0425 00:00:31.191081 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.191218 kubelet[3298]: W0425 00:00:31.191100 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.191672 kubelet[3298]: E0425 00:00:31.191132 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.191672 kubelet[3298]: E0425 00:00:31.191365 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.191672 kubelet[3298]: W0425 00:00:31.191375 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.191672 kubelet[3298]: E0425 00:00:31.191387 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.191837 kubelet[3298]: E0425 00:00:31.191694 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.191837 kubelet[3298]: W0425 00:00:31.191708 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.191837 kubelet[3298]: E0425 00:00:31.191734 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.191971 kubelet[3298]: E0425 00:00:31.191956 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.192014 kubelet[3298]: W0425 00:00:31.191966 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.192014 kubelet[3298]: E0425 00:00:31.191993 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.192357 kubelet[3298]: E0425 00:00:31.192201 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.192357 kubelet[3298]: W0425 00:00:31.192223 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.192357 kubelet[3298]: E0425 00:00:31.192236 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.192614 kubelet[3298]: E0425 00:00:31.192574 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.192614 kubelet[3298]: W0425 00:00:31.192586 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.192614 kubelet[3298]: E0425 00:00:31.192599 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 25 00:00:31.192873 kubelet[3298]: E0425 00:00:31.192855 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.192873 kubelet[3298]: W0425 00:00:31.192870 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.192985 kubelet[3298]: E0425 00:00:31.192884 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.193178 kubelet[3298]: E0425 00:00:31.193160 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.193178 kubelet[3298]: W0425 00:00:31.193174 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.193292 kubelet[3298]: E0425 00:00:31.193189 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.193541 kubelet[3298]: E0425 00:00:31.193520 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.193541 kubelet[3298]: W0425 00:00:31.193538 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.193667 kubelet[3298]: E0425 00:00:31.193553 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.193790 kubelet[3298]: E0425 00:00:31.193771 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.193790 kubelet[3298]: W0425 00:00:31.193786 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.193885 kubelet[3298]: E0425 00:00:31.193799 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.194063 kubelet[3298]: E0425 00:00:31.194047 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.194063 kubelet[3298]: W0425 00:00:31.194061 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.194166 kubelet[3298]: E0425 00:00:31.194075 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.194344 kubelet[3298]: E0425 00:00:31.194327 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.194344 kubelet[3298]: W0425 00:00:31.194343 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.194482 kubelet[3298]: E0425 00:00:31.194358 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.194623 kubelet[3298]: E0425 00:00:31.194605 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.194623 kubelet[3298]: W0425 00:00:31.194620 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.194737 kubelet[3298]: E0425 00:00:31.194635 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.194910 kubelet[3298]: E0425 00:00:31.194893 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.194910 kubelet[3298]: W0425 00:00:31.194908 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.195027 kubelet[3298]: E0425 00:00:31.194922 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.195472 kubelet[3298]: E0425 00:00:31.195339 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.195472 kubelet[3298]: W0425 00:00:31.195353 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.195472 kubelet[3298]: E0425 00:00:31.195365 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.195647 kubelet[3298]: E0425 00:00:31.195602 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.195647 kubelet[3298]: W0425 00:00:31.195613 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.195647 kubelet[3298]: E0425 00:00:31.195626 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.195884 kubelet[3298]: E0425 00:00:31.195866 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.195884 kubelet[3298]: W0425 00:00:31.195881 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.195998 kubelet[3298]: E0425 00:00:31.195895 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.196268 kubelet[3298]: E0425 00:00:31.196250 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.196268 kubelet[3298]: W0425 00:00:31.196265 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.196389 kubelet[3298]: E0425 00:00:31.196279 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.196574 kubelet[3298]: E0425 00:00:31.196541 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.196574 kubelet[3298]: W0425 00:00:31.196572 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.196683 kubelet[3298]: E0425 00:00:31.196588 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.196851 kubelet[3298]: E0425 00:00:31.196834 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.196851 kubelet[3298]: W0425 00:00:31.196848 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.196960 kubelet[3298]: E0425 00:00:31.196863 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.197112 kubelet[3298]: E0425 00:00:31.197095 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.197112 kubelet[3298]: W0425 00:00:31.197110 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.197212 kubelet[3298]: E0425 00:00:31.197125 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:31.197371 kubelet[3298]: E0425 00:00:31.197353 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.197371 kubelet[3298]: W0425 00:00:31.197368 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.197522 kubelet[3298]: E0425 00:00:31.197383 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 25 00:00:31.197860 kubelet[3298]: E0425 00:00:31.197842 3298 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 25 00:00:31.197860 kubelet[3298]: W0425 00:00:31.197857 3298 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 25 00:00:31.197975 kubelet[3298]: E0425 00:00:31.197903 3298 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 25 00:00:32.984867 kubelet[3298]: E0425 00:00:32.984666 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a"
Apr 25 00:00:33.153124 containerd[1719]: time="2026-04-25T00:00:33.153072530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:33.155778 containerd[1719]: time="2026-04-25T00:00:33.155631564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Apr 25 00:00:33.159873 containerd[1719]: time="2026-04-25T00:00:33.159640216Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:33.163679 containerd[1719]: time="2026-04-25T00:00:33.163527467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:33.164972 containerd[1719]: time="2026-04-25T00:00:33.164543481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 19.111490705s"
Apr 25 00:00:33.164972 containerd[1719]: time="2026-04-25T00:00:33.164600881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 25 00:00:33.171516 containerd[1719]: time="2026-04-25T00:00:33.171487472Z" level=info msg="CreateContainer within sandbox \"01853a8454f61c1ccb435dee97537b12c61711c6188339aa2a6235b172414f95\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 25 00:00:33.210950 containerd[1719]: time="2026-04-25T00:00:33.210907690Z" level=info msg="CreateContainer within sandbox \"01853a8454f61c1ccb435dee97537b12c61711c6188339aa2a6235b172414f95\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"db5040e444a802010344959f09d65b289e87ada59abca0aa637c05ac73aefb92\""
Apr 25 00:00:33.212910 containerd[1719]: time="2026-04-25T00:00:33.211576199Z" level=info msg="StartContainer for \"db5040e444a802010344959f09d65b289e87ada59abca0aa637c05ac73aefb92\""
Apr 25 00:00:33.242576 systemd[1]: Started cri-containerd-db5040e444a802010344959f09d65b289e87ada59abca0aa637c05ac73aefb92.scope - libcontainer container db5040e444a802010344959f09d65b289e87ada59abca0aa637c05ac73aefb92.
Apr 25 00:00:33.269658 containerd[1719]: time="2026-04-25T00:00:33.269540061Z" level=info msg="StartContainer for \"db5040e444a802010344959f09d65b289e87ada59abca0aa637c05ac73aefb92\" returns successfully"
Apr 25 00:00:33.277395 systemd[1]: cri-containerd-db5040e444a802010344959f09d65b289e87ada59abca0aa637c05ac73aefb92.scope: Deactivated successfully.
Apr 25 00:00:33.300265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db5040e444a802010344959f09d65b289e87ada59abca0aa637c05ac73aefb92-rootfs.mount: Deactivated successfully.
Apr 25 00:00:34.071886 containerd[1719]: time="2026-04-25T00:00:34.071806806Z" level=info msg="shim disconnected" id=db5040e444a802010344959f09d65b289e87ada59abca0aa637c05ac73aefb92 namespace=k8s.io
Apr 25 00:00:34.071886 containerd[1719]: time="2026-04-25T00:00:34.071880007Z" level=warning msg="cleaning up after shim disconnected" id=db5040e444a802010344959f09d65b289e87ada59abca0aa637c05ac73aefb92 namespace=k8s.io
Apr 25 00:00:34.072431 containerd[1719]: time="2026-04-25T00:00:34.071895007Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 25 00:00:34.140249 containerd[1719]: time="2026-04-25T00:00:34.140181905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 25 00:00:34.984535 kubelet[3298]: E0425 00:00:34.984023 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a"
Apr 25 00:00:36.985170 kubelet[3298]: E0425 00:00:36.984772 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a"
Apr 25 00:00:38.985887 kubelet[3298]: E0425 00:00:38.985787 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a"
Apr 25 00:00:40.985434 kubelet[3298]: E0425 00:00:40.984600 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a"
Apr 25 00:00:42.986320 kubelet[3298]: E0425 00:00:42.984689 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a"
Apr 25 00:00:44.985156 kubelet[3298]: E0425 00:00:44.984753 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a"
Apr 25 00:00:46.990593 kubelet[3298]: E0425 00:00:46.990140 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a"
Apr 25 00:00:48.985890 kubelet[3298]: E0425 00:00:48.985644 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a"
Apr 25 00:00:50.025869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount328569568.mount: Deactivated successfully.
Apr 25 00:00:50.062779 containerd[1719]: time="2026-04-25T00:00:50.062727507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:50.065263 containerd[1719]: time="2026-04-25T00:00:50.065211639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 25 00:00:50.068303 containerd[1719]: time="2026-04-25T00:00:50.068140178Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:50.072952 containerd[1719]: time="2026-04-25T00:00:50.072808038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:50.074114 containerd[1719]: time="2026-04-25T00:00:50.073900852Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 15.933440344s"
Apr 25 00:00:50.074114 containerd[1719]: time="2026-04-25T00:00:50.073943053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 25 00:00:50.081846 containerd[1719]: time="2026-04-25T00:00:50.081812855Z" level=info msg="CreateContainer within sandbox \"01853a8454f61c1ccb435dee97537b12c61711c6188339aa2a6235b172414f95\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 25 00:00:50.127747 containerd[1719]: time="2026-04-25T00:00:50.127652650Z" level=info msg="CreateContainer within sandbox \"01853a8454f61c1ccb435dee97537b12c61711c6188339aa2a6235b172414f95\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"fd772af33945db1dded704d75d1b073e997bc391c78b0f17524140527ed23bb7\""
Apr 25 00:00:50.128622 containerd[1719]: time="2026-04-25T00:00:50.128461861Z" level=info msg="StartContainer for \"fd772af33945db1dded704d75d1b073e997bc391c78b0f17524140527ed23bb7\""
Apr 25 00:00:50.160890 systemd[1]: run-containerd-runc-k8s.io-fd772af33945db1dded704d75d1b073e997bc391c78b0f17524140527ed23bb7-runc.wBJNrU.mount: Deactivated successfully.
Apr 25 00:00:50.167575 systemd[1]: Started cri-containerd-fd772af33945db1dded704d75d1b073e997bc391c78b0f17524140527ed23bb7.scope - libcontainer container fd772af33945db1dded704d75d1b073e997bc391c78b0f17524140527ed23bb7.
Apr 25 00:00:50.205914 containerd[1719]: time="2026-04-25T00:00:50.205800465Z" level=info msg="StartContainer for \"fd772af33945db1dded704d75d1b073e997bc391c78b0f17524140527ed23bb7\" returns successfully"
Apr 25 00:00:50.238346 systemd[1]: cri-containerd-fd772af33945db1dded704d75d1b073e997bc391c78b0f17524140527ed23bb7.scope: Deactivated successfully.
Apr 25 00:00:50.986161 kubelet[3298]: E0425 00:00:50.984961 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:00:51.023756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd772af33945db1dded704d75d1b073e997bc391c78b0f17524140527ed23bb7-rootfs.mount: Deactivated successfully. Apr 25 00:00:52.986179 kubelet[3298]: E0425 00:00:52.984556 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:00:54.985579 kubelet[3298]: E0425 00:00:54.984920 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:00:56.366734 containerd[1719]: time="2026-04-25T00:00:56.366652138Z" level=info msg="shim disconnected" id=fd772af33945db1dded704d75d1b073e997bc391c78b0f17524140527ed23bb7 namespace=k8s.io Apr 25 00:00:56.366734 containerd[1719]: time="2026-04-25T00:00:56.366732639Z" level=warning msg="cleaning up after shim disconnected" id=fd772af33945db1dded704d75d1b073e997bc391c78b0f17524140527ed23bb7 namespace=k8s.io Apr 25 00:00:56.367475 containerd[1719]: time="2026-04-25T00:00:56.366761540Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:00:56.984436 kubelet[3298]: E0425 00:00:56.984143 3298 pod_workers.go:1324] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:00:57.198444 containerd[1719]: time="2026-04-25T00:00:57.198065211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 25 00:00:58.985737 kubelet[3298]: E0425 00:00:58.985240 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:01:00.984301 kubelet[3298]: E0425 00:01:00.984099 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:01:02.984978 kubelet[3298]: E0425 00:01:02.984767 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:01:04.984803 kubelet[3298]: E0425 00:01:04.984578 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:01:06.988532 kubelet[3298]: E0425 
00:01:06.987250 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:01:07.661722 containerd[1719]: time="2026-04-25T00:01:07.661619202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:07.723938 containerd[1719]: time="2026-04-25T00:01:07.723854453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 25 00:01:07.727314 containerd[1719]: time="2026-04-25T00:01:07.727253894Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:07.771738 containerd[1719]: time="2026-04-25T00:01:07.771674531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:07.772756 containerd[1719]: time="2026-04-25T00:01:07.772625942Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 10.57449713s" Apr 25 00:01:07.772756 containerd[1719]: time="2026-04-25T00:01:07.772665543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 25 00:01:07.820980 
containerd[1719]: time="2026-04-25T00:01:07.820945526Z" level=info msg="CreateContainer within sandbox \"01853a8454f61c1ccb435dee97537b12c61711c6188339aa2a6235b172414f95\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 25 00:01:08.122695 containerd[1719]: time="2026-04-25T00:01:08.122571999Z" level=info msg="CreateContainer within sandbox \"01853a8454f61c1ccb435dee97537b12c61711c6188339aa2a6235b172414f95\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"485d0562897c99cffe819723f597e5ad86a20e25e72da4bb3e83c7e0204ef4cf\"" Apr 25 00:01:08.124461 containerd[1719]: time="2026-04-25T00:01:08.123302808Z" level=info msg="StartContainer for \"485d0562897c99cffe819723f597e5ad86a20e25e72da4bb3e83c7e0204ef4cf\"" Apr 25 00:01:08.155075 systemd[1]: run-containerd-runc-k8s.io-485d0562897c99cffe819723f597e5ad86a20e25e72da4bb3e83c7e0204ef4cf-runc.MDQeYP.mount: Deactivated successfully. Apr 25 00:01:08.160903 systemd[1]: Started cri-containerd-485d0562897c99cffe819723f597e5ad86a20e25e72da4bb3e83c7e0204ef4cf.scope - libcontainer container 485d0562897c99cffe819723f597e5ad86a20e25e72da4bb3e83c7e0204ef4cf. 
Apr 25 00:01:08.191199 containerd[1719]: time="2026-04-25T00:01:08.191160183Z" level=info msg="StartContainer for \"485d0562897c99cffe819723f597e5ad86a20e25e72da4bb3e83c7e0204ef4cf\" returns successfully" Apr 25 00:01:08.985739 kubelet[3298]: E0425 00:01:08.985466 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:01:10.985081 kubelet[3298]: E0425 00:01:10.984853 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:01:12.880354 containerd[1719]: time="2026-04-25T00:01:12.880286139Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 25 00:01:12.882533 systemd[1]: cri-containerd-485d0562897c99cffe819723f597e5ad86a20e25e72da4bb3e83c7e0204ef4cf.scope: Deactivated successfully. Apr 25 00:01:12.908480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-485d0562897c99cffe819723f597e5ad86a20e25e72da4bb3e83c7e0204ef4cf-rootfs.mount: Deactivated successfully. Apr 25 00:01:12.923369 kubelet[3298]: I0425 00:01:12.923338 3298 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 25 00:01:18.128119 systemd[1]: Created slice kubepods-burstable-pod7f66e989_db5b_4adc_97a4_8927e62e5b7f.slice - libcontainer container kubepods-burstable-pod7f66e989_db5b_4adc_97a4_8927e62e5b7f.slice. 
Apr 25 00:01:18.134480 systemd[1]: Created slice kubepods-besteffort-podd30beaa3_b48e_4165_93c6_fa00a976739a.slice - libcontainer container kubepods-besteffort-podd30beaa3_b48e_4165_93c6_fa00a976739a.slice. Apr 25 00:01:18.247939 kubelet[3298]: I0425 00:01:18.247817 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f66e989-db5b-4adc-97a4-8927e62e5b7f-config-volume\") pod \"coredns-66bc5c9577-x7pfz\" (UID: \"7f66e989-db5b-4adc-97a4-8927e62e5b7f\") " pod="kube-system/coredns-66bc5c9577-x7pfz" Apr 25 00:01:18.247939 kubelet[3298]: I0425 00:01:18.247941 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68782\" (UniqueName: \"kubernetes.io/projected/7f66e989-db5b-4adc-97a4-8927e62e5b7f-kube-api-access-68782\") pod \"coredns-66bc5c9577-x7pfz\" (UID: \"7f66e989-db5b-4adc-97a4-8927e62e5b7f\") " pod="kube-system/coredns-66bc5c9577-x7pfz" Apr 25 00:01:18.259618 containerd[1719]: time="2026-04-25T00:01:18.259524145Z" level=info msg="shim disconnected" id=485d0562897c99cffe819723f597e5ad86a20e25e72da4bb3e83c7e0204ef4cf namespace=k8s.io Apr 25 00:01:18.259618 containerd[1719]: time="2026-04-25T00:01:18.259598846Z" level=warning msg="cleaning up after shim disconnected" id=485d0562897c99cffe819723f597e5ad86a20e25e72da4bb3e83c7e0204ef4cf namespace=k8s.io Apr 25 00:01:18.259618 containerd[1719]: time="2026-04-25T00:01:18.259613246Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:01:18.333198 systemd[1]: Created slice kubepods-burstable-pod475a688c_c33d_44aa_8f1e_2084fa8ba675.slice - libcontainer container kubepods-burstable-pod475a688c_c33d_44aa_8f1e_2084fa8ba675.slice. 
Apr 25 00:01:18.370538 containerd[1719]: time="2026-04-25T00:01:18.370058168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tz7wk,Uid:d30beaa3-b48e-4165-93c6-fa00a976739a,Namespace:calico-system,Attempt:0,}" Apr 25 00:01:18.430596 systemd[1]: Created slice kubepods-besteffort-pod1e5e98eb_3874_4565_91da_8cf70bd88eeb.slice - libcontainer container kubepods-besteffort-pod1e5e98eb_3874_4565_91da_8cf70bd88eeb.slice. Apr 25 00:01:18.450170 kubelet[3298]: I0425 00:01:18.450134 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tqqm\" (UniqueName: \"kubernetes.io/projected/475a688c-c33d-44aa-8f1e-2084fa8ba675-kube-api-access-4tqqm\") pod \"coredns-66bc5c9577-xrrd8\" (UID: \"475a688c-c33d-44aa-8f1e-2084fa8ba675\") " pod="kube-system/coredns-66bc5c9577-xrrd8" Apr 25 00:01:18.450315 kubelet[3298]: I0425 00:01:18.450189 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/475a688c-c33d-44aa-8f1e-2084fa8ba675-config-volume\") pod \"coredns-66bc5c9577-xrrd8\" (UID: \"475a688c-c33d-44aa-8f1e-2084fa8ba675\") " pod="kube-system/coredns-66bc5c9577-xrrd8" Apr 25 00:01:18.469217 containerd[1719]: time="2026-04-25T00:01:18.469171743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x7pfz,Uid:7f66e989-db5b-4adc-97a4-8927e62e5b7f,Namespace:kube-system,Attempt:0,}" Apr 25 00:01:18.480844 systemd[1]: Created slice kubepods-besteffort-pode1cd18da_eb8a_4963_8091_63c656860ed8.slice - libcontainer container kubepods-besteffort-pode1cd18da_eb8a_4963_8091_63c656860ed8.slice. 
Apr 25 00:01:18.551367 kubelet[3298]: I0425 00:01:18.551283 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e1cd18da-eb8a-4963-8091-63c656860ed8-nginx-config\") pod \"whisker-5689597f97-ghq8j\" (UID: \"e1cd18da-eb8a-4963-8091-63c656860ed8\") " pod="calico-system/whisker-5689597f97-ghq8j" Apr 25 00:01:18.561135 kubelet[3298]: I0425 00:01:18.551469 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e5e98eb-3874-4565-91da-8cf70bd88eeb-tigera-ca-bundle\") pod \"calico-kube-controllers-7bd5cc46d5-99qj9\" (UID: \"1e5e98eb-3874-4565-91da-8cf70bd88eeb\") " pod="calico-system/calico-kube-controllers-7bd5cc46d5-99qj9" Apr 25 00:01:18.561135 kubelet[3298]: I0425 00:01:18.551546 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4skwn\" (UniqueName: \"kubernetes.io/projected/e1cd18da-eb8a-4963-8091-63c656860ed8-kube-api-access-4skwn\") pod \"whisker-5689597f97-ghq8j\" (UID: \"e1cd18da-eb8a-4963-8091-63c656860ed8\") " pod="calico-system/whisker-5689597f97-ghq8j" Apr 25 00:01:18.561135 kubelet[3298]: I0425 00:01:18.551644 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9t22\" (UniqueName: \"kubernetes.io/projected/1e5e98eb-3874-4565-91da-8cf70bd88eeb-kube-api-access-h9t22\") pod \"calico-kube-controllers-7bd5cc46d5-99qj9\" (UID: \"1e5e98eb-3874-4565-91da-8cf70bd88eeb\") " pod="calico-system/calico-kube-controllers-7bd5cc46d5-99qj9" Apr 25 00:01:18.561135 kubelet[3298]: I0425 00:01:18.551716 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1cd18da-eb8a-4963-8091-63c656860ed8-whisker-backend-key-pair\") pod 
\"whisker-5689597f97-ghq8j\" (UID: \"e1cd18da-eb8a-4963-8091-63c656860ed8\") " pod="calico-system/whisker-5689597f97-ghq8j" Apr 25 00:01:18.561135 kubelet[3298]: I0425 00:01:18.551843 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1cd18da-eb8a-4963-8091-63c656860ed8-whisker-ca-bundle\") pod \"whisker-5689597f97-ghq8j\" (UID: \"e1cd18da-eb8a-4963-8091-63c656860ed8\") " pod="calico-system/whisker-5689597f97-ghq8j" Apr 25 00:01:18.580372 systemd[1]: Created slice kubepods-besteffort-pod6be2f8f5_c613_46db_be5c_7773acd8f189.slice - libcontainer container kubepods-besteffort-pod6be2f8f5_c613_46db_be5c_7773acd8f189.slice. Apr 25 00:01:18.653059 kubelet[3298]: I0425 00:01:18.652901 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6be2f8f5-c613-46db-be5c-7773acd8f189-calico-apiserver-certs\") pod \"calico-apiserver-75c69f45f-gpm6f\" (UID: \"6be2f8f5-c613-46db-be5c-7773acd8f189\") " pod="calico-system/calico-apiserver-75c69f45f-gpm6f" Apr 25 00:01:18.653059 kubelet[3298]: I0425 00:01:18.652971 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cfzk\" (UniqueName: \"kubernetes.io/projected/6be2f8f5-c613-46db-be5c-7773acd8f189-kube-api-access-6cfzk\") pod \"calico-apiserver-75c69f45f-gpm6f\" (UID: \"6be2f8f5-c613-46db-be5c-7773acd8f189\") " pod="calico-system/calico-apiserver-75c69f45f-gpm6f" Apr 25 00:01:18.706482 systemd[1]: Created slice kubepods-besteffort-pod6f9f367b_3d37_4263_8e44_7b68c95eb032.slice - libcontainer container kubepods-besteffort-pod6f9f367b_3d37_4263_8e44_7b68c95eb032.slice. 
Apr 25 00:01:18.724520 containerd[1719]: time="2026-04-25T00:01:18.724359728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xrrd8,Uid:475a688c-c33d-44aa-8f1e-2084fa8ba675,Namespace:kube-system,Attempt:0,}" Apr 25 00:01:18.753933 kubelet[3298]: I0425 00:01:18.753892 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6f9f367b-3d37-4263-8e44-7b68c95eb032-calico-apiserver-certs\") pod \"calico-apiserver-75c69f45f-dgnpd\" (UID: \"6f9f367b-3d37-4263-8e44-7b68c95eb032\") " pod="calico-system/calico-apiserver-75c69f45f-dgnpd" Apr 25 00:01:18.753933 kubelet[3298]: I0425 00:01:18.753930 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqtl5\" (UniqueName: \"kubernetes.io/projected/6f9f367b-3d37-4263-8e44-7b68c95eb032-kube-api-access-cqtl5\") pod \"calico-apiserver-75c69f45f-dgnpd\" (UID: \"6f9f367b-3d37-4263-8e44-7b68c95eb032\") " pod="calico-system/calico-apiserver-75c69f45f-dgnpd" Apr 25 00:01:18.911641 containerd[1719]: time="2026-04-25T00:01:18.911164833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd5cc46d5-99qj9,Uid:1e5e98eb-3874-4565-91da-8cf70bd88eeb,Namespace:calico-system,Attempt:0,}" Apr 25 00:01:18.923658 systemd[1]: Created slice kubepods-besteffort-pod8bd4aa88_033e_41a0_bd2f_6042f3bf091b.slice - libcontainer container kubepods-besteffort-pod8bd4aa88_033e_41a0_bd2f_6042f3bf091b.slice. 
Apr 25 00:01:18.973656 kubelet[3298]: I0425 00:01:18.956060 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bd4aa88-033e-41a0-bd2f-6042f3bf091b-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-7ctl7\" (UID: \"8bd4aa88-033e-41a0-bd2f-6042f3bf091b\") " pod="calico-system/goldmane-cccfbd5cf-7ctl7" Apr 25 00:01:18.973656 kubelet[3298]: I0425 00:01:18.956091 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr4km\" (UniqueName: \"kubernetes.io/projected/8bd4aa88-033e-41a0-bd2f-6042f3bf091b-kube-api-access-zr4km\") pod \"goldmane-cccfbd5cf-7ctl7\" (UID: \"8bd4aa88-033e-41a0-bd2f-6042f3bf091b\") " pod="calico-system/goldmane-cccfbd5cf-7ctl7" Apr 25 00:01:18.973656 kubelet[3298]: I0425 00:01:18.956123 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bd4aa88-033e-41a0-bd2f-6042f3bf091b-config\") pod \"goldmane-cccfbd5cf-7ctl7\" (UID: \"8bd4aa88-033e-41a0-bd2f-6042f3bf091b\") " pod="calico-system/goldmane-cccfbd5cf-7ctl7" Apr 25 00:01:18.973656 kubelet[3298]: I0425 00:01:18.956144 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8bd4aa88-033e-41a0-bd2f-6042f3bf091b-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-7ctl7\" (UID: \"8bd4aa88-033e-41a0-bd2f-6042f3bf091b\") " pod="calico-system/goldmane-cccfbd5cf-7ctl7" Apr 25 00:01:18.979142 containerd[1719]: time="2026-04-25T00:01:18.979104607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5689597f97-ghq8j,Uid:e1cd18da-eb8a-4963-8091-63c656860ed8,Namespace:calico-system,Attempt:0,}" Apr 25 00:01:19.022038 containerd[1719]: time="2026-04-25T00:01:19.021992159Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-75c69f45f-gpm6f,Uid:6be2f8f5-c613-46db-be5c-7773acd8f189,Namespace:calico-system,Attempt:0,}" Apr 25 00:01:19.070468 containerd[1719]: time="2026-04-25T00:01:19.070400082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c69f45f-dgnpd,Uid:6f9f367b-3d37-4263-8e44-7b68c95eb032,Namespace:calico-system,Attempt:0,}" Apr 25 00:01:19.209257 containerd[1719]: time="2026-04-25T00:01:19.209206669Z" level=error msg="Failed to destroy network for sandbox \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:19.209600 containerd[1719]: time="2026-04-25T00:01:19.209560674Z" level=error msg="encountered an error cleaning up failed sandbox \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:19.209725 containerd[1719]: time="2026-04-25T00:01:19.209633875Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tz7wk,Uid:d30beaa3-b48e-4165-93c6-fa00a976739a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:19.210041 kubelet[3298]: E0425 00:01:19.209990 3298 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:19.210497 kubelet[3298]: E0425 00:01:19.210092 3298 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tz7wk" Apr 25 00:01:19.210497 kubelet[3298]: E0425 00:01:19.210125 3298 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tz7wk" Apr 25 00:01:19.210497 kubelet[3298]: E0425 00:01:19.210251 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tz7wk_calico-system(d30beaa3-b48e-4165-93c6-fa00a976739a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tz7wk_calico-system(d30beaa3-b48e-4165-93c6-fa00a976739a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tz7wk" 
podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:01:19.273970 kubelet[3298]: I0425 00:01:19.250349 3298 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Apr 25 00:01:19.277494 containerd[1719]: time="2026-04-25T00:01:19.251015407Z" level=info msg="StopPodSandbox for \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\"" Apr 25 00:01:19.277494 containerd[1719]: time="2026-04-25T00:01:19.251215010Z" level=info msg="Ensure that sandbox 8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d in task-service has been cleanup successfully" Apr 25 00:01:19.281618 containerd[1719]: time="2026-04-25T00:01:19.281266497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-7ctl7,Uid:8bd4aa88-033e-41a0-bd2f-6042f3bf091b,Namespace:calico-system,Attempt:0,}" Apr 25 00:01:19.291636 containerd[1719]: time="2026-04-25T00:01:19.291598530Z" level=error msg="StopPodSandbox for \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\" failed" error="failed to destroy network for sandbox \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:19.291802 kubelet[3298]: E0425 00:01:19.291767 3298 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Apr 25 00:01:19.291886 kubelet[3298]: E0425 00:01:19.291821 3298 
kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d"} Apr 25 00:01:19.291886 kubelet[3298]: E0425 00:01:19.291877 3298 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d30beaa3-b48e-4165-93c6-fa00a976739a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 25 00:01:19.292030 kubelet[3298]: E0425 00:01:19.291909 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d30beaa3-b48e-4165-93c6-fa00a976739a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tz7wk" podUID="d30beaa3-b48e-4165-93c6-fa00a976739a" Apr 25 00:01:19.367663 containerd[1719]: time="2026-04-25T00:01:19.367628608Z" level=info msg="CreateContainer within sandbox \"01853a8454f61c1ccb435dee97537b12c61711c6188339aa2a6235b172414f95\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 25 00:01:20.507733 containerd[1719]: time="2026-04-25T00:01:20.507663383Z" level=error msg="Failed to destroy network for sandbox \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 25 00:01:20.509778 containerd[1719]: time="2026-04-25T00:01:20.509660009Z" level=error msg="encountered an error cleaning up failed sandbox \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:20.509778 containerd[1719]: time="2026-04-25T00:01:20.509743210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x7pfz,Uid:7f66e989-db5b-4adc-97a4-8927e62e5b7f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:20.510057 kubelet[3298]: E0425 00:01:20.510016 3298 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:20.511280 kubelet[3298]: E0425 00:01:20.510084 3298 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-x7pfz" Apr 25 00:01:20.511280 kubelet[3298]: E0425 00:01:20.510112 
3298 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-x7pfz" Apr 25 00:01:20.511280 kubelet[3298]: E0425 00:01:20.510181 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-x7pfz_kube-system(7f66e989-db5b-4adc-97a4-8927e62e5b7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-x7pfz_kube-system(7f66e989-db5b-4adc-97a4-8927e62e5b7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-x7pfz" podUID="7f66e989-db5b-4adc-97a4-8927e62e5b7f" Apr 25 00:01:20.513106 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96-shm.mount: Deactivated successfully. 
Apr 25 00:01:21.256682 kubelet[3298]: I0425 00:01:21.256645 3298 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Apr 25 00:01:21.257479 containerd[1719]: time="2026-04-25T00:01:21.257263532Z" level=info msg="StopPodSandbox for \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\"" Apr 25 00:01:21.258032 containerd[1719]: time="2026-04-25T00:01:21.257694938Z" level=info msg="Ensure that sandbox 6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96 in task-service has been cleanup successfully" Apr 25 00:01:21.282700 containerd[1719]: time="2026-04-25T00:01:21.282649459Z" level=error msg="StopPodSandbox for \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\" failed" error="failed to destroy network for sandbox \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.282919 kubelet[3298]: E0425 00:01:21.282876 3298 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Apr 25 00:01:21.283015 kubelet[3298]: E0425 00:01:21.282945 3298 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96"} Apr 25 00:01:21.283015 kubelet[3298]: E0425 00:01:21.282985 3298 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"7f66e989-db5b-4adc-97a4-8927e62e5b7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 25 00:01:21.283138 kubelet[3298]: E0425 00:01:21.283027 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f66e989-db5b-4adc-97a4-8927e62e5b7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-x7pfz" podUID="7f66e989-db5b-4adc-97a4-8927e62e5b7f" Apr 25 00:01:21.363553 containerd[1719]: time="2026-04-25T00:01:21.363496900Z" level=error msg="Failed to destroy network for sandbox \"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.363869 containerd[1719]: time="2026-04-25T00:01:21.363833104Z" level=error msg="encountered an error cleaning up failed sandbox \"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.363976 containerd[1719]: time="2026-04-25T00:01:21.363892305Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c69f45f-gpm6f,Uid:6be2f8f5-c613-46db-be5c-7773acd8f189,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.364206 kubelet[3298]: E0425 00:01:21.364165 3298 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.364290 kubelet[3298]: E0425 00:01:21.364232 3298 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-75c69f45f-gpm6f" Apr 25 00:01:21.364290 kubelet[3298]: E0425 00:01:21.364277 3298 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-75c69f45f-gpm6f" Apr 25 00:01:21.364381 kubelet[3298]: E0425 00:01:21.364345 3298 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75c69f45f-gpm6f_calico-system(6be2f8f5-c613-46db-be5c-7773acd8f189)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75c69f45f-gpm6f_calico-system(6be2f8f5-c613-46db-be5c-7773acd8f189)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-75c69f45f-gpm6f" podUID="6be2f8f5-c613-46db-be5c-7773acd8f189" Apr 25 00:01:21.517060 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b-shm.mount: Deactivated successfully. Apr 25 00:01:21.572594 containerd[1719]: time="2026-04-25T00:01:21.572537891Z" level=error msg="Failed to destroy network for sandbox \"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.574756 containerd[1719]: time="2026-04-25T00:01:21.574689818Z" level=error msg="encountered an error cleaning up failed sandbox \"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.574912 containerd[1719]: time="2026-04-25T00:01:21.574776619Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-xrrd8,Uid:475a688c-c33d-44aa-8f1e-2084fa8ba675,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.575111 kubelet[3298]: E0425 00:01:21.575063 3298 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.576449 kubelet[3298]: E0425 00:01:21.575142 3298 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xrrd8" Apr 25 00:01:21.576449 kubelet[3298]: E0425 00:01:21.575170 3298 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xrrd8" Apr 25 00:01:21.576449 kubelet[3298]: E0425 00:01:21.575235 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-66bc5c9577-xrrd8_kube-system(475a688c-c33d-44aa-8f1e-2084fa8ba675)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xrrd8_kube-system(475a688c-c33d-44aa-8f1e-2084fa8ba675)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xrrd8" podUID="475a688c-c33d-44aa-8f1e-2084fa8ba675" Apr 25 00:01:21.577044 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2-shm.mount: Deactivated successfully. Apr 25 00:01:21.672434 containerd[1719]: time="2026-04-25T00:01:21.670606253Z" level=error msg="Failed to destroy network for sandbox \"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.672687 containerd[1719]: time="2026-04-25T00:01:21.672635079Z" level=error msg="encountered an error cleaning up failed sandbox \"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.672772 containerd[1719]: time="2026-04-25T00:01:21.672731280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd5cc46d5-99qj9,Uid:1e5e98eb-3874-4565-91da-8cf70bd88eeb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for 
sandbox \"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.674811 kubelet[3298]: E0425 00:01:21.674625 3298 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.674811 kubelet[3298]: E0425 00:01:21.674721 3298 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bd5cc46d5-99qj9" Apr 25 00:01:21.674811 kubelet[3298]: E0425 00:01:21.674753 3298 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bd5cc46d5-99qj9" Apr 25 00:01:21.676258 kubelet[3298]: E0425 00:01:21.676176 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bd5cc46d5-99qj9_calico-system(1e5e98eb-3874-4565-91da-8cf70bd88eeb)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-kube-controllers-7bd5cc46d5-99qj9_calico-system(1e5e98eb-3874-4565-91da-8cf70bd88eeb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bd5cc46d5-99qj9" podUID="1e5e98eb-3874-4565-91da-8cf70bd88eeb" Apr 25 00:01:21.680118 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d-shm.mount: Deactivated successfully. Apr 25 00:01:21.718727 containerd[1719]: time="2026-04-25T00:01:21.718684572Z" level=error msg="Failed to destroy network for sandbox \"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.719047 containerd[1719]: time="2026-04-25T00:01:21.719013276Z" level=error msg="encountered an error cleaning up failed sandbox \"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.719209 containerd[1719]: time="2026-04-25T00:01:21.719173778Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5689597f97-ghq8j,Uid:e1cd18da-eb8a-4963-8091-63c656860ed8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.719601 kubelet[3298]: E0425 00:01:21.719556 3298 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.719716 kubelet[3298]: E0425 00:01:21.719649 3298 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5689597f97-ghq8j" Apr 25 00:01:21.719716 kubelet[3298]: E0425 00:01:21.719680 3298 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5689597f97-ghq8j" Apr 25 00:01:21.719947 kubelet[3298]: E0425 00:01:21.719771 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5689597f97-ghq8j_calico-system(e1cd18da-eb8a-4963-8091-63c656860ed8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5689597f97-ghq8j_calico-system(e1cd18da-eb8a-4963-8091-63c656860ed8)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5689597f97-ghq8j" podUID="e1cd18da-eb8a-4963-8091-63c656860ed8" Apr 25 00:01:21.776142 containerd[1719]: time="2026-04-25T00:01:21.776026510Z" level=info msg="CreateContainer within sandbox \"01853a8454f61c1ccb435dee97537b12c61711c6188339aa2a6235b172414f95\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e11c77085110964c45d6371f47115da668c97cbe8b61329c6a19dfbc7a40015f\"" Apr 25 00:01:21.776913 containerd[1719]: time="2026-04-25T00:01:21.776847220Z" level=info msg="StartContainer for \"e11c77085110964c45d6371f47115da668c97cbe8b61329c6a19dfbc7a40015f\"" Apr 25 00:01:21.806855 systemd[1]: Started cri-containerd-e11c77085110964c45d6371f47115da668c97cbe8b61329c6a19dfbc7a40015f.scope - libcontainer container e11c77085110964c45d6371f47115da668c97cbe8b61329c6a19dfbc7a40015f. 
Apr 25 00:01:21.888843 containerd[1719]: time="2026-04-25T00:01:21.888779561Z" level=error msg="Failed to destroy network for sandbox \"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.889202 containerd[1719]: time="2026-04-25T00:01:21.889159266Z" level=error msg="encountered an error cleaning up failed sandbox \"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.889305 containerd[1719]: time="2026-04-25T00:01:21.889237367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c69f45f-dgnpd,Uid:6f9f367b-3d37-4263-8e44-7b68c95eb032,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.889778 kubelet[3298]: E0425 00:01:21.889702 3298 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.889899 kubelet[3298]: E0425 00:01:21.889794 3298 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-75c69f45f-dgnpd" Apr 25 00:01:21.889899 kubelet[3298]: E0425 00:01:21.889826 3298 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-75c69f45f-dgnpd" Apr 25 00:01:21.890521 kubelet[3298]: E0425 00:01:21.889901 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75c69f45f-dgnpd_calico-system(6f9f367b-3d37-4263-8e44-7b68c95eb032)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75c69f45f-dgnpd_calico-system(6f9f367b-3d37-4263-8e44-7b68c95eb032)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-75c69f45f-dgnpd" podUID="6f9f367b-3d37-4263-8e44-7b68c95eb032" Apr 25 00:01:21.920741 containerd[1719]: time="2026-04-25T00:01:21.920673972Z" level=info msg="StartContainer for \"e11c77085110964c45d6371f47115da668c97cbe8b61329c6a19dfbc7a40015f\" returns successfully" Apr 25 00:01:21.964019 containerd[1719]: time="2026-04-25T00:01:21.963946729Z" level=error msg="Failed to destroy network for 
sandbox \"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.964383 containerd[1719]: time="2026-04-25T00:01:21.964348134Z" level=error msg="encountered an error cleaning up failed sandbox \"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.964515 containerd[1719]: time="2026-04-25T00:01:21.964434835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-7ctl7,Uid:8bd4aa88-033e-41a0-bd2f-6042f3bf091b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.964719 kubelet[3298]: E0425 00:01:21.964679 3298 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 00:01:21.964811 kubelet[3298]: E0425 00:01:21.964749 3298 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-7ctl7" Apr 25 00:01:21.964811 kubelet[3298]: E0425 00:01:21.964779 3298 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-7ctl7" Apr 25 00:01:21.964898 kubelet[3298]: E0425 00:01:21.964840 3298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-7ctl7_calico-system(8bd4aa88-033e-41a0-bd2f-6042f3bf091b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-7ctl7_calico-system(8bd4aa88-033e-41a0-bd2f-6042f3bf091b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-7ctl7" podUID="8bd4aa88-033e-41a0-bd2f-6042f3bf091b" Apr 25 00:01:22.265291 kubelet[3298]: I0425 00:01:22.265252 3298 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Apr 25 00:01:22.266929 containerd[1719]: time="2026-04-25T00:01:22.266449223Z" level=info msg="StopPodSandbox for \"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\"" Apr 25 00:01:22.266929 containerd[1719]: time="2026-04-25T00:01:22.266651325Z" level=info msg="Ensure that 
sandbox d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d in task-service has been cleanup successfully" Apr 25 00:01:22.279354 kubelet[3298]: I0425 00:01:22.278335 3298 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Apr 25 00:01:22.281561 containerd[1719]: time="2026-04-25T00:01:22.281523917Z" level=info msg="StopPodSandbox for \"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\"" Apr 25 00:01:22.282086 containerd[1719]: time="2026-04-25T00:01:22.282059324Z" level=info msg="Ensure that sandbox 4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b in task-service has been cleanup successfully" Apr 25 00:01:22.296499 kubelet[3298]: I0425 00:01:22.296141 3298 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Apr 25 00:01:22.301609 kubelet[3298]: I0425 00:01:22.297698 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6qjvt" podStartSLOduration=15.652609845 podStartE2EDuration="1m16.297681125s" podCreationTimestamp="2026-04-25 00:00:06 +0000 UTC" firstStartedPulling="2026-04-25 00:00:07.128645376 +0000 UTC m=+18.242234954" lastFinishedPulling="2026-04-25 00:01:07.773716556 +0000 UTC m=+78.887306234" observedRunningTime="2026-04-25 00:01:22.295941202 +0000 UTC m=+93.409530880" watchObservedRunningTime="2026-04-25 00:01:22.297681125 +0000 UTC m=+93.411270703" Apr 25 00:01:22.301819 containerd[1719]: time="2026-04-25T00:01:22.299817552Z" level=info msg="StopPodSandbox for \"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\"" Apr 25 00:01:22.301819 containerd[1719]: time="2026-04-25T00:01:22.300102556Z" level=info msg="Ensure that sandbox 11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b in task-service has been cleanup successfully" Apr 25 
00:01:22.306277 kubelet[3298]: I0425 00:01:22.306254 3298 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Apr 25 00:01:22.307887 containerd[1719]: time="2026-04-25T00:01:22.307857856Z" level=info msg="StopPodSandbox for \"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\"" Apr 25 00:01:22.308169 containerd[1719]: time="2026-04-25T00:01:22.308145559Z" level=info msg="Ensure that sandbox bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb in task-service has been cleanup successfully" Apr 25 00:01:22.320147 kubelet[3298]: I0425 00:01:22.319553 3298 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Apr 25 00:01:22.320701 containerd[1719]: time="2026-04-25T00:01:22.320665921Z" level=info msg="StopPodSandbox for \"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\"" Apr 25 00:01:22.321006 containerd[1719]: time="2026-04-25T00:01:22.320979825Z" level=info msg="Ensure that sandbox 72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64 in task-service has been cleanup successfully" Apr 25 00:01:22.324420 kubelet[3298]: I0425 00:01:22.324380 3298 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Apr 25 00:01:22.326066 containerd[1719]: time="2026-04-25T00:01:22.326032390Z" level=info msg="StopPodSandbox for \"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\"" Apr 25 00:01:22.331354 containerd[1719]: time="2026-04-25T00:01:22.331320258Z" level=info msg="Ensure that sandbox 60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2 in task-service has been cleanup successfully" Apr 25 00:01:22.521501 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b-shm.mount: Deactivated successfully. Apr 25 00:01:22.523925 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64-shm.mount: Deactivated successfully. Apr 25 00:01:22.525550 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb-shm.mount: Deactivated successfully. Apr 25 00:01:22.665731 containerd[1719]: 2026-04-25 00:01:22.533 [INFO][4610] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Apr 25 00:01:22.665731 containerd[1719]: 2026-04-25 00:01:22.534 [INFO][4610] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" iface="eth0" netns="/var/run/netns/cni-a489f911-6713-901b-4311-1778fcead569" Apr 25 00:01:22.665731 containerd[1719]: 2026-04-25 00:01:22.542 [INFO][4610] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" iface="eth0" netns="/var/run/netns/cni-a489f911-6713-901b-4311-1778fcead569" Apr 25 00:01:22.665731 containerd[1719]: 2026-04-25 00:01:22.551 [INFO][4610] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" iface="eth0" netns="/var/run/netns/cni-a489f911-6713-901b-4311-1778fcead569" Apr 25 00:01:22.665731 containerd[1719]: 2026-04-25 00:01:22.551 [INFO][4610] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Apr 25 00:01:22.665731 containerd[1719]: 2026-04-25 00:01:22.551 [INFO][4610] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Apr 25 00:01:22.665731 containerd[1719]: 2026-04-25 00:01:22.633 [INFO][4706] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" HandleID="k8s-pod-network.d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" Apr 25 00:01:22.665731 containerd[1719]: 2026-04-25 00:01:22.633 [INFO][4706] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:22.665731 containerd[1719]: 2026-04-25 00:01:22.634 [INFO][4706] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:22.665731 containerd[1719]: 2026-04-25 00:01:22.646 [WARNING][4706] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" HandleID="k8s-pod-network.d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" Apr 25 00:01:22.665731 containerd[1719]: 2026-04-25 00:01:22.646 [INFO][4706] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" HandleID="k8s-pod-network.d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" Apr 25 00:01:22.665731 containerd[1719]: 2026-04-25 00:01:22.650 [INFO][4706] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:22.665731 containerd[1719]: 2026-04-25 00:01:22.657 [INFO][4610] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Apr 25 00:01:22.668332 containerd[1719]: time="2026-04-25T00:01:22.665973566Z" level=info msg="TearDown network for sandbox \"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\" successfully" Apr 25 00:01:22.668332 containerd[1719]: time="2026-04-25T00:01:22.666011166Z" level=info msg="StopPodSandbox for \"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\" returns successfully" Apr 25 00:01:22.675818 systemd[1]: run-netns-cni\x2da489f911\x2d6713\x2d901b\x2d4311\x2d1778fcead569.mount: Deactivated successfully. 
Apr 25 00:01:22.681830 containerd[1719]: time="2026-04-25T00:01:22.681754569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd5cc46d5-99qj9,Uid:1e5e98eb-3874-4565-91da-8cf70bd88eeb,Namespace:calico-system,Attempt:1,}" Apr 25 00:01:22.691208 containerd[1719]: 2026-04-25 00:01:22.532 [INFO][4618] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Apr 25 00:01:22.691208 containerd[1719]: 2026-04-25 00:01:22.532 [INFO][4618] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" iface="eth0" netns="/var/run/netns/cni-8742d756-cf9b-2658-5e97-d27649b33a40" Apr 25 00:01:22.691208 containerd[1719]: 2026-04-25 00:01:22.533 [INFO][4618] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" iface="eth0" netns="/var/run/netns/cni-8742d756-cf9b-2658-5e97-d27649b33a40" Apr 25 00:01:22.691208 containerd[1719]: 2026-04-25 00:01:22.540 [INFO][4618] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" iface="eth0" netns="/var/run/netns/cni-8742d756-cf9b-2658-5e97-d27649b33a40" Apr 25 00:01:22.691208 containerd[1719]: 2026-04-25 00:01:22.540 [INFO][4618] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Apr 25 00:01:22.691208 containerd[1719]: 2026-04-25 00:01:22.541 [INFO][4618] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Apr 25 00:01:22.691208 containerd[1719]: 2026-04-25 00:01:22.659 [INFO][4701] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" HandleID="k8s-pod-network.4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Workload="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" Apr 25 00:01:22.691208 containerd[1719]: 2026-04-25 00:01:22.659 [INFO][4701] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:22.691208 containerd[1719]: 2026-04-25 00:01:22.659 [INFO][4701] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:22.691208 containerd[1719]: 2026-04-25 00:01:22.680 [WARNING][4701] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" HandleID="k8s-pod-network.4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Workload="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" Apr 25 00:01:22.691208 containerd[1719]: 2026-04-25 00:01:22.680 [INFO][4701] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" HandleID="k8s-pod-network.4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Workload="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" Apr 25 00:01:22.691208 containerd[1719]: 2026-04-25 00:01:22.682 [INFO][4701] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:22.691208 containerd[1719]: 2026-04-25 00:01:22.686 [INFO][4618] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Apr 25 00:01:22.695883 containerd[1719]: time="2026-04-25T00:01:22.695844450Z" level=info msg="TearDown network for sandbox \"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\" successfully" Apr 25 00:01:22.696021 containerd[1719]: time="2026-04-25T00:01:22.695998452Z" level=info msg="StopPodSandbox for \"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\" returns successfully" Apr 25 00:01:22.698575 systemd[1]: run-netns-cni\x2d8742d756\x2dcf9b\x2d2658\x2d5e97\x2dd27649b33a40.mount: Deactivated successfully. Apr 25 00:01:22.703101 containerd[1719]: 2026-04-25 00:01:22.593 [INFO][4655] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Apr 25 00:01:22.703101 containerd[1719]: 2026-04-25 00:01:22.593 [INFO][4655] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" iface="eth0" netns="/var/run/netns/cni-64aac210-696b-b5b4-ac89-712856aa9382" Apr 25 00:01:22.703101 containerd[1719]: 2026-04-25 00:01:22.593 [INFO][4655] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" iface="eth0" netns="/var/run/netns/cni-64aac210-696b-b5b4-ac89-712856aa9382" Apr 25 00:01:22.703101 containerd[1719]: 2026-04-25 00:01:22.598 [INFO][4655] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" iface="eth0" netns="/var/run/netns/cni-64aac210-696b-b5b4-ac89-712856aa9382" Apr 25 00:01:22.703101 containerd[1719]: 2026-04-25 00:01:22.598 [INFO][4655] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Apr 25 00:01:22.703101 containerd[1719]: 2026-04-25 00:01:22.598 [INFO][4655] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Apr 25 00:01:22.703101 containerd[1719]: 2026-04-25 00:01:22.669 [INFO][4719] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" HandleID="k8s-pod-network.bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Workload="ci--4081.3.6--n--3087b9d021-k8s-whisker--5689597f97--ghq8j-eth0" Apr 25 00:01:22.703101 containerd[1719]: 2026-04-25 00:01:22.670 [INFO][4719] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:22.703101 containerd[1719]: 2026-04-25 00:01:22.682 [INFO][4719] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:01:22.703101 containerd[1719]: 2026-04-25 00:01:22.692 [WARNING][4719] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" HandleID="k8s-pod-network.bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Workload="ci--4081.3.6--n--3087b9d021-k8s-whisker--5689597f97--ghq8j-eth0" Apr 25 00:01:22.703101 containerd[1719]: 2026-04-25 00:01:22.692 [INFO][4719] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" HandleID="k8s-pod-network.bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Workload="ci--4081.3.6--n--3087b9d021-k8s-whisker--5689597f97--ghq8j-eth0" Apr 25 00:01:22.703101 containerd[1719]: 2026-04-25 00:01:22.693 [INFO][4719] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:22.703101 containerd[1719]: 2026-04-25 00:01:22.696 [INFO][4655] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Apr 25 00:01:22.707937 containerd[1719]: time="2026-04-25T00:01:22.707048394Z" level=info msg="TearDown network for sandbox \"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\" successfully" Apr 25 00:01:22.707937 containerd[1719]: time="2026-04-25T00:01:22.707075395Z" level=info msg="StopPodSandbox for \"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\" returns successfully" Apr 25 00:01:22.712493 systemd[1]: run-netns-cni\x2d64aac210\x2d696b\x2db5b4\x2dac89\x2d712856aa9382.mount: Deactivated successfully. 
Apr 25 00:01:22.718606 containerd[1719]: time="2026-04-25T00:01:22.717206225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-7ctl7,Uid:8bd4aa88-033e-41a0-bd2f-6042f3bf091b,Namespace:calico-system,Attempt:1,}" Apr 25 00:01:22.720195 containerd[1719]: 2026-04-25 00:01:22.563 [INFO][4670] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Apr 25 00:01:22.720195 containerd[1719]: 2026-04-25 00:01:22.563 [INFO][4670] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" iface="eth0" netns="/var/run/netns/cni-756456fd-cdfd-62ae-4bd3-993df7e19602" Apr 25 00:01:22.720195 containerd[1719]: 2026-04-25 00:01:22.565 [INFO][4670] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" iface="eth0" netns="/var/run/netns/cni-756456fd-cdfd-62ae-4bd3-993df7e19602" Apr 25 00:01:22.720195 containerd[1719]: 2026-04-25 00:01:22.565 [INFO][4670] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" iface="eth0" netns="/var/run/netns/cni-756456fd-cdfd-62ae-4bd3-993df7e19602" Apr 25 00:01:22.720195 containerd[1719]: 2026-04-25 00:01:22.565 [INFO][4670] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Apr 25 00:01:22.720195 containerd[1719]: 2026-04-25 00:01:22.565 [INFO][4670] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Apr 25 00:01:22.720195 containerd[1719]: 2026-04-25 00:01:22.679 [INFO][4712] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" HandleID="k8s-pod-network.72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" Apr 25 00:01:22.720195 containerd[1719]: 2026-04-25 00:01:22.679 [INFO][4712] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:22.720195 containerd[1719]: 2026-04-25 00:01:22.695 [INFO][4712] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:22.720195 containerd[1719]: 2026-04-25 00:01:22.706 [WARNING][4712] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" HandleID="k8s-pod-network.72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" Apr 25 00:01:22.720195 containerd[1719]: 2026-04-25 00:01:22.706 [INFO][4712] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" HandleID="k8s-pod-network.72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" Apr 25 00:01:22.720195 containerd[1719]: 2026-04-25 00:01:22.708 [INFO][4712] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:22.720195 containerd[1719]: 2026-04-25 00:01:22.712 [INFO][4670] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Apr 25 00:01:22.723637 containerd[1719]: time="2026-04-25T00:01:22.721280677Z" level=info msg="TearDown network for sandbox \"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\" successfully" Apr 25 00:01:22.723637 containerd[1719]: time="2026-04-25T00:01:22.721308278Z" level=info msg="StopPodSandbox for \"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\" returns successfully" Apr 25 00:01:22.728805 systemd[1]: run-netns-cni\x2d756456fd\x2dcdfd\x2d62ae\x2d4bd3\x2d993df7e19602.mount: Deactivated successfully. Apr 25 00:01:22.742465 containerd[1719]: 2026-04-25 00:01:22.594 [INFO][4652] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Apr 25 00:01:22.742465 containerd[1719]: 2026-04-25 00:01:22.594 [INFO][4652] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" iface="eth0" netns="/var/run/netns/cni-09c1a128-0f67-4097-1590-808a809f1c1e" Apr 25 00:01:22.742465 containerd[1719]: 2026-04-25 00:01:22.594 [INFO][4652] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" iface="eth0" netns="/var/run/netns/cni-09c1a128-0f67-4097-1590-808a809f1c1e" Apr 25 00:01:22.742465 containerd[1719]: 2026-04-25 00:01:22.600 [INFO][4652] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" iface="eth0" netns="/var/run/netns/cni-09c1a128-0f67-4097-1590-808a809f1c1e" Apr 25 00:01:22.742465 containerd[1719]: 2026-04-25 00:01:22.600 [INFO][4652] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Apr 25 00:01:22.742465 containerd[1719]: 2026-04-25 00:01:22.600 [INFO][4652] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Apr 25 00:01:22.742465 containerd[1719]: 2026-04-25 00:01:22.726 [INFO][4721] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" HandleID="k8s-pod-network.11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" Apr 25 00:01:22.742465 containerd[1719]: 2026-04-25 00:01:22.727 [INFO][4721] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:22.742465 containerd[1719]: 2026-04-25 00:01:22.727 [INFO][4721] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:01:22.742465 containerd[1719]: 2026-04-25 00:01:22.737 [WARNING][4721] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" HandleID="k8s-pod-network.11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" Apr 25 00:01:22.742465 containerd[1719]: 2026-04-25 00:01:22.737 [INFO][4721] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" HandleID="k8s-pod-network.11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" Apr 25 00:01:22.742465 containerd[1719]: 2026-04-25 00:01:22.738 [INFO][4721] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:22.742465 containerd[1719]: 2026-04-25 00:01:22.740 [INFO][4652] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Apr 25 00:01:22.744714 containerd[1719]: time="2026-04-25T00:01:22.744686679Z" level=info msg="TearDown network for sandbox \"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\" successfully" Apr 25 00:01:22.744916 containerd[1719]: time="2026-04-25T00:01:22.744896081Z" level=info msg="StopPodSandbox for \"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\" returns successfully" Apr 25 00:01:22.750253 containerd[1719]: 2026-04-25 00:01:22.587 [INFO][4671] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Apr 25 00:01:22.750253 containerd[1719]: 2026-04-25 00:01:22.588 [INFO][4671] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" iface="eth0" netns="/var/run/netns/cni-71e818fe-a977-1690-714f-29cc4f87d3d4" Apr 25 00:01:22.750253 containerd[1719]: 2026-04-25 00:01:22.589 [INFO][4671] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" iface="eth0" netns="/var/run/netns/cni-71e818fe-a977-1690-714f-29cc4f87d3d4" Apr 25 00:01:22.750253 containerd[1719]: 2026-04-25 00:01:22.590 [INFO][4671] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" iface="eth0" netns="/var/run/netns/cni-71e818fe-a977-1690-714f-29cc4f87d3d4" Apr 25 00:01:22.750253 containerd[1719]: 2026-04-25 00:01:22.590 [INFO][4671] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Apr 25 00:01:22.750253 containerd[1719]: 2026-04-25 00:01:22.590 [INFO][4671] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Apr 25 00:01:22.750253 containerd[1719]: 2026-04-25 00:01:22.735 [INFO][4717] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" HandleID="k8s-pod-network.60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" Apr 25 00:01:22.750253 containerd[1719]: 2026-04-25 00:01:22.735 [INFO][4717] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:22.750253 containerd[1719]: 2026-04-25 00:01:22.738 [INFO][4717] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:01:22.750253 containerd[1719]: 2026-04-25 00:01:22.746 [WARNING][4717] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" HandleID="k8s-pod-network.60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" Apr 25 00:01:22.750253 containerd[1719]: 2026-04-25 00:01:22.746 [INFO][4717] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" HandleID="k8s-pod-network.60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" Apr 25 00:01:22.750253 containerd[1719]: 2026-04-25 00:01:22.747 [INFO][4717] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:22.750253 containerd[1719]: 2026-04-25 00:01:22.748 [INFO][4671] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Apr 25 00:01:22.750827 containerd[1719]: time="2026-04-25T00:01:22.750355752Z" level=info msg="TearDown network for sandbox \"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\" successfully" Apr 25 00:01:22.750827 containerd[1719]: time="2026-04-25T00:01:22.750377452Z" level=info msg="StopPodSandbox for \"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\" returns successfully" Apr 25 00:01:22.763288 containerd[1719]: time="2026-04-25T00:01:22.763060515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c69f45f-dgnpd,Uid:6f9f367b-3d37-4263-8e44-7b68c95eb032,Namespace:calico-system,Attempt:1,}" Apr 25 00:01:22.993153 kubelet[3298]: I0425 00:01:22.992358 3298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1cd18da-eb8a-4963-8091-63c656860ed8-whisker-ca-bundle\") pod \"e1cd18da-eb8a-4963-8091-63c656860ed8\" (UID: \"e1cd18da-eb8a-4963-8091-63c656860ed8\") " Apr 25 00:01:22.993153 kubelet[3298]: I0425 00:01:22.992416 3298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1cd18da-eb8a-4963-8091-63c656860ed8-whisker-backend-key-pair\") pod \"e1cd18da-eb8a-4963-8091-63c656860ed8\" (UID: \"e1cd18da-eb8a-4963-8091-63c656860ed8\") " Apr 25 00:01:22.993153 kubelet[3298]: I0425 00:01:22.992448 3298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e1cd18da-eb8a-4963-8091-63c656860ed8-nginx-config\") pod \"e1cd18da-eb8a-4963-8091-63c656860ed8\" (UID: \"e1cd18da-eb8a-4963-8091-63c656860ed8\") " Apr 25 00:01:22.993153 kubelet[3298]: I0425 00:01:22.992479 3298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4skwn\" (UniqueName: 
\"kubernetes.io/projected/e1cd18da-eb8a-4963-8091-63c656860ed8-kube-api-access-4skwn\") pod \"e1cd18da-eb8a-4963-8091-63c656860ed8\" (UID: \"e1cd18da-eb8a-4963-8091-63c656860ed8\") " Apr 25 00:01:22.993153 kubelet[3298]: I0425 00:01:22.992847 3298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1cd18da-eb8a-4963-8091-63c656860ed8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e1cd18da-eb8a-4963-8091-63c656860ed8" (UID: "e1cd18da-eb8a-4963-8091-63c656860ed8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 25 00:01:22.995876 kubelet[3298]: I0425 00:01:22.995840 3298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1cd18da-eb8a-4963-8091-63c656860ed8-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "e1cd18da-eb8a-4963-8091-63c656860ed8" (UID: "e1cd18da-eb8a-4963-8091-63c656860ed8"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 25 00:01:22.996135 kubelet[3298]: I0425 00:01:22.996101 3298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1cd18da-eb8a-4963-8091-63c656860ed8-kube-api-access-4skwn" (OuterVolumeSpecName: "kube-api-access-4skwn") pod "e1cd18da-eb8a-4963-8091-63c656860ed8" (UID: "e1cd18da-eb8a-4963-8091-63c656860ed8"). InnerVolumeSpecName "kube-api-access-4skwn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 25 00:01:22.997191 kubelet[3298]: I0425 00:01:22.997156 3298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1cd18da-eb8a-4963-8091-63c656860ed8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e1cd18da-eb8a-4963-8091-63c656860ed8" (UID: "e1cd18da-eb8a-4963-8091-63c656860ed8"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 25 00:01:23.011744 containerd[1719]: time="2026-04-25T00:01:23.011704816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c69f45f-gpm6f,Uid:6be2f8f5-c613-46db-be5c-7773acd8f189,Namespace:calico-system,Attempt:1,}" Apr 25 00:01:23.063801 containerd[1719]: time="2026-04-25T00:01:23.063744386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xrrd8,Uid:475a688c-c33d-44aa-8f1e-2084fa8ba675,Namespace:kube-system,Attempt:1,}" Apr 25 00:01:23.093133 kubelet[3298]: I0425 00:01:23.093085 3298 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1cd18da-eb8a-4963-8091-63c656860ed8-whisker-ca-bundle\") on node \"ci-4081.3.6-n-3087b9d021\" DevicePath \"\"" Apr 25 00:01:23.093133 kubelet[3298]: I0425 00:01:23.093124 3298 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1cd18da-eb8a-4963-8091-63c656860ed8-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-3087b9d021\" DevicePath \"\"" Apr 25 00:01:23.093133 kubelet[3298]: I0425 00:01:23.093138 3298 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e1cd18da-eb8a-4963-8091-63c656860ed8-nginx-config\") on node \"ci-4081.3.6-n-3087b9d021\" DevicePath \"\"" Apr 25 00:01:23.093335 kubelet[3298]: I0425 00:01:23.093149 3298 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4skwn\" (UniqueName: \"kubernetes.io/projected/e1cd18da-eb8a-4963-8091-63c656860ed8-kube-api-access-4skwn\") on node \"ci-4081.3.6-n-3087b9d021\" DevicePath \"\"" Apr 25 00:01:23.343853 systemd[1]: Removed slice kubepods-besteffort-pode1cd18da_eb8a_4963_8091_63c656860ed8.slice - libcontainer container kubepods-besteffort-pode1cd18da_eb8a_4963_8091_63c656860ed8.slice. 
Apr 25 00:01:23.513385 systemd[1]: run-netns-cni\x2d71e818fe\x2da977\x2d1690\x2d714f\x2d29cc4f87d3d4.mount: Deactivated successfully. Apr 25 00:01:23.513545 systemd[1]: run-netns-cni\x2d09c1a128\x2d0f67\x2d4097\x2d1590\x2d808a809f1c1e.mount: Deactivated successfully. Apr 25 00:01:23.513634 systemd[1]: var-lib-kubelet-pods-e1cd18da\x2deb8a\x2d4963\x2d8091\x2d63c656860ed8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4skwn.mount: Deactivated successfully. Apr 25 00:01:23.513714 systemd[1]: var-lib-kubelet-pods-e1cd18da\x2deb8a\x2d4963\x2d8091\x2d63c656860ed8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 25 00:01:23.740432 systemd[1]: Created slice kubepods-besteffort-pode66b759e_33e8_402e_b855_e9ae47834fcb.slice - libcontainer container kubepods-besteffort-pode66b759e_33e8_402e_b855_e9ae47834fcb.slice. Apr 25 00:01:23.796489 kubelet[3298]: I0425 00:01:23.796240 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e66b759e-33e8-402e-b855-e9ae47834fcb-whisker-backend-key-pair\") pod \"whisker-6d654f4d94-dm7nx\" (UID: \"e66b759e-33e8-402e-b855-e9ae47834fcb\") " pod="calico-system/whisker-6d654f4d94-dm7nx" Apr 25 00:01:23.796489 kubelet[3298]: I0425 00:01:23.796285 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e66b759e-33e8-402e-b855-e9ae47834fcb-nginx-config\") pod \"whisker-6d654f4d94-dm7nx\" (UID: \"e66b759e-33e8-402e-b855-e9ae47834fcb\") " pod="calico-system/whisker-6d654f4d94-dm7nx" Apr 25 00:01:23.796489 kubelet[3298]: I0425 00:01:23.796315 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e66b759e-33e8-402e-b855-e9ae47834fcb-whisker-ca-bundle\") pod 
\"whisker-6d654f4d94-dm7nx\" (UID: \"e66b759e-33e8-402e-b855-e9ae47834fcb\") " pod="calico-system/whisker-6d654f4d94-dm7nx" Apr 25 00:01:23.796489 kubelet[3298]: I0425 00:01:23.796371 3298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cbhv\" (UniqueName: \"kubernetes.io/projected/e66b759e-33e8-402e-b855-e9ae47834fcb-kube-api-access-5cbhv\") pod \"whisker-6d654f4d94-dm7nx\" (UID: \"e66b759e-33e8-402e-b855-e9ae47834fcb\") " pod="calico-system/whisker-6d654f4d94-dm7nx" Apr 25 00:01:23.837525 kernel: calico-node[4838]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 25 00:01:24.168005 containerd[1719]: time="2026-04-25T00:01:24.167738493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d654f4d94-dm7nx,Uid:e66b759e-33e8-402e-b855-e9ae47834fcb,Namespace:calico-system,Attempt:0,}" Apr 25 00:01:24.840509 systemd-networkd[1354]: vxlan.calico: Link UP Apr 25 00:01:24.840519 systemd-networkd[1354]: vxlan.calico: Gained carrier Apr 25 00:01:24.989244 kubelet[3298]: I0425 00:01:24.989196 3298 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1cd18da-eb8a-4963-8091-63c656860ed8" path="/var/lib/kubelet/pods/e1cd18da-eb8a-4963-8091-63c656860ed8/volumes" Apr 25 00:01:25.368916 systemd-networkd[1354]: cali332dd8f895c: Link UP Apr 25 00:01:25.369183 systemd-networkd[1354]: cali332dd8f895c: Gained carrier Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.186 [INFO][4935] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0 calico-kube-controllers-7bd5cc46d5- calico-system 1e5e98eb-3874-4565-91da-8cf70bd88eeb 993 0 2026-04-25 00:00:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bd5cc46d5 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-3087b9d021 calico-kube-controllers-7bd5cc46d5-99qj9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali332dd8f895c [] [] }} ContainerID="42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" Namespace="calico-system" Pod="calico-kube-controllers-7bd5cc46d5-99qj9" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-" Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.186 [INFO][4935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" Namespace="calico-system" Pod="calico-kube-controllers-7bd5cc46d5-99qj9" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.245 [INFO][4960] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" HandleID="k8s-pod-network.42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.253 [INFO][4960] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" HandleID="k8s-pod-network.42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef510), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-3087b9d021", "pod":"calico-kube-controllers-7bd5cc46d5-99qj9", "timestamp":"2026-04-25 00:01:25.245568347 
+0000 UTC"}, Hostname:"ci-4081.3.6-n-3087b9d021", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003b4f20)} Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.253 [INFO][4960] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.253 [INFO][4960] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.253 [INFO][4960] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-3087b9d021' Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.257 [INFO][4960] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.263 [INFO][4960] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.289 [INFO][4960] ipam/ipam.go 526: Trying affinity for 192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.292 [INFO][4960] ipam/ipam.go 160: Attempting to load block cidr=192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.295 [INFO][4960] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.295 [INFO][4960] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" host="ci-4081.3.6-n-3087b9d021" Apr 25 
00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.298 [INFO][4960] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189 Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.303 [INFO][4960] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.33.64/26 handle="k8s-pod-network.42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.314 [INFO][4960] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.33.65/26] block=192.168.33.64/26 handle="k8s-pod-network.42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.358 [INFO][4960] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.33.65/26] handle="k8s-pod-network.42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.358 [INFO][4960] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 25 00:01:25.398975 containerd[1719]: 2026-04-25 00:01:25.358 [INFO][4960] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.33.65/26] IPv6=[] ContainerID="42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" HandleID="k8s-pod-network.42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" Apr 25 00:01:25.401915 containerd[1719]: 2026-04-25 00:01:25.361 [INFO][4935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" Namespace="calico-system" Pod="calico-kube-controllers-7bd5cc46d5-99qj9" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0", GenerateName:"calico-kube-controllers-7bd5cc46d5-", Namespace:"calico-system", SelfLink:"", UID:"1e5e98eb-3874-4565-91da-8cf70bd88eeb", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bd5cc46d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"", Pod:"calico-kube-controllers-7bd5cc46d5-99qj9", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali332dd8f895c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:25.401915 containerd[1719]: 2026-04-25 00:01:25.361 [INFO][4935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.65/32] ContainerID="42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" Namespace="calico-system" Pod="calico-kube-controllers-7bd5cc46d5-99qj9" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" Apr 25 00:01:25.401915 containerd[1719]: 2026-04-25 00:01:25.361 [INFO][4935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali332dd8f895c ContainerID="42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" Namespace="calico-system" Pod="calico-kube-controllers-7bd5cc46d5-99qj9" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" Apr 25 00:01:25.401915 containerd[1719]: 2026-04-25 00:01:25.368 [INFO][4935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" Namespace="calico-system" Pod="calico-kube-controllers-7bd5cc46d5-99qj9" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" Apr 25 00:01:25.401915 containerd[1719]: 2026-04-25 00:01:25.368 [INFO][4935] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" Namespace="calico-system" Pod="calico-kube-controllers-7bd5cc46d5-99qj9" 
WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0", GenerateName:"calico-kube-controllers-7bd5cc46d5-", Namespace:"calico-system", SelfLink:"", UID:"1e5e98eb-3874-4565-91da-8cf70bd88eeb", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bd5cc46d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189", Pod:"calico-kube-controllers-7bd5cc46d5-99qj9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali332dd8f895c", MAC:"ee:22:13:0e:c4:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:25.401915 containerd[1719]: 2026-04-25 00:01:25.396 [INFO][4935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189" Namespace="calico-system" 
Pod="calico-kube-controllers-7bd5cc46d5-99qj9" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" Apr 25 00:01:25.437572 systemd-networkd[1354]: cali65bb6a73d1f: Link UP Apr 25 00:01:25.437850 systemd-networkd[1354]: cali65bb6a73d1f: Gained carrier Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.210 [INFO][4945] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0 calico-apiserver-75c69f45f- calico-system 6f9f367b-3d37-4263-8e44-7b68c95eb032 996 0 2026-04-25 00:00:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75c69f45f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-3087b9d021 calico-apiserver-75c69f45f-dgnpd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali65bb6a73d1f [] [] }} ContainerID="5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" Namespace="calico-system" Pod="calico-apiserver-75c69f45f-dgnpd" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-" Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.211 [INFO][4945] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" Namespace="calico-system" Pod="calico-apiserver-75c69f45f-dgnpd" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.260 [INFO][4968] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" 
HandleID="k8s-pod-network.5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.276 [INFO][4968] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" HandleID="k8s-pod-network.5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-3087b9d021", "pod":"calico-apiserver-75c69f45f-dgnpd", "timestamp":"2026-04-25 00:01:25.260925344 +0000 UTC"}, Hostname:"ci-4081.3.6-n-3087b9d021", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001891e0)} Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.276 [INFO][4968] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.359 [INFO][4968] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.359 [INFO][4968] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-3087b9d021' Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.363 [INFO][4968] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.382 [INFO][4968] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.404 [INFO][4968] ipam/ipam.go 526: Trying affinity for 192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.406 [INFO][4968] ipam/ipam.go 160: Attempting to load block cidr=192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.408 [INFO][4968] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.408 [INFO][4968] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.410 [INFO][4968] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9 Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.415 [INFO][4968] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.33.64/26 handle="k8s-pod-network.5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.427 [INFO][4968] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.33.66/26] block=192.168.33.64/26 handle="k8s-pod-network.5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.427 [INFO][4968] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.33.66/26] handle="k8s-pod-network.5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.427 [INFO][4968] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:25.458860 containerd[1719]: 2026-04-25 00:01:25.427 [INFO][4968] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.33.66/26] IPv6=[] ContainerID="5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" HandleID="k8s-pod-network.5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" Apr 25 00:01:25.459860 containerd[1719]: 2026-04-25 00:01:25.429 [INFO][4945] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" Namespace="calico-system" Pod="calico-apiserver-75c69f45f-dgnpd" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0", GenerateName:"calico-apiserver-75c69f45f-", Namespace:"calico-system", SelfLink:"", UID:"6f9f367b-3d37-4263-8e44-7b68c95eb032", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"75c69f45f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"", Pod:"calico-apiserver-75c69f45f-dgnpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali65bb6a73d1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:25.459860 containerd[1719]: 2026-04-25 00:01:25.430 [INFO][4945] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.66/32] ContainerID="5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" Namespace="calico-system" Pod="calico-apiserver-75c69f45f-dgnpd" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" Apr 25 00:01:25.459860 containerd[1719]: 2026-04-25 00:01:25.430 [INFO][4945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65bb6a73d1f ContainerID="5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" Namespace="calico-system" Pod="calico-apiserver-75c69f45f-dgnpd" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" Apr 25 00:01:25.459860 containerd[1719]: 2026-04-25 00:01:25.432 [INFO][4945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" Namespace="calico-system" Pod="calico-apiserver-75c69f45f-dgnpd" 
WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" Apr 25 00:01:25.459860 containerd[1719]: 2026-04-25 00:01:25.432 [INFO][4945] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" Namespace="calico-system" Pod="calico-apiserver-75c69f45f-dgnpd" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0", GenerateName:"calico-apiserver-75c69f45f-", Namespace:"calico-system", SelfLink:"", UID:"6f9f367b-3d37-4263-8e44-7b68c95eb032", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75c69f45f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9", Pod:"calico-apiserver-75c69f45f-dgnpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali65bb6a73d1f", MAC:"1e:5b:d4:ed:43:c2", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:25.459860 containerd[1719]: 2026-04-25 00:01:25.454 [INFO][4945] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9" Namespace="calico-system" Pod="calico-apiserver-75c69f45f-dgnpd" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" Apr 25 00:01:25.535997 systemd-networkd[1354]: calia340b6e8035: Link UP Apr 25 00:01:25.537488 systemd-networkd[1354]: calia340b6e8035: Gained carrier Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.323 [INFO][4985] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0 goldmane-cccfbd5cf- calico-system 8bd4aa88-033e-41a0-bd2f-6042f3bf091b 994 0 2026-04-25 00:00:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-3087b9d021 goldmane-cccfbd5cf-7ctl7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia340b6e8035 [] [] }} ContainerID="2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7ctl7" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-" Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.358 [INFO][4985] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7ctl7" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.428 [INFO][5001] ipam/ipam_plugin.go 235: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" HandleID="k8s-pod-network.2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" Workload="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.446 [INFO][5001] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" HandleID="k8s-pod-network.2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" Workload="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277a60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-3087b9d021", "pod":"goldmane-cccfbd5cf-7ctl7", "timestamp":"2026-04-25 00:01:25.428017592 +0000 UTC"}, Hostname:"ci-4081.3.6-n-3087b9d021", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003ab080)} Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.447 [INFO][5001] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.447 [INFO][5001] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.447 [INFO][5001] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-3087b9d021' Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.464 [INFO][5001] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.482 [INFO][5001] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.501 [INFO][5001] ipam/ipam.go 526: Trying affinity for 192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.503 [INFO][5001] ipam/ipam.go 160: Attempting to load block cidr=192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.505 [INFO][5001] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.505 [INFO][5001] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.507 [INFO][5001] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9 Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.512 [INFO][5001] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.33.64/26 handle="k8s-pod-network.2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.521 [INFO][5001] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.33.67/26] block=192.168.33.64/26 handle="k8s-pod-network.2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.521 [INFO][5001] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.33.67/26] handle="k8s-pod-network.2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.521 [INFO][5001] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:25.568801 containerd[1719]: 2026-04-25 00:01:25.521 [INFO][5001] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.33.67/26] IPv6=[] ContainerID="2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" HandleID="k8s-pod-network.2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" Workload="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" Apr 25 00:01:25.570282 containerd[1719]: 2026-04-25 00:01:25.527 [INFO][4985] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7ctl7" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8bd4aa88-033e-41a0-bd2f-6042f3bf091b", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"", Pod:"goldmane-cccfbd5cf-7ctl7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia340b6e8035", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:25.570282 containerd[1719]: 2026-04-25 00:01:25.527 [INFO][4985] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.67/32] ContainerID="2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7ctl7" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" Apr 25 00:01:25.570282 containerd[1719]: 2026-04-25 00:01:25.527 [INFO][4985] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia340b6e8035 ContainerID="2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7ctl7" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" Apr 25 00:01:25.570282 containerd[1719]: 2026-04-25 00:01:25.538 [INFO][4985] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7ctl7" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" Apr 25 00:01:25.570282 containerd[1719]: 2026-04-25 00:01:25.539 [INFO][4985] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7ctl7" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8bd4aa88-033e-41a0-bd2f-6042f3bf091b", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9", Pod:"goldmane-cccfbd5cf-7ctl7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia340b6e8035", MAC:"96:75:f9:06:78:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:25.570282 containerd[1719]: 2026-04-25 00:01:25.563 [INFO][4985] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-7ctl7" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" Apr 25 00:01:25.644495 systemd-networkd[1354]: cali7637deaf6c6: Link UP Apr 25 00:01:25.644979 systemd-networkd[1354]: cali7637deaf6c6: Gained carrier Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.575 [INFO][5019] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0 calico-apiserver-75c69f45f- calico-system 6be2f8f5-c613-46db-be5c-7773acd8f189 999 0 2026-04-25 00:00:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75c69f45f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-3087b9d021 calico-apiserver-75c69f45f-gpm6f eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali7637deaf6c6 [] [] }} ContainerID="0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" Namespace="calico-system" Pod="calico-apiserver-75c69f45f-gpm6f" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-" Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.575 [INFO][5019] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" Namespace="calico-system" Pod="calico-apiserver-75c69f45f-gpm6f" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.598 [INFO][5038] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" 
HandleID="k8s-pod-network.0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.607 [INFO][5038] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" HandleID="k8s-pod-network.0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef830), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-3087b9d021", "pod":"calico-apiserver-75c69f45f-gpm6f", "timestamp":"2026-04-25 00:01:25.598370481 +0000 UTC"}, Hostname:"ci-4081.3.6-n-3087b9d021", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003b7080)} Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.607 [INFO][5038] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.607 [INFO][5038] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.607 [INFO][5038] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-3087b9d021' Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.609 [INFO][5038] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.613 [INFO][5038] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.618 [INFO][5038] ipam/ipam.go 526: Trying affinity for 192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.620 [INFO][5038] ipam/ipam.go 160: Attempting to load block cidr=192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.623 [INFO][5038] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.623 [INFO][5038] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.625 [INFO][5038] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9 Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.632 [INFO][5038] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.33.64/26 handle="k8s-pod-network.0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.638 [INFO][5038] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.33.68/26] block=192.168.33.64/26 handle="k8s-pod-network.0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.638 [INFO][5038] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.33.68/26] handle="k8s-pod-network.0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.638 [INFO][5038] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:25.667828 containerd[1719]: 2026-04-25 00:01:25.638 [INFO][5038] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.33.68/26] IPv6=[] ContainerID="0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" HandleID="k8s-pod-network.0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" Apr 25 00:01:25.671531 containerd[1719]: 2026-04-25 00:01:25.640 [INFO][5019] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" Namespace="calico-system" Pod="calico-apiserver-75c69f45f-gpm6f" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0", GenerateName:"calico-apiserver-75c69f45f-", Namespace:"calico-system", SelfLink:"", UID:"6be2f8f5-c613-46db-be5c-7773acd8f189", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"75c69f45f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"", Pod:"calico-apiserver-75c69f45f-gpm6f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7637deaf6c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:25.671531 containerd[1719]: 2026-04-25 00:01:25.640 [INFO][5019] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.68/32] ContainerID="0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" Namespace="calico-system" Pod="calico-apiserver-75c69f45f-gpm6f" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" Apr 25 00:01:25.671531 containerd[1719]: 2026-04-25 00:01:25.640 [INFO][5019] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7637deaf6c6 ContainerID="0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" Namespace="calico-system" Pod="calico-apiserver-75c69f45f-gpm6f" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" Apr 25 00:01:25.671531 containerd[1719]: 2026-04-25 00:01:25.645 [INFO][5019] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" Namespace="calico-system" Pod="calico-apiserver-75c69f45f-gpm6f" 
WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" Apr 25 00:01:25.671531 containerd[1719]: 2026-04-25 00:01:25.646 [INFO][5019] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" Namespace="calico-system" Pod="calico-apiserver-75c69f45f-gpm6f" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0", GenerateName:"calico-apiserver-75c69f45f-", Namespace:"calico-system", SelfLink:"", UID:"6be2f8f5-c613-46db-be5c-7773acd8f189", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75c69f45f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9", Pod:"calico-apiserver-75c69f45f-gpm6f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7637deaf6c6", MAC:"a2:0c:ec:4d:33:56", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:25.671531 containerd[1719]: 2026-04-25 00:01:25.658 [INFO][5019] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9" Namespace="calico-system" Pod="calico-apiserver-75c69f45f-gpm6f" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" Apr 25 00:01:25.981672 systemd-networkd[1354]: vxlan.calico: Gained IPv6LL Apr 25 00:01:26.182189 containerd[1719]: time="2026-04-25T00:01:26.181897981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:01:26.182189 containerd[1719]: time="2026-04-25T00:01:26.181973482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:01:26.182189 containerd[1719]: time="2026-04-25T00:01:26.181994182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:26.182465 containerd[1719]: time="2026-04-25T00:01:26.182071583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:26.222565 systemd[1]: Started cri-containerd-42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189.scope - libcontainer container 42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189. 
Apr 25 00:01:26.259667 containerd[1719]: time="2026-04-25T00:01:26.259581280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd5cc46d5-99qj9,Uid:1e5e98eb-3874-4565-91da-8cf70bd88eeb,Namespace:calico-system,Attempt:1,} returns sandbox id \"42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189\"" Apr 25 00:01:26.262238 containerd[1719]: time="2026-04-25T00:01:26.262032011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 25 00:01:26.493967 systemd-networkd[1354]: cali332dd8f895c: Gained IPv6LL Apr 25 00:01:26.750358 systemd-networkd[1354]: cali65bb6a73d1f: Gained IPv6LL Apr 25 00:01:26.813792 systemd-networkd[1354]: cali7637deaf6c6: Gained IPv6LL Apr 25 00:01:27.069645 systemd-networkd[1354]: calia340b6e8035: Gained IPv6LL Apr 25 00:01:27.558810 systemd-networkd[1354]: cali1e30e995580: Link UP Apr 25 00:01:27.559016 systemd-networkd[1354]: cali1e30e995580: Gained carrier Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.492 [INFO][5145] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0 coredns-66bc5c9577- kube-system 475a688c-c33d-44aa-8f1e-2084fa8ba675 997 0 2026-04-24 23:59:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-3087b9d021 coredns-66bc5c9577-xrrd8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1e30e995580 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" Namespace="kube-system" Pod="coredns-66bc5c9577-xrrd8" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-" Apr 25 00:01:27.576719 
containerd[1719]: 2026-04-25 00:01:27.492 [INFO][5145] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" Namespace="kube-system" Pod="coredns-66bc5c9577-xrrd8" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.516 [INFO][5157] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" HandleID="k8s-pod-network.60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.523 [INFO][5157] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" HandleID="k8s-pod-network.60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-3087b9d021", "pod":"coredns-66bc5c9577-xrrd8", "timestamp":"2026-04-25 00:01:27.516955241 +0000 UTC"}, Hostname:"ci-4081.3.6-n-3087b9d021", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188f20)} Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.523 [INFO][5157] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.523 [INFO][5157] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.523 [INFO][5157] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-3087b9d021' Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.526 [INFO][5157] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.529 [INFO][5157] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.533 [INFO][5157] ipam/ipam.go 526: Trying affinity for 192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.535 [INFO][5157] ipam/ipam.go 160: Attempting to load block cidr=192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.537 [INFO][5157] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.537 [INFO][5157] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.538 [INFO][5157] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392 Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.545 [INFO][5157] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.33.64/26 handle="k8s-pod-network.60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.550 [INFO][5157] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.33.69/26] block=192.168.33.64/26 handle="k8s-pod-network.60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.550 [INFO][5157] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.33.69/26] handle="k8s-pod-network.60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.551 [INFO][5157] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:27.576719 containerd[1719]: 2026-04-25 00:01:27.551 [INFO][5157] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.33.69/26] IPv6=[] ContainerID="60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" HandleID="k8s-pod-network.60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" Apr 25 00:01:27.579380 containerd[1719]: 2026-04-25 00:01:27.553 [INFO][5145] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" Namespace="kube-system" Pod="coredns-66bc5c9577-xrrd8" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"475a688c-c33d-44aa-8f1e-2084fa8ba675", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"", Pod:"coredns-66bc5c9577-xrrd8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e30e995580", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:27.579380 containerd[1719]: 2026-04-25 00:01:27.554 [INFO][5145] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.69/32] ContainerID="60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" Namespace="kube-system" Pod="coredns-66bc5c9577-xrrd8" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" Apr 25 00:01:27.579380 containerd[1719]: 2026-04-25 00:01:27.554 [INFO][5145] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e30e995580 
ContainerID="60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" Namespace="kube-system" Pod="coredns-66bc5c9577-xrrd8" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" Apr 25 00:01:27.579380 containerd[1719]: 2026-04-25 00:01:27.556 [INFO][5145] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" Namespace="kube-system" Pod="coredns-66bc5c9577-xrrd8" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" Apr 25 00:01:27.579380 containerd[1719]: 2026-04-25 00:01:27.556 [INFO][5145] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" Namespace="kube-system" Pod="coredns-66bc5c9577-xrrd8" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"475a688c-c33d-44aa-8f1e-2084fa8ba675", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392", 
Pod:"coredns-66bc5c9577-xrrd8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e30e995580", MAC:"72:75:55:42:cf:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:27.580454 containerd[1719]: 2026-04-25 00:01:27.572 [INFO][5145] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392" Namespace="kube-system" Pod="coredns-66bc5c9577-xrrd8" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" Apr 25 00:01:28.170722 systemd[1]: Started sshd@7-10.0.0.19:22-4.175.71.9:44540.service - OpenSSH per-connection server daemon (4.175.71.9:44540). Apr 25 00:01:28.262830 containerd[1719]: time="2026-04-25T00:01:28.259849489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:01:28.263345 containerd[1719]: time="2026-04-25T00:01:28.261114606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:01:28.263345 containerd[1719]: time="2026-04-25T00:01:28.261137406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:28.263345 containerd[1719]: time="2026-04-25T00:01:28.263100431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:28.311437 sshd[5192]: Accepted publickey for core from 4.175.71.9 port 44540 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 25 00:01:28.318261 sshd[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:01:28.330574 systemd[1]: Started cri-containerd-5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9.scope - libcontainer container 5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9. Apr 25 00:01:28.344108 systemd-logind[1702]: New session 10 of user core. Apr 25 00:01:28.347575 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 25 00:01:28.352024 containerd[1719]: time="2026-04-25T00:01:28.351344265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:01:28.355026 containerd[1719]: time="2026-04-25T00:01:28.354966912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:01:28.357758 containerd[1719]: time="2026-04-25T00:01:28.355034713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:28.357758 containerd[1719]: time="2026-04-25T00:01:28.355152814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:28.404459 containerd[1719]: time="2026-04-25T00:01:28.402204919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:01:28.404711 containerd[1719]: time="2026-04-25T00:01:28.404678651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:01:28.404831 containerd[1719]: time="2026-04-25T00:01:28.404809852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:28.405455 containerd[1719]: time="2026-04-25T00:01:28.405412260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:28.435741 systemd[1]: Started cri-containerd-2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9.scope - libcontainer container 2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9. Apr 25 00:01:28.461587 systemd[1]: Started cri-containerd-0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9.scope - libcontainer container 0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9. Apr 25 00:01:28.543069 containerd[1719]: time="2026-04-25T00:01:28.542785426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:01:28.543069 containerd[1719]: time="2026-04-25T00:01:28.542853327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:01:28.543069 containerd[1719]: time="2026-04-25T00:01:28.542874427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:28.543069 containerd[1719]: time="2026-04-25T00:01:28.542963128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:28.594774 systemd[1]: Started cri-containerd-60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392.scope - libcontainer container 60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392. Apr 25 00:01:28.653786 sshd[5192]: pam_unix(sshd:session): session closed for user core Apr 25 00:01:28.662171 systemd-logind[1702]: Session 10 logged out. Waiting for processes to exit. Apr 25 00:01:28.664220 systemd[1]: sshd@7-10.0.0.19:22-4.175.71.9:44540.service: Deactivated successfully. Apr 25 00:01:28.675494 systemd[1]: session-10.scope: Deactivated successfully. Apr 25 00:01:28.683255 systemd-logind[1702]: Removed session 10. 
Apr 25 00:01:28.691770 containerd[1719]: time="2026-04-25T00:01:28.691651539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c69f45f-gpm6f,Uid:6be2f8f5-c613-46db-be5c-7773acd8f189,Namespace:calico-system,Attempt:1,} returns sandbox id \"0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9\"" Apr 25 00:01:28.715941 containerd[1719]: time="2026-04-25T00:01:28.715895351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xrrd8,Uid:475a688c-c33d-44aa-8f1e-2084fa8ba675,Namespace:kube-system,Attempt:1,} returns sandbox id \"60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392\"" Apr 25 00:01:28.732944 containerd[1719]: time="2026-04-25T00:01:28.732900669Z" level=info msg="CreateContainer within sandbox \"60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 25 00:01:28.760383 containerd[1719]: time="2026-04-25T00:01:28.760298522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75c69f45f-dgnpd,Uid:6f9f367b-3d37-4263-8e44-7b68c95eb032,Namespace:calico-system,Attempt:1,} returns sandbox id \"5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9\"" Apr 25 00:01:28.786027 containerd[1719]: time="2026-04-25T00:01:28.785560946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-7ctl7,Uid:8bd4aa88-033e-41a0-bd2f-6042f3bf091b,Namespace:calico-system,Attempt:1,} returns sandbox id \"2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9\"" Apr 25 00:01:28.791623 systemd-networkd[1354]: cali97a7aa6860b: Link UP Apr 25 00:01:28.794036 systemd-networkd[1354]: cali97a7aa6860b: Gained carrier Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.639 [INFO][5320] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--3087b9d021-k8s-whisker--6d654f4d94--dm7nx-eth0 
whisker-6d654f4d94- calico-system e66b759e-33e8-402e-b855-e9ae47834fcb 1017 0 2026-04-25 00:01:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d654f4d94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-3087b9d021 whisker-6d654f4d94-dm7nx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali97a7aa6860b [] [] }} ContainerID="4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" Namespace="calico-system" Pod="whisker-6d654f4d94-dm7nx" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-whisker--6d654f4d94--dm7nx-" Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.640 [INFO][5320] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" Namespace="calico-system" Pod="whisker-6d654f4d94-dm7nx" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-whisker--6d654f4d94--dm7nx-eth0" Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.703 [INFO][5360] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" HandleID="k8s-pod-network.4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" Workload="ci--4081.3.6--n--3087b9d021-k8s-whisker--6d654f4d94--dm7nx-eth0" Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.714 [INFO][5360] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" HandleID="k8s-pod-network.4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" Workload="ci--4081.3.6--n--3087b9d021-k8s-whisker--6d654f4d94--dm7nx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002777f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-3087b9d021", 
"pod":"whisker-6d654f4d94-dm7nx", "timestamp":"2026-04-25 00:01:28.703196488 +0000 UTC"}, Hostname:"ci-4081.3.6-n-3087b9d021", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003acf20)} Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.714 [INFO][5360] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.715 [INFO][5360] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.715 [INFO][5360] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-3087b9d021' Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.718 [INFO][5360] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.725 [INFO][5360] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.736 [INFO][5360] ipam/ipam.go 526: Trying affinity for 192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.740 [INFO][5360] ipam/ipam.go 160: Attempting to load block cidr=192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.748 [INFO][5360] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.748 [INFO][5360] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.33.64/26 
handle="k8s-pod-network.4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.751 [INFO][5360] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887 Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.764 [INFO][5360] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.33.64/26 handle="k8s-pod-network.4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.778 [INFO][5360] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.33.70/26] block=192.168.33.64/26 handle="k8s-pod-network.4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.780 [INFO][5360] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.33.70/26] handle="k8s-pod-network.4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.780 [INFO][5360] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 25 00:01:28.818777 containerd[1719]: 2026-04-25 00:01:28.780 [INFO][5360] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.33.70/26] IPv6=[] ContainerID="4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" HandleID="k8s-pod-network.4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" Workload="ci--4081.3.6--n--3087b9d021-k8s-whisker--6d654f4d94--dm7nx-eth0" Apr 25 00:01:28.820229 containerd[1719]: 2026-04-25 00:01:28.785 [INFO][5320] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" Namespace="calico-system" Pod="whisker-6d654f4d94-dm7nx" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-whisker--6d654f4d94--dm7nx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-whisker--6d654f4d94--dm7nx-eth0", GenerateName:"whisker-6d654f4d94-", Namespace:"calico-system", SelfLink:"", UID:"e66b759e-33e8-402e-b855-e9ae47834fcb", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d654f4d94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"", Pod:"whisker-6d654f4d94-dm7nx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.33.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali97a7aa6860b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:28.820229 containerd[1719]: 2026-04-25 00:01:28.785 [INFO][5320] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.70/32] ContainerID="4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" Namespace="calico-system" Pod="whisker-6d654f4d94-dm7nx" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-whisker--6d654f4d94--dm7nx-eth0" Apr 25 00:01:28.820229 containerd[1719]: 2026-04-25 00:01:28.785 [INFO][5320] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97a7aa6860b ContainerID="4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" Namespace="calico-system" Pod="whisker-6d654f4d94-dm7nx" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-whisker--6d654f4d94--dm7nx-eth0" Apr 25 00:01:28.820229 containerd[1719]: 2026-04-25 00:01:28.792 [INFO][5320] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" Namespace="calico-system" Pod="whisker-6d654f4d94-dm7nx" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-whisker--6d654f4d94--dm7nx-eth0" Apr 25 00:01:28.820229 containerd[1719]: 2026-04-25 00:01:28.794 [INFO][5320] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" Namespace="calico-system" Pod="whisker-6d654f4d94-dm7nx" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-whisker--6d654f4d94--dm7nx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-whisker--6d654f4d94--dm7nx-eth0", GenerateName:"whisker-6d654f4d94-", Namespace:"calico-system", SelfLink:"", 
UID:"e66b759e-33e8-402e-b855-e9ae47834fcb", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d654f4d94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887", Pod:"whisker-6d654f4d94-dm7nx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.33.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali97a7aa6860b", MAC:"52:bd:e7:a6:5e:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:28.820229 containerd[1719]: 2026-04-25 00:01:28.814 [INFO][5320] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887" Namespace="calico-system" Pod="whisker-6d654f4d94-dm7nx" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-whisker--6d654f4d94--dm7nx-eth0" Apr 25 00:01:28.953823 containerd[1719]: time="2026-04-25T00:01:28.951013673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:01:28.953823 containerd[1719]: time="2026-04-25T00:01:28.951138774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:01:28.953823 containerd[1719]: time="2026-04-25T00:01:28.951176275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:28.953823 containerd[1719]: time="2026-04-25T00:01:28.951432878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:28.971963 systemd[1]: Started cri-containerd-4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887.scope - libcontainer container 4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887. Apr 25 00:01:29.014053 containerd[1719]: time="2026-04-25T00:01:29.014015883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d654f4d94-dm7nx,Uid:e66b759e-33e8-402e-b855-e9ae47834fcb,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887\"" Apr 25 00:01:29.117646 systemd-networkd[1354]: cali1e30e995580: Gained IPv6LL Apr 25 00:01:29.523162 containerd[1719]: time="2026-04-25T00:01:29.523113726Z" level=info msg="CreateContainer within sandbox \"60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a68b94a450765fca618eac4807ad73dd1fb701629ba625a7c24c857240c7a075\"" Apr 25 00:01:29.523909 containerd[1719]: time="2026-04-25T00:01:29.523795935Z" level=info msg="StartContainer for \"a68b94a450765fca618eac4807ad73dd1fb701629ba625a7c24c857240c7a075\"" Apr 25 00:01:29.590958 systemd[1]: run-containerd-runc-k8s.io-a68b94a450765fca618eac4807ad73dd1fb701629ba625a7c24c857240c7a075-runc.Ukp9IT.mount: Deactivated successfully. 
Apr 25 00:01:29.599847 systemd[1]: Started cri-containerd-a68b94a450765fca618eac4807ad73dd1fb701629ba625a7c24c857240c7a075.scope - libcontainer container a68b94a450765fca618eac4807ad73dd1fb701629ba625a7c24c857240c7a075. Apr 25 00:01:29.663977 containerd[1719]: time="2026-04-25T00:01:29.663922136Z" level=info msg="StartContainer for \"a68b94a450765fca618eac4807ad73dd1fb701629ba625a7c24c857240c7a075\" returns successfully" Apr 25 00:01:30.271223 systemd-networkd[1354]: cali97a7aa6860b: Gained IPv6LL Apr 25 00:01:30.374441 kubelet[3298]: I0425 00:01:30.374356 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xrrd8" podStartSLOduration=96.374333767 podStartE2EDuration="1m36.374333767s" podCreationTimestamp="2026-04-24 23:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:01:30.374111764 +0000 UTC m=+101.487701342" watchObservedRunningTime="2026-04-25 00:01:30.374333767 +0000 UTC m=+101.487923445" Apr 25 00:01:31.986280 containerd[1719]: time="2026-04-25T00:01:31.985216072Z" level=info msg="StopPodSandbox for \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\"" Apr 25 00:01:32.070273 containerd[1719]: 2026-04-25 00:01:32.034 [INFO][5541] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Apr 25 00:01:32.070273 containerd[1719]: 2026-04-25 00:01:32.035 [INFO][5541] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" iface="eth0" netns="/var/run/netns/cni-42ea99b3-bc0c-e70c-3152-d605d9732bec" Apr 25 00:01:32.070273 containerd[1719]: 2026-04-25 00:01:32.035 [INFO][5541] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" iface="eth0" netns="/var/run/netns/cni-42ea99b3-bc0c-e70c-3152-d605d9732bec" Apr 25 00:01:32.070273 containerd[1719]: 2026-04-25 00:01:32.035 [INFO][5541] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" iface="eth0" netns="/var/run/netns/cni-42ea99b3-bc0c-e70c-3152-d605d9732bec" Apr 25 00:01:32.070273 containerd[1719]: 2026-04-25 00:01:32.035 [INFO][5541] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Apr 25 00:01:32.070273 containerd[1719]: 2026-04-25 00:01:32.036 [INFO][5541] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Apr 25 00:01:32.070273 containerd[1719]: 2026-04-25 00:01:32.057 [INFO][5549] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" HandleID="k8s-pod-network.8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Workload="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" Apr 25 00:01:32.070273 containerd[1719]: 2026-04-25 00:01:32.057 [INFO][5549] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:32.070273 containerd[1719]: 2026-04-25 00:01:32.057 [INFO][5549] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:32.070273 containerd[1719]: 2026-04-25 00:01:32.066 [WARNING][5549] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" HandleID="k8s-pod-network.8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Workload="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" Apr 25 00:01:32.070273 containerd[1719]: 2026-04-25 00:01:32.066 [INFO][5549] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" HandleID="k8s-pod-network.8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Workload="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" Apr 25 00:01:32.070273 containerd[1719]: 2026-04-25 00:01:32.067 [INFO][5549] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:32.070273 containerd[1719]: 2026-04-25 00:01:32.068 [INFO][5541] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Apr 25 00:01:32.072904 containerd[1719]: time="2026-04-25T00:01:32.071698586Z" level=info msg="TearDown network for sandbox \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\" successfully" Apr 25 00:01:32.072904 containerd[1719]: time="2026-04-25T00:01:32.071786888Z" level=info msg="StopPodSandbox for \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\" returns successfully" Apr 25 00:01:32.075994 systemd[1]: run-netns-cni\x2d42ea99b3\x2dbc0c\x2de70c\x2d3152\x2dd605d9732bec.mount: Deactivated successfully. Apr 25 00:01:33.469176 containerd[1719]: time="2026-04-25T00:01:33.469128811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tz7wk,Uid:d30beaa3-b48e-4165-93c6-fa00a976739a,Namespace:calico-system,Attempt:1,}" Apr 25 00:01:33.683528 systemd[1]: Started sshd@8-10.0.0.19:22-4.175.71.9:44552.service - OpenSSH per-connection server daemon (4.175.71.9:44552). 
Apr 25 00:01:33.803730 sshd[5560]: Accepted publickey for core from 4.175.71.9 port 44552 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 25 00:01:33.806560 sshd[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:01:33.815981 systemd-logind[1702]: New session 11 of user core. Apr 25 00:01:33.818576 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 25 00:01:33.987436 containerd[1719]: time="2026-04-25T00:01:33.986799088Z" level=info msg="StopPodSandbox for \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\"" Apr 25 00:01:34.065323 sshd[5560]: pam_unix(sshd:session): session closed for user core Apr 25 00:01:34.072630 systemd[1]: sshd@8-10.0.0.19:22-4.175.71.9:44552.service: Deactivated successfully. Apr 25 00:01:34.078990 systemd[1]: session-11.scope: Deactivated successfully. Apr 25 00:01:34.080718 systemd-logind[1702]: Session 11 logged out. Waiting for processes to exit. Apr 25 00:01:34.084624 systemd-logind[1702]: Removed session 11. Apr 25 00:01:34.160004 containerd[1719]: 2026-04-25 00:01:34.086 [INFO][5594] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Apr 25 00:01:34.160004 containerd[1719]: 2026-04-25 00:01:34.086 [INFO][5594] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" iface="eth0" netns="/var/run/netns/cni-2b6e36e1-e248-f060-b19b-9922c18c3df1" Apr 25 00:01:34.160004 containerd[1719]: 2026-04-25 00:01:34.086 [INFO][5594] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" iface="eth0" netns="/var/run/netns/cni-2b6e36e1-e248-f060-b19b-9922c18c3df1" Apr 25 00:01:34.160004 containerd[1719]: 2026-04-25 00:01:34.086 [INFO][5594] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" iface="eth0" netns="/var/run/netns/cni-2b6e36e1-e248-f060-b19b-9922c18c3df1" Apr 25 00:01:34.160004 containerd[1719]: 2026-04-25 00:01:34.086 [INFO][5594] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Apr 25 00:01:34.160004 containerd[1719]: 2026-04-25 00:01:34.087 [INFO][5594] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Apr 25 00:01:34.160004 containerd[1719]: 2026-04-25 00:01:34.141 [INFO][5605] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" HandleID="k8s-pod-network.6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" Apr 25 00:01:34.160004 containerd[1719]: 2026-04-25 00:01:34.141 [INFO][5605] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:34.160004 containerd[1719]: 2026-04-25 00:01:34.141 [INFO][5605] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:34.160004 containerd[1719]: 2026-04-25 00:01:34.151 [WARNING][5605] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" HandleID="k8s-pod-network.6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" Apr 25 00:01:34.160004 containerd[1719]: 2026-04-25 00:01:34.151 [INFO][5605] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" HandleID="k8s-pod-network.6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" Apr 25 00:01:34.160004 containerd[1719]: 2026-04-25 00:01:34.153 [INFO][5605] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:34.160004 containerd[1719]: 2026-04-25 00:01:34.155 [INFO][5594] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Apr 25 00:01:34.164541 containerd[1719]: time="2026-04-25T00:01:34.162523455Z" level=info msg="TearDown network for sandbox \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\" successfully" Apr 25 00:01:34.164541 containerd[1719]: time="2026-04-25T00:01:34.162564656Z" level=info msg="StopPodSandbox for \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\" returns successfully" Apr 25 00:01:34.165927 systemd[1]: run-netns-cni\x2d2b6e36e1\x2de248\x2df060\x2db19b\x2d9922c18c3df1.mount: Deactivated successfully. 
Apr 25 00:01:34.173586 containerd[1719]: time="2026-04-25T00:01:34.173549897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x7pfz,Uid:7f66e989-db5b-4adc-97a4-8927e62e5b7f,Namespace:kube-system,Attempt:1,}" Apr 25 00:01:34.240051 systemd-networkd[1354]: calid77e43ac0d8: Link UP Apr 25 00:01:34.241106 systemd-networkd[1354]: calid77e43ac0d8: Gained carrier Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.109 [INFO][5575] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0 csi-node-driver- calico-system d30beaa3-b48e-4165-93c6-fa00a976739a 1115 0 2026-04-25 00:00:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-3087b9d021 csi-node-driver-tz7wk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid77e43ac0d8 [] [] }} ContainerID="395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" Namespace="calico-system" Pod="csi-node-driver-tz7wk" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-" Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.109 [INFO][5575] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" Namespace="calico-system" Pod="csi-node-driver-tz7wk" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.177 [INFO][5612] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" 
HandleID="k8s-pod-network.395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" Workload="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.187 [INFO][5612] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" HandleID="k8s-pod-network.395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" Workload="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000380030), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-3087b9d021", "pod":"csi-node-driver-tz7wk", "timestamp":"2026-04-25 00:01:34.177120143 +0000 UTC"}, Hostname:"ci-4081.3.6-n-3087b9d021", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000df080)} Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.187 [INFO][5612] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.187 [INFO][5612] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.187 [INFO][5612] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-3087b9d021' Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.190 [INFO][5612] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.196 [INFO][5612] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.203 [INFO][5612] ipam/ipam.go 526: Trying affinity for 192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.205 [INFO][5612] ipam/ipam.go 160: Attempting to load block cidr=192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.210 [INFO][5612] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.210 [INFO][5612] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.212 [INFO][5612] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0 Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.219 [INFO][5612] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.33.64/26 handle="k8s-pod-network.395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.231 [INFO][5612] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.33.71/26] block=192.168.33.64/26 handle="k8s-pod-network.395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.231 [INFO][5612] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.33.71/26] handle="k8s-pod-network.395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.231 [INFO][5612] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:34.275775 containerd[1719]: 2026-04-25 00:01:34.232 [INFO][5612] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.33.71/26] IPv6=[] ContainerID="395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" HandleID="k8s-pod-network.395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" Workload="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" Apr 25 00:01:34.277268 containerd[1719]: 2026-04-25 00:01:34.235 [INFO][5575] cni-plugin/k8s.go 418: Populated endpoint ContainerID="395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" Namespace="calico-system" Pod="csi-node-driver-tz7wk" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d30beaa3-b48e-4165-93c6-fa00a976739a", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"", Pod:"csi-node-driver-tz7wk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid77e43ac0d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:34.277268 containerd[1719]: 2026-04-25 00:01:34.235 [INFO][5575] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.71/32] ContainerID="395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" Namespace="calico-system" Pod="csi-node-driver-tz7wk" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" Apr 25 00:01:34.277268 containerd[1719]: 2026-04-25 00:01:34.236 [INFO][5575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid77e43ac0d8 ContainerID="395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" Namespace="calico-system" Pod="csi-node-driver-tz7wk" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" Apr 25 00:01:34.277268 containerd[1719]: 2026-04-25 00:01:34.239 [INFO][5575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" Namespace="calico-system" Pod="csi-node-driver-tz7wk" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" Apr 25 00:01:34.277268 
containerd[1719]: 2026-04-25 00:01:34.240 [INFO][5575] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" Namespace="calico-system" Pod="csi-node-driver-tz7wk" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d30beaa3-b48e-4165-93c6-fa00a976739a", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0", Pod:"csi-node-driver-tz7wk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid77e43ac0d8", MAC:"d2:39:ba:20:cd:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:34.277268 containerd[1719]: 2026-04-25 
00:01:34.267 [INFO][5575] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0" Namespace="calico-system" Pod="csi-node-driver-tz7wk" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" Apr 25 00:01:34.433887 containerd[1719]: time="2026-04-25T00:01:34.433753953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:01:34.434253 containerd[1719]: time="2026-04-25T00:01:34.433844055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:01:34.434253 containerd[1719]: time="2026-04-25T00:01:34.433868955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:34.434616 containerd[1719]: time="2026-04-25T00:01:34.434365961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:34.477566 systemd[1]: Started cri-containerd-395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0.scope - libcontainer container 395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0. 
Apr 25 00:01:34.517110 containerd[1719]: time="2026-04-25T00:01:34.516585522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tz7wk,Uid:d30beaa3-b48e-4165-93c6-fa00a976739a,Namespace:calico-system,Attempt:1,} returns sandbox id \"395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0\"" Apr 25 00:01:34.776149 systemd-networkd[1354]: cali1712536dbf4: Link UP Apr 25 00:01:34.780326 systemd-networkd[1354]: cali1712536dbf4: Gained carrier Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.670 [INFO][5685] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0 coredns-66bc5c9577- kube-system 7f66e989-db5b-4adc-97a4-8927e62e5b7f 1127 0 2026-04-24 23:59:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-3087b9d021 coredns-66bc5c9577-x7pfz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1712536dbf4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" Namespace="kube-system" Pod="coredns-66bc5c9577-x7pfz" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-" Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.670 [INFO][5685] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" Namespace="kube-system" Pod="coredns-66bc5c9577-x7pfz" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.709 [INFO][5697] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" HandleID="k8s-pod-network.d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.719 [INFO][5697] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" HandleID="k8s-pod-network.d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fb860), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-3087b9d021", "pod":"coredns-66bc5c9577-x7pfz", "timestamp":"2026-04-25 00:01:34.709024304 +0000 UTC"}, Hostname:"ci-4081.3.6-n-3087b9d021", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003bfa20)} Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.719 [INFO][5697] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.719 [INFO][5697] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.719 [INFO][5697] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-3087b9d021' Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.723 [INFO][5697] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.728 [INFO][5697] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.733 [INFO][5697] ipam/ipam.go 526: Trying affinity for 192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.735 [INFO][5697] ipam/ipam.go 160: Attempting to load block cidr=192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.738 [INFO][5697] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.738 [INFO][5697] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.740 [INFO][5697] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40 Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.747 [INFO][5697] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.33.64/26 handle="k8s-pod-network.d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.760 [INFO][5697] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.33.72/26] block=192.168.33.64/26 handle="k8s-pod-network.d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.761 [INFO][5697] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.33.72/26] handle="k8s-pod-network.d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" host="ci-4081.3.6-n-3087b9d021" Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.761 [INFO][5697] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:34.809340 containerd[1719]: 2026-04-25 00:01:34.761 [INFO][5697] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.33.72/26] IPv6=[] ContainerID="d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" HandleID="k8s-pod-network.d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" Apr 25 00:01:34.810233 containerd[1719]: 2026-04-25 00:01:34.767 [INFO][5685] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" Namespace="kube-system" Pod="coredns-66bc5c9577-x7pfz" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7f66e989-db5b-4adc-97a4-8927e62e5b7f", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"", Pod:"coredns-66bc5c9577-x7pfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1712536dbf4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:34.810233 containerd[1719]: 2026-04-25 00:01:34.767 [INFO][5685] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.72/32] ContainerID="d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" Namespace="kube-system" Pod="coredns-66bc5c9577-x7pfz" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" Apr 25 00:01:34.810233 containerd[1719]: 2026-04-25 00:01:34.767 [INFO][5685] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1712536dbf4 
ContainerID="d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" Namespace="kube-system" Pod="coredns-66bc5c9577-x7pfz" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" Apr 25 00:01:34.810233 containerd[1719]: 2026-04-25 00:01:34.780 [INFO][5685] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" Namespace="kube-system" Pod="coredns-66bc5c9577-x7pfz" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" Apr 25 00:01:34.810233 containerd[1719]: 2026-04-25 00:01:34.782 [INFO][5685] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" Namespace="kube-system" Pod="coredns-66bc5c9577-x7pfz" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7f66e989-db5b-4adc-97a4-8927e62e5b7f", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40", 
Pod:"coredns-66bc5c9577-x7pfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1712536dbf4", MAC:"be:de:58:14:e2:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:34.810587 containerd[1719]: 2026-04-25 00:01:34.802 [INFO][5685] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40" Namespace="kube-system" Pod="coredns-66bc5c9577-x7pfz" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" Apr 25 00:01:34.863442 containerd[1719]: time="2026-04-25T00:01:34.862488084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:01:34.864154 containerd[1719]: time="2026-04-25T00:01:34.864108404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:01:34.864330 containerd[1719]: time="2026-04-25T00:01:34.864301007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:34.868127 containerd[1719]: time="2026-04-25T00:01:34.866852240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:34.899003 systemd[1]: Started cri-containerd-d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40.scope - libcontainer container d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40. Apr 25 00:01:35.008507 containerd[1719]: time="2026-04-25T00:01:35.008387765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x7pfz,Uid:7f66e989-db5b-4adc-97a4-8927e62e5b7f,Namespace:kube-system,Attempt:1,} returns sandbox id \"d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40\"" Apr 25 00:01:35.018745 containerd[1719]: time="2026-04-25T00:01:35.018615997Z" level=info msg="CreateContainer within sandbox \"d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 25 00:01:35.372238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount221890206.mount: Deactivated successfully. 
Apr 25 00:01:35.567860 containerd[1719]: time="2026-04-25T00:01:35.567737680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:35.619303 containerd[1719]: time="2026-04-25T00:01:35.619255045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 25 00:01:35.663902 containerd[1719]: time="2026-04-25T00:01:35.663788319Z" level=info msg="CreateContainer within sandbox \"d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a63a51c56f23de7b3e5a7954a873708625b33b0f5d019c05fff375bd6f49ed7\"" Apr 25 00:01:35.664853 containerd[1719]: time="2026-04-25T00:01:35.664513028Z" level=info msg="StartContainer for \"9a63a51c56f23de7b3e5a7954a873708625b33b0f5d019c05fff375bd6f49ed7\"" Apr 25 00:01:35.702773 systemd[1]: Started cri-containerd-9a63a51c56f23de7b3e5a7954a873708625b33b0f5d019c05fff375bd6f49ed7.scope - libcontainer container 9a63a51c56f23de7b3e5a7954a873708625b33b0f5d019c05fff375bd6f49ed7. 
Apr 25 00:01:35.708759 containerd[1719]: time="2026-04-25T00:01:35.708558197Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:35.714555 containerd[1719]: time="2026-04-25T00:01:35.714517973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:35.715365 containerd[1719]: time="2026-04-25T00:01:35.715331784Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 9.453259372s" Apr 25 00:01:35.715561 containerd[1719]: time="2026-04-25T00:01:35.715478486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 25 00:01:35.717927 containerd[1719]: time="2026-04-25T00:01:35.717899417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 25 00:01:35.822476 containerd[1719]: time="2026-04-25T00:01:35.822325164Z" level=info msg="StartContainer for \"9a63a51c56f23de7b3e5a7954a873708625b33b0f5d019c05fff375bd6f49ed7\" returns successfully" Apr 25 00:01:35.823396 containerd[1719]: time="2026-04-25T00:01:35.823350277Z" level=info msg="CreateContainer within sandbox \"42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 25 00:01:35.965628 systemd-networkd[1354]: calid77e43ac0d8: Gained IPv6LL Apr 25 
00:01:36.222158 containerd[1719]: time="2026-04-25T00:01:36.221678615Z" level=info msg="CreateContainer within sandbox \"42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"000be70c3a4d23b6a6b1032ecf1906d3122078ac60f69a117da6f7f654a74db3\"" Apr 25 00:01:36.222706 containerd[1719]: time="2026-04-25T00:01:36.222643228Z" level=info msg="StartContainer for \"000be70c3a4d23b6a6b1032ecf1906d3122078ac60f69a117da6f7f654a74db3\"" Apr 25 00:01:36.269582 systemd[1]: Started cri-containerd-000be70c3a4d23b6a6b1032ecf1906d3122078ac60f69a117da6f7f654a74db3.scope - libcontainer container 000be70c3a4d23b6a6b1032ecf1906d3122078ac60f69a117da6f7f654a74db3. Apr 25 00:01:36.324204 containerd[1719]: time="2026-04-25T00:01:36.324155237Z" level=info msg="StartContainer for \"000be70c3a4d23b6a6b1032ecf1906d3122078ac60f69a117da6f7f654a74db3\" returns successfully" Apr 25 00:01:36.414821 systemd-networkd[1354]: cali1712536dbf4: Gained IPv6LL Apr 25 00:01:36.446846 kubelet[3298]: I0425 00:01:36.445120 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bd5cc46d5-99qj9" podStartSLOduration=80.990040401 podStartE2EDuration="1m30.445099297s" podCreationTimestamp="2026-04-25 00:00:06 +0000 UTC" firstStartedPulling="2026-04-25 00:01:26.261491804 +0000 UTC m=+97.375081382" lastFinishedPulling="2026-04-25 00:01:35.7165507 +0000 UTC m=+106.830140278" observedRunningTime="2026-04-25 00:01:36.409841542 +0000 UTC m=+107.523431220" watchObservedRunningTime="2026-04-25 00:01:36.445099297 +0000 UTC m=+107.558688875" Apr 25 00:01:37.424607 systemd[1]: run-containerd-runc-k8s.io-000be70c3a4d23b6a6b1032ecf1906d3122078ac60f69a117da6f7f654a74db3-runc.gtfF0V.mount: Deactivated successfully. 
Apr 25 00:01:37.467336 kubelet[3298]: I0425 00:01:37.465821 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x7pfz" podStartSLOduration=103.465798662 podStartE2EDuration="1m43.465798662s" podCreationTimestamp="2026-04-24 23:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:01:36.448110936 +0000 UTC m=+107.561700614" watchObservedRunningTime="2026-04-25 00:01:37.465798662 +0000 UTC m=+108.579388240" Apr 25 00:01:39.097993 systemd[1]: Started sshd@9-10.0.0.19:22-4.175.71.9:47152.service - OpenSSH per-connection server daemon (4.175.71.9:47152). Apr 25 00:01:39.209445 sshd[5917]: Accepted publickey for core from 4.175.71.9 port 47152 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 25 00:01:39.210601 sshd[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:01:39.215544 systemd-logind[1702]: New session 12 of user core. Apr 25 00:01:39.220829 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 25 00:01:39.369354 sshd[5917]: pam_unix(sshd:session): session closed for user core Apr 25 00:01:39.372756 systemd[1]: sshd@9-10.0.0.19:22-4.175.71.9:47152.service: Deactivated successfully. Apr 25 00:01:39.375247 systemd[1]: session-12.scope: Deactivated successfully. Apr 25 00:01:39.377179 systemd-logind[1702]: Session 12 logged out. Waiting for processes to exit. Apr 25 00:01:39.378486 systemd-logind[1702]: Removed session 12. 
Apr 25 00:01:42.269639 containerd[1719]: time="2026-04-25T00:01:42.269573966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:42.317965 containerd[1719]: time="2026-04-25T00:01:42.317668787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 25 00:01:42.322142 containerd[1719]: time="2026-04-25T00:01:42.321833641Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:42.365020 containerd[1719]: time="2026-04-25T00:01:42.364847697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:42.366448 containerd[1719]: time="2026-04-25T00:01:42.366033712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 6.648091995s" Apr 25 00:01:42.366448 containerd[1719]: time="2026-04-25T00:01:42.366076912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 25 00:01:42.367761 containerd[1719]: time="2026-04-25T00:01:42.367475730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 25 00:01:42.414584 containerd[1719]: time="2026-04-25T00:01:42.414548838Z" level=info msg="CreateContainer within sandbox 
\"0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 25 00:01:42.713143 containerd[1719]: time="2026-04-25T00:01:42.713090695Z" level=info msg="CreateContainer within sandbox \"0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"22649a8e1a0700d9945e7740ac1b8d253e47df39c1c84938f16104eb19298959\"" Apr 25 00:01:42.713943 containerd[1719]: time="2026-04-25T00:01:42.713829404Z" level=info msg="StartContainer for \"22649a8e1a0700d9945e7740ac1b8d253e47df39c1c84938f16104eb19298959\"" Apr 25 00:01:42.760594 systemd[1]: Started cri-containerd-22649a8e1a0700d9945e7740ac1b8d253e47df39c1c84938f16104eb19298959.scope - libcontainer container 22649a8e1a0700d9945e7740ac1b8d253e47df39c1c84938f16104eb19298959. Apr 25 00:01:42.870270 containerd[1719]: time="2026-04-25T00:01:42.870218524Z" level=info msg="StartContainer for \"22649a8e1a0700d9945e7740ac1b8d253e47df39c1c84938f16104eb19298959\" returns successfully" Apr 25 00:01:43.647429 kubelet[3298]: I0425 00:01:43.646316 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-75c69f45f-gpm6f" podStartSLOduration=84.974655712 podStartE2EDuration="1m38.646294949s" podCreationTimestamp="2026-04-25 00:00:05 +0000 UTC" firstStartedPulling="2026-04-25 00:01:28.69558979 +0000 UTC m=+99.809179368" lastFinishedPulling="2026-04-25 00:01:42.367229027 +0000 UTC m=+113.480818605" observedRunningTime="2026-04-25 00:01:43.432180483 +0000 UTC m=+114.545770061" watchObservedRunningTime="2026-04-25 00:01:43.646294949 +0000 UTC m=+114.759884627" Apr 25 00:01:44.400721 systemd[1]: Started sshd@10-10.0.0.19:22-4.175.71.9:47160.service - OpenSSH per-connection server daemon (4.175.71.9:47160). 
Apr 25 00:01:44.510806 sshd[5985]: Accepted publickey for core from 4.175.71.9 port 47160 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 25 00:01:44.512259 sshd[5985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:01:44.516576 systemd-logind[1702]: New session 13 of user core. Apr 25 00:01:44.524853 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 25 00:01:44.674704 sshd[5985]: pam_unix(sshd:session): session closed for user core Apr 25 00:01:44.678119 systemd[1]: sshd@10-10.0.0.19:22-4.175.71.9:47160.service: Deactivated successfully. Apr 25 00:01:44.680496 systemd[1]: session-13.scope: Deactivated successfully. Apr 25 00:01:44.682526 systemd-logind[1702]: Session 13 logged out. Waiting for processes to exit. Apr 25 00:01:44.683721 systemd-logind[1702]: Removed session 13. Apr 25 00:01:45.313840 containerd[1719]: time="2026-04-25T00:01:45.313264581Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:45.360638 containerd[1719]: time="2026-04-25T00:01:45.360393090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 25 00:01:45.363024 containerd[1719]: time="2026-04-25T00:01:45.362989524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.995477892s" Apr 25 00:01:45.363189 containerd[1719]: time="2026-04-25T00:01:45.363026324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 25 
00:01:45.364235 containerd[1719]: time="2026-04-25T00:01:45.364064837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 25 00:01:45.424630 containerd[1719]: time="2026-04-25T00:01:45.424592919Z" level=info msg="CreateContainer within sandbox \"5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 25 00:01:45.771832 containerd[1719]: time="2026-04-25T00:01:45.771677203Z" level=info msg="CreateContainer within sandbox \"5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5bb3ee629337fd76e69a831b1fa15e1fe010970d80ec9a008aa3f1b090f867b1\"" Apr 25 00:01:45.774762 containerd[1719]: time="2026-04-25T00:01:45.773481726Z" level=info msg="StartContainer for \"5bb3ee629337fd76e69a831b1fa15e1fe010970d80ec9a008aa3f1b090f867b1\"" Apr 25 00:01:45.829623 systemd[1]: Started cri-containerd-5bb3ee629337fd76e69a831b1fa15e1fe010970d80ec9a008aa3f1b090f867b1.scope - libcontainer container 5bb3ee629337fd76e69a831b1fa15e1fe010970d80ec9a008aa3f1b090f867b1. 
Apr 25 00:01:45.879038 containerd[1719]: time="2026-04-25T00:01:45.878973489Z" level=info msg="StartContainer for \"5bb3ee629337fd76e69a831b1fa15e1fe010970d80ec9a008aa3f1b090f867b1\" returns successfully" Apr 25 00:01:46.837317 kubelet[3298]: I0425 00:01:46.837235 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-75c69f45f-dgnpd" podStartSLOduration=85.236344387 podStartE2EDuration="1m41.837218466s" podCreationTimestamp="2026-04-25 00:00:05 +0000 UTC" firstStartedPulling="2026-04-25 00:01:28.763007856 +0000 UTC m=+99.876597534" lastFinishedPulling="2026-04-25 00:01:45.363881935 +0000 UTC m=+116.477471613" observedRunningTime="2026-04-25 00:01:46.441739158 +0000 UTC m=+117.555328836" watchObservedRunningTime="2026-04-25 00:01:46.837218466 +0000 UTC m=+117.950808044" Apr 25 00:01:49.005282 containerd[1719]: time="2026-04-25T00:01:49.005234884Z" level=info msg="StopPodSandbox for \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\"" Apr 25 00:01:49.072512 containerd[1719]: 2026-04-25 00:01:49.037 [WARNING][6072] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7f66e989-db5b-4adc-97a4-8927e62e5b7f", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40", Pod:"coredns-66bc5c9577-x7pfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1712536dbf4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:49.072512 containerd[1719]: 2026-04-25 00:01:49.037 [INFO][6072] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Apr 25 00:01:49.072512 containerd[1719]: 2026-04-25 00:01:49.037 [INFO][6072] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" iface="eth0" netns="" Apr 25 00:01:49.072512 containerd[1719]: 2026-04-25 00:01:49.037 [INFO][6072] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Apr 25 00:01:49.072512 containerd[1719]: 2026-04-25 00:01:49.037 [INFO][6072] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Apr 25 00:01:49.072512 containerd[1719]: 2026-04-25 00:01:49.061 [INFO][6080] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" HandleID="k8s-pod-network.6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" Apr 25 00:01:49.072512 containerd[1719]: 2026-04-25 00:01:49.061 [INFO][6080] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:49.072512 containerd[1719]: 2026-04-25 00:01:49.061 [INFO][6080] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:49.072512 containerd[1719]: 2026-04-25 00:01:49.067 [WARNING][6080] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" HandleID="k8s-pod-network.6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" Apr 25 00:01:49.072512 containerd[1719]: 2026-04-25 00:01:49.067 [INFO][6080] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" HandleID="k8s-pod-network.6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" Apr 25 00:01:49.072512 containerd[1719]: 2026-04-25 00:01:49.070 [INFO][6080] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:49.072512 containerd[1719]: 2026-04-25 00:01:49.071 [INFO][6072] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Apr 25 00:01:49.073358 containerd[1719]: time="2026-04-25T00:01:49.072559261Z" level=info msg="TearDown network for sandbox \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\" successfully" Apr 25 00:01:49.073358 containerd[1719]: time="2026-04-25T00:01:49.072592762Z" level=info msg="StopPodSandbox for \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\" returns successfully" Apr 25 00:01:49.074015 containerd[1719]: time="2026-04-25T00:01:49.073800878Z" level=info msg="RemovePodSandbox for \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\"" Apr 25 00:01:49.074015 containerd[1719]: time="2026-04-25T00:01:49.073844678Z" level=info msg="Forcibly stopping sandbox \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\"" Apr 25 00:01:49.143702 containerd[1719]: 2026-04-25 00:01:49.112 [WARNING][6094] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7f66e989-db5b-4adc-97a4-8927e62e5b7f", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"d38cf816f312798c8c1b9a854e44e1001159ed401ad8675aeceb4be21820fc40", Pod:"coredns-66bc5c9577-x7pfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1712536dbf4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:49.143702 containerd[1719]: 2026-04-25 00:01:49.112 [INFO][6094] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Apr 25 00:01:49.143702 containerd[1719]: 2026-04-25 00:01:49.112 [INFO][6094] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" iface="eth0" netns="" Apr 25 00:01:49.143702 containerd[1719]: 2026-04-25 00:01:49.112 [INFO][6094] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Apr 25 00:01:49.143702 containerd[1719]: 2026-04-25 00:01:49.112 [INFO][6094] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Apr 25 00:01:49.143702 containerd[1719]: 2026-04-25 00:01:49.134 [INFO][6101] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" HandleID="k8s-pod-network.6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" Apr 25 00:01:49.143702 containerd[1719]: 2026-04-25 00:01:49.134 [INFO][6101] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:49.143702 containerd[1719]: 2026-04-25 00:01:49.134 [INFO][6101] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:49.143702 containerd[1719]: 2026-04-25 00:01:49.140 [WARNING][6101] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" HandleID="k8s-pod-network.6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" Apr 25 00:01:49.143702 containerd[1719]: 2026-04-25 00:01:49.140 [INFO][6101] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" HandleID="k8s-pod-network.6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--x7pfz-eth0" Apr 25 00:01:49.143702 containerd[1719]: 2026-04-25 00:01:49.141 [INFO][6101] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:49.143702 containerd[1719]: 2026-04-25 00:01:49.142 [INFO][6094] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96" Apr 25 00:01:49.144658 containerd[1719]: time="2026-04-25T00:01:49.143730789Z" level=info msg="TearDown network for sandbox \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\" successfully" Apr 25 00:01:49.700930 systemd[1]: Started sshd@11-10.0.0.19:22-4.175.71.9:57928.service - OpenSSH per-connection server daemon (4.175.71.9:57928). Apr 25 00:01:49.821099 sshd[6108]: Accepted publickey for core from 4.175.71.9 port 57928 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 25 00:01:49.822550 sshd[6108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:01:49.826787 systemd-logind[1702]: New session 14 of user core. Apr 25 00:01:49.834582 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 25 00:01:49.991026 sshd[6108]: pam_unix(sshd:session): session closed for user core Apr 25 00:01:49.995070 systemd[1]: sshd@11-10.0.0.19:22-4.175.71.9:57928.service: Deactivated successfully. 
Apr 25 00:01:49.997566 systemd[1]: session-14.scope: Deactivated successfully. Apr 25 00:01:49.998874 systemd-logind[1702]: Session 14 logged out. Waiting for processes to exit. Apr 25 00:01:50.000194 systemd-logind[1702]: Removed session 14. Apr 25 00:01:50.843958 containerd[1719]: time="2026-04-25T00:01:50.843541542Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 25 00:01:50.843958 containerd[1719]: time="2026-04-25T00:01:50.843633243Z" level=info msg="RemovePodSandbox \"6f141586b7847bcf9d9a0030aa330daa3eb2f92cdc316f15f9ae7f67c1897d96\" returns successfully" Apr 25 00:01:50.847040 containerd[1719]: time="2026-04-25T00:01:50.846477080Z" level=info msg="StopPodSandbox for \"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\"" Apr 25 00:01:50.956768 containerd[1719]: 2026-04-25 00:01:50.903 [WARNING][6133] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0", GenerateName:"calico-apiserver-75c69f45f-", Namespace:"calico-system", SelfLink:"", UID:"6f9f367b-3d37-4263-8e44-7b68c95eb032", ResourceVersion:"1214", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75c69f45f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9", Pod:"calico-apiserver-75c69f45f-dgnpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali65bb6a73d1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:50.956768 containerd[1719]: 2026-04-25 00:01:50.903 [INFO][6133] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Apr 25 00:01:50.956768 containerd[1719]: 2026-04-25 00:01:50.903 [INFO][6133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" iface="eth0" netns="" Apr 25 00:01:50.956768 containerd[1719]: 2026-04-25 00:01:50.903 [INFO][6133] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Apr 25 00:01:50.956768 containerd[1719]: 2026-04-25 00:01:50.903 [INFO][6133] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Apr 25 00:01:50.956768 containerd[1719]: 2026-04-25 00:01:50.942 [INFO][6140] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" HandleID="k8s-pod-network.72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" Apr 25 00:01:50.956768 containerd[1719]: 2026-04-25 00:01:50.942 [INFO][6140] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:50.956768 containerd[1719]: 2026-04-25 00:01:50.943 [INFO][6140] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:50.956768 containerd[1719]: 2026-04-25 00:01:50.951 [WARNING][6140] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" HandleID="k8s-pod-network.72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" Apr 25 00:01:50.956768 containerd[1719]: 2026-04-25 00:01:50.951 [INFO][6140] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" HandleID="k8s-pod-network.72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" Apr 25 00:01:50.956768 containerd[1719]: 2026-04-25 00:01:50.953 [INFO][6140] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:50.956768 containerd[1719]: 2026-04-25 00:01:50.955 [INFO][6133] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Apr 25 00:01:50.957372 containerd[1719]: time="2026-04-25T00:01:50.956809518Z" level=info msg="TearDown network for sandbox \"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\" successfully" Apr 25 00:01:50.957372 containerd[1719]: time="2026-04-25T00:01:50.956839718Z" level=info msg="StopPodSandbox for \"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\" returns successfully" Apr 25 00:01:50.957372 containerd[1719]: time="2026-04-25T00:01:50.957332925Z" level=info msg="RemovePodSandbox for \"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\"" Apr 25 00:01:50.957372 containerd[1719]: time="2026-04-25T00:01:50.957365025Z" level=info msg="Forcibly stopping sandbox \"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\"" Apr 25 00:01:51.068898 containerd[1719]: 2026-04-25 00:01:51.018 [WARNING][6156] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0", GenerateName:"calico-apiserver-75c69f45f-", Namespace:"calico-system", SelfLink:"", UID:"6f9f367b-3d37-4263-8e44-7b68c95eb032", ResourceVersion:"1214", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75c69f45f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"5197a861a65906a86785789a8f5e55f3ad26b372b827b31d7a8b357035f250e9", Pod:"calico-apiserver-75c69f45f-dgnpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali65bb6a73d1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:51.068898 containerd[1719]: 2026-04-25 00:01:51.018 [INFO][6156] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Apr 25 00:01:51.068898 containerd[1719]: 2026-04-25 00:01:51.018 [INFO][6156] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" iface="eth0" netns="" Apr 25 00:01:51.068898 containerd[1719]: 2026-04-25 00:01:51.018 [INFO][6156] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Apr 25 00:01:51.068898 containerd[1719]: 2026-04-25 00:01:51.018 [INFO][6156] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Apr 25 00:01:51.068898 containerd[1719]: 2026-04-25 00:01:51.050 [INFO][6164] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" HandleID="k8s-pod-network.72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" Apr 25 00:01:51.068898 containerd[1719]: 2026-04-25 00:01:51.051 [INFO][6164] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:51.068898 containerd[1719]: 2026-04-25 00:01:51.051 [INFO][6164] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:51.068898 containerd[1719]: 2026-04-25 00:01:51.059 [WARNING][6164] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" HandleID="k8s-pod-network.72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" Apr 25 00:01:51.068898 containerd[1719]: 2026-04-25 00:01:51.059 [INFO][6164] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" HandleID="k8s-pod-network.72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--dgnpd-eth0" Apr 25 00:01:51.068898 containerd[1719]: 2026-04-25 00:01:51.061 [INFO][6164] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:51.068898 containerd[1719]: 2026-04-25 00:01:51.065 [INFO][6156] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64" Apr 25 00:01:51.068898 containerd[1719]: time="2026-04-25T00:01:51.067781064Z" level=info msg="TearDown network for sandbox \"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\" successfully" Apr 25 00:01:51.080423 containerd[1719]: time="2026-04-25T00:01:51.080023324Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:01:51.080423 containerd[1719]: time="2026-04-25T00:01:51.080118225Z" level=info msg="RemovePodSandbox \"72f9793d50ed3b49a6874f81ac4b630d3945f3160ce4a90be45ae73a1ddb5b64\" returns successfully" Apr 25 00:01:51.081436 containerd[1719]: time="2026-04-25T00:01:51.081160639Z" level=info msg="StopPodSandbox for \"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\"" Apr 25 00:01:51.195478 containerd[1719]: 2026-04-25 00:01:51.137 [WARNING][6179] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0", GenerateName:"calico-apiserver-75c69f45f-", Namespace:"calico-system", SelfLink:"", UID:"6be2f8f5-c613-46db-be5c-7773acd8f189", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75c69f45f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9", Pod:"calico-apiserver-75c69f45f-gpm6f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7637deaf6c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:51.195478 containerd[1719]: 2026-04-25 00:01:51.137 [INFO][6179] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Apr 25 00:01:51.195478 containerd[1719]: 2026-04-25 00:01:51.137 [INFO][6179] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" iface="eth0" netns="" Apr 25 00:01:51.195478 containerd[1719]: 2026-04-25 00:01:51.137 [INFO][6179] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Apr 25 00:01:51.195478 containerd[1719]: 2026-04-25 00:01:51.137 [INFO][6179] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Apr 25 00:01:51.195478 containerd[1719]: 2026-04-25 00:01:51.175 [INFO][6187] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" HandleID="k8s-pod-network.11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" Apr 25 00:01:51.195478 containerd[1719]: 2026-04-25 00:01:51.175 [INFO][6187] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:51.195478 containerd[1719]: 2026-04-25 00:01:51.175 [INFO][6187] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:51.195478 containerd[1719]: 2026-04-25 00:01:51.183 [WARNING][6187] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" HandleID="k8s-pod-network.11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" Apr 25 00:01:51.195478 containerd[1719]: 2026-04-25 00:01:51.183 [INFO][6187] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" HandleID="k8s-pod-network.11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" Apr 25 00:01:51.195478 containerd[1719]: 2026-04-25 00:01:51.186 [INFO][6187] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:51.195478 containerd[1719]: 2026-04-25 00:01:51.190 [INFO][6179] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Apr 25 00:01:51.196322 containerd[1719]: time="2026-04-25T00:01:51.196283639Z" level=info msg="TearDown network for sandbox \"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\" successfully" Apr 25 00:01:51.198215 containerd[1719]: time="2026-04-25T00:01:51.196452541Z" level=info msg="StopPodSandbox for \"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\" returns successfully" Apr 25 00:01:51.198215 containerd[1719]: time="2026-04-25T00:01:51.198064662Z" level=info msg="RemovePodSandbox for \"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\"" Apr 25 00:01:51.198215 containerd[1719]: time="2026-04-25T00:01:51.198101463Z" level=info msg="Forcibly stopping sandbox \"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\"" Apr 25 00:01:51.302505 containerd[1719]: 2026-04-25 00:01:51.251 [WARNING][6202] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0", GenerateName:"calico-apiserver-75c69f45f-", Namespace:"calico-system", SelfLink:"", UID:"6be2f8f5-c613-46db-be5c-7773acd8f189", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75c69f45f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"0cfb63d059306866248fecac4fd09c3558c3ba5a2f8fbbbafcd94908daba2ea9", Pod:"calico-apiserver-75c69f45f-gpm6f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7637deaf6c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:51.302505 containerd[1719]: 2026-04-25 00:01:51.251 [INFO][6202] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Apr 25 00:01:51.302505 containerd[1719]: 2026-04-25 00:01:51.251 [INFO][6202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" iface="eth0" netns="" Apr 25 00:01:51.302505 containerd[1719]: 2026-04-25 00:01:51.251 [INFO][6202] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Apr 25 00:01:51.302505 containerd[1719]: 2026-04-25 00:01:51.251 [INFO][6202] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Apr 25 00:01:51.302505 containerd[1719]: 2026-04-25 00:01:51.287 [INFO][6209] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" HandleID="k8s-pod-network.11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" Apr 25 00:01:51.302505 containerd[1719]: 2026-04-25 00:01:51.287 [INFO][6209] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:51.302505 containerd[1719]: 2026-04-25 00:01:51.287 [INFO][6209] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:51.302505 containerd[1719]: 2026-04-25 00:01:51.295 [WARNING][6209] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" HandleID="k8s-pod-network.11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" Apr 25 00:01:51.302505 containerd[1719]: 2026-04-25 00:01:51.295 [INFO][6209] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" HandleID="k8s-pod-network.11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--apiserver--75c69f45f--gpm6f-eth0" Apr 25 00:01:51.302505 containerd[1719]: 2026-04-25 00:01:51.298 [INFO][6209] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:51.302505 containerd[1719]: 2026-04-25 00:01:51.300 [INFO][6202] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b" Apr 25 00:01:51.303143 containerd[1719]: time="2026-04-25T00:01:51.302562624Z" level=info msg="TearDown network for sandbox \"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\" successfully" Apr 25 00:01:51.311940 containerd[1719]: time="2026-04-25T00:01:51.311897746Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:01:51.312093 containerd[1719]: time="2026-04-25T00:01:51.311976547Z" level=info msg="RemovePodSandbox \"11fd6a2613fa6671548c2552d3593f9685beb4da1b78a038dacebeeed3b6c27b\" returns successfully" Apr 25 00:01:51.312582 containerd[1719]: time="2026-04-25T00:01:51.312551954Z" level=info msg="StopPodSandbox for \"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\"" Apr 25 00:01:51.419792 containerd[1719]: 2026-04-25 00:01:51.362 [WARNING][6223] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"475a688c-c33d-44aa-8f1e-2084fa8ba675", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392", Pod:"coredns-66bc5c9577-xrrd8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e30e995580", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:51.419792 containerd[1719]: 2026-04-25 00:01:51.362 [INFO][6223] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Apr 25 00:01:51.419792 containerd[1719]: 2026-04-25 00:01:51.362 [INFO][6223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" iface="eth0" netns="" Apr 25 00:01:51.419792 containerd[1719]: 2026-04-25 00:01:51.362 [INFO][6223] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Apr 25 00:01:51.419792 containerd[1719]: 2026-04-25 00:01:51.362 [INFO][6223] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Apr 25 00:01:51.419792 containerd[1719]: 2026-04-25 00:01:51.393 [INFO][6230] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" HandleID="k8s-pod-network.60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" Apr 25 00:01:51.419792 containerd[1719]: 2026-04-25 00:01:51.394 [INFO][6230] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:51.419792 containerd[1719]: 2026-04-25 00:01:51.394 [INFO][6230] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:51.419792 containerd[1719]: 2026-04-25 00:01:51.405 [WARNING][6230] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" HandleID="k8s-pod-network.60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" Apr 25 00:01:51.419792 containerd[1719]: 2026-04-25 00:01:51.405 [INFO][6230] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" HandleID="k8s-pod-network.60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" Apr 25 00:01:51.419792 containerd[1719]: 2026-04-25 00:01:51.414 [INFO][6230] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:51.419792 containerd[1719]: 2026-04-25 00:01:51.416 [INFO][6223] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Apr 25 00:01:51.419792 containerd[1719]: time="2026-04-25T00:01:51.419531449Z" level=info msg="TearDown network for sandbox \"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\" successfully" Apr 25 00:01:51.419792 containerd[1719]: time="2026-04-25T00:01:51.419562649Z" level=info msg="StopPodSandbox for \"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\" returns successfully" Apr 25 00:01:51.421464 containerd[1719]: time="2026-04-25T00:01:51.421232371Z" level=info msg="RemovePodSandbox for \"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\"" Apr 25 00:01:51.421464 containerd[1719]: time="2026-04-25T00:01:51.421268371Z" level=info msg="Forcibly stopping sandbox \"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\"" Apr 25 00:01:51.558890 containerd[1719]: 2026-04-25 00:01:51.497 [WARNING][6245] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"475a688c-c33d-44aa-8f1e-2084fa8ba675", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"60cb9cd7ed741fa574105e5cd8ae0066f9d89096260855ee64ca18bccad08392", Pod:"coredns-66bc5c9577-xrrd8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e30e995580", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:51.558890 containerd[1719]: 2026-04-25 00:01:51.497 [INFO][6245] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Apr 25 00:01:51.558890 containerd[1719]: 2026-04-25 00:01:51.497 [INFO][6245] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" iface="eth0" netns="" Apr 25 00:01:51.558890 containerd[1719]: 2026-04-25 00:01:51.498 [INFO][6245] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Apr 25 00:01:51.558890 containerd[1719]: 2026-04-25 00:01:51.498 [INFO][6245] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Apr 25 00:01:51.558890 containerd[1719]: 2026-04-25 00:01:51.538 [INFO][6253] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" HandleID="k8s-pod-network.60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" Apr 25 00:01:51.558890 containerd[1719]: 2026-04-25 00:01:51.538 [INFO][6253] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:51.558890 containerd[1719]: 2026-04-25 00:01:51.538 [INFO][6253] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:51.558890 containerd[1719]: 2026-04-25 00:01:51.548 [WARNING][6253] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" HandleID="k8s-pod-network.60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" Apr 25 00:01:51.558890 containerd[1719]: 2026-04-25 00:01:51.548 [INFO][6253] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" HandleID="k8s-pod-network.60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Workload="ci--4081.3.6--n--3087b9d021-k8s-coredns--66bc5c9577--xrrd8-eth0" Apr 25 00:01:51.558890 containerd[1719]: 2026-04-25 00:01:51.554 [INFO][6253] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:51.558890 containerd[1719]: 2026-04-25 00:01:51.556 [INFO][6245] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2" Apr 25 00:01:51.560623 containerd[1719]: time="2026-04-25T00:01:51.558979966Z" level=info msg="TearDown network for sandbox \"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\" successfully" Apr 25 00:01:51.568817 containerd[1719]: time="2026-04-25T00:01:51.568767793Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:01:51.568914 containerd[1719]: time="2026-04-25T00:01:51.568866795Z" level=info msg="RemovePodSandbox \"60dd647c70f5c9dc9e36eff3c460b686d538e6bdafe13cb4eef6e72552c4a2a2\" returns successfully" Apr 25 00:01:51.569638 containerd[1719]: time="2026-04-25T00:01:51.569579004Z" level=info msg="StopPodSandbox for \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\"" Apr 25 00:01:51.688800 containerd[1719]: 2026-04-25 00:01:51.633 [WARNING][6268] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d30beaa3-b48e-4165-93c6-fa00a976739a", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0", Pod:"csi-node-driver-tz7wk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid77e43ac0d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:51.688800 containerd[1719]: 2026-04-25 00:01:51.634 [INFO][6268] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Apr 25 00:01:51.688800 containerd[1719]: 2026-04-25 00:01:51.634 [INFO][6268] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" iface="eth0" netns="" Apr 25 00:01:51.688800 containerd[1719]: 2026-04-25 00:01:51.634 [INFO][6268] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Apr 25 00:01:51.688800 containerd[1719]: 2026-04-25 00:01:51.634 [INFO][6268] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Apr 25 00:01:51.688800 containerd[1719]: 2026-04-25 00:01:51.673 [INFO][6275] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" HandleID="k8s-pod-network.8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Workload="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" Apr 25 00:01:51.688800 containerd[1719]: 2026-04-25 00:01:51.673 [INFO][6275] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:51.688800 containerd[1719]: 2026-04-25 00:01:51.673 [INFO][6275] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:51.688800 containerd[1719]: 2026-04-25 00:01:51.683 [WARNING][6275] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" HandleID="k8s-pod-network.8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Workload="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" Apr 25 00:01:51.688800 containerd[1719]: 2026-04-25 00:01:51.683 [INFO][6275] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" HandleID="k8s-pod-network.8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Workload="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" Apr 25 00:01:51.688800 containerd[1719]: 2026-04-25 00:01:51.685 [INFO][6275] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:51.688800 containerd[1719]: 2026-04-25 00:01:51.686 [INFO][6268] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Apr 25 00:01:51.690554 containerd[1719]: time="2026-04-25T00:01:51.690041174Z" level=info msg="TearDown network for sandbox \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\" successfully" Apr 25 00:01:51.690554 containerd[1719]: time="2026-04-25T00:01:51.690079774Z" level=info msg="StopPodSandbox for \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\" returns successfully" Apr 25 00:01:51.691343 containerd[1719]: time="2026-04-25T00:01:51.690996886Z" level=info msg="RemovePodSandbox for \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\"" Apr 25 00:01:51.691343 containerd[1719]: time="2026-04-25T00:01:51.691035187Z" level=info msg="Forcibly stopping sandbox \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\"" Apr 25 00:01:51.737876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3590172241.mount: Deactivated successfully. 
Apr 25 00:01:51.792169 containerd[1719]: 2026-04-25 00:01:51.747 [WARNING][6290] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d30beaa3-b48e-4165-93c6-fa00a976739a", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0", Pod:"csi-node-driver-tz7wk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid77e43ac0d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:51.792169 containerd[1719]: 2026-04-25 00:01:51.748 [INFO][6290] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Apr 25 00:01:51.792169 containerd[1719]: 2026-04-25 00:01:51.748 [INFO][6290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" iface="eth0" netns="" Apr 25 00:01:51.792169 containerd[1719]: 2026-04-25 00:01:51.748 [INFO][6290] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Apr 25 00:01:51.792169 containerd[1719]: 2026-04-25 00:01:51.748 [INFO][6290] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Apr 25 00:01:51.792169 containerd[1719]: 2026-04-25 00:01:51.772 [INFO][6298] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" HandleID="k8s-pod-network.8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Workload="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" Apr 25 00:01:51.792169 containerd[1719]: 2026-04-25 00:01:51.772 [INFO][6298] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:51.792169 containerd[1719]: 2026-04-25 00:01:51.773 [INFO][6298] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:51.792169 containerd[1719]: 2026-04-25 00:01:51.783 [WARNING][6298] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" HandleID="k8s-pod-network.8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Workload="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" Apr 25 00:01:51.792169 containerd[1719]: 2026-04-25 00:01:51.784 [INFO][6298] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" HandleID="k8s-pod-network.8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Workload="ci--4081.3.6--n--3087b9d021-k8s-csi--node--driver--tz7wk-eth0" Apr 25 00:01:51.792169 containerd[1719]: 2026-04-25 00:01:51.786 [INFO][6298] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:51.792169 containerd[1719]: 2026-04-25 00:01:51.789 [INFO][6290] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d" Apr 25 00:01:51.793035 containerd[1719]: time="2026-04-25T00:01:51.792806913Z" level=info msg="TearDown network for sandbox \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\" successfully" Apr 25 00:01:51.804365 containerd[1719]: time="2026-04-25T00:01:51.804010659Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:01:51.804365 containerd[1719]: time="2026-04-25T00:01:51.804198162Z" level=info msg="RemovePodSandbox \"8444c5e36cd57414e8cde0225a5fb03e328caa9682336f015d4cfce61e443d2d\" returns successfully" Apr 25 00:01:51.806102 containerd[1719]: time="2026-04-25T00:01:51.805914184Z" level=info msg="StopPodSandbox for \"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\"" Apr 25 00:01:51.927792 containerd[1719]: 2026-04-25 00:01:51.863 [WARNING][6317] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0", GenerateName:"calico-kube-controllers-7bd5cc46d5-", Namespace:"calico-system", SelfLink:"", UID:"1e5e98eb-3874-4565-91da-8cf70bd88eeb", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bd5cc46d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189", Pod:"calico-kube-controllers-7bd5cc46d5-99qj9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.65/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali332dd8f895c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:51.927792 containerd[1719]: 2026-04-25 00:01:51.863 [INFO][6317] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Apr 25 00:01:51.927792 containerd[1719]: 2026-04-25 00:01:51.863 [INFO][6317] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" iface="eth0" netns="" Apr 25 00:01:51.927792 containerd[1719]: 2026-04-25 00:01:51.863 [INFO][6317] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Apr 25 00:01:51.927792 containerd[1719]: 2026-04-25 00:01:51.863 [INFO][6317] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Apr 25 00:01:51.927792 containerd[1719]: 2026-04-25 00:01:51.899 [INFO][6324] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" HandleID="k8s-pod-network.d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" Apr 25 00:01:51.927792 containerd[1719]: 2026-04-25 00:01:51.899 [INFO][6324] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:51.927792 containerd[1719]: 2026-04-25 00:01:51.899 [INFO][6324] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:51.927792 containerd[1719]: 2026-04-25 00:01:51.912 [WARNING][6324] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" HandleID="k8s-pod-network.d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" Apr 25 00:01:51.927792 containerd[1719]: 2026-04-25 00:01:51.913 [INFO][6324] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" HandleID="k8s-pod-network.d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" Apr 25 00:01:51.927792 containerd[1719]: 2026-04-25 00:01:51.917 [INFO][6324] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:51.927792 containerd[1719]: 2026-04-25 00:01:51.923 [INFO][6317] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Apr 25 00:01:51.930455 containerd[1719]: time="2026-04-25T00:01:51.927836373Z" level=info msg="TearDown network for sandbox \"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\" successfully" Apr 25 00:01:51.930455 containerd[1719]: time="2026-04-25T00:01:51.927867173Z" level=info msg="StopPodSandbox for \"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\" returns successfully" Apr 25 00:01:51.930455 containerd[1719]: time="2026-04-25T00:01:51.928531382Z" level=info msg="RemovePodSandbox for \"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\"" Apr 25 00:01:51.930455 containerd[1719]: time="2026-04-25T00:01:51.928567383Z" level=info msg="Forcibly stopping sandbox \"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\"" Apr 25 00:01:52.036744 containerd[1719]: 2026-04-25 00:01:51.989 [WARNING][6338] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0", GenerateName:"calico-kube-controllers-7bd5cc46d5-", Namespace:"calico-system", SelfLink:"", UID:"1e5e98eb-3874-4565-91da-8cf70bd88eeb", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bd5cc46d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"42a1731414ae6568b5f299bca592b9f97c2c7fb4d90821bb680f701e3c18b189", Pod:"calico-kube-controllers-7bd5cc46d5-99qj9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali332dd8f895c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:52.036744 containerd[1719]: 2026-04-25 00:01:51.989 [INFO][6338] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Apr 25 00:01:52.036744 containerd[1719]: 2026-04-25 00:01:51.989 [INFO][6338] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" iface="eth0" netns="" Apr 25 00:01:52.036744 containerd[1719]: 2026-04-25 00:01:51.989 [INFO][6338] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Apr 25 00:01:52.036744 containerd[1719]: 2026-04-25 00:01:51.989 [INFO][6338] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Apr 25 00:01:52.036744 containerd[1719]: 2026-04-25 00:01:52.019 [INFO][6345] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" HandleID="k8s-pod-network.d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" Apr 25 00:01:52.036744 containerd[1719]: 2026-04-25 00:01:52.019 [INFO][6345] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:52.036744 containerd[1719]: 2026-04-25 00:01:52.019 [INFO][6345] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:52.036744 containerd[1719]: 2026-04-25 00:01:52.026 [WARNING][6345] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" HandleID="k8s-pod-network.d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" Apr 25 00:01:52.036744 containerd[1719]: 2026-04-25 00:01:52.026 [INFO][6345] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" HandleID="k8s-pod-network.d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Workload="ci--4081.3.6--n--3087b9d021-k8s-calico--kube--controllers--7bd5cc46d5--99qj9-eth0" Apr 25 00:01:52.036744 containerd[1719]: 2026-04-25 00:01:52.029 [INFO][6345] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:52.036744 containerd[1719]: 2026-04-25 00:01:52.032 [INFO][6338] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d" Apr 25 00:01:52.036744 containerd[1719]: time="2026-04-25T00:01:52.036116384Z" level=info msg="TearDown network for sandbox \"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\" successfully" Apr 25 00:01:52.046654 containerd[1719]: time="2026-04-25T00:01:52.046542920Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:01:52.046769 containerd[1719]: time="2026-04-25T00:01:52.046700222Z" level=info msg="RemovePodSandbox \"d4dd13286fee0178396b57cfb071ef6dc7bca3dd0abc971c52fa8628f9ee918d\" returns successfully" Apr 25 00:01:52.047754 containerd[1719]: time="2026-04-25T00:01:52.047712435Z" level=info msg="StopPodSandbox for \"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\"" Apr 25 00:01:52.177194 containerd[1719]: 2026-04-25 00:01:52.127 [WARNING][6359] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8bd4aa88-033e-41a0-bd2f-6042f3bf091b", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9", Pod:"goldmane-cccfbd5cf-7ctl7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calia340b6e8035", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:52.177194 containerd[1719]: 2026-04-25 00:01:52.127 [INFO][6359] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Apr 25 00:01:52.177194 containerd[1719]: 2026-04-25 00:01:52.128 [INFO][6359] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" iface="eth0" netns="" Apr 25 00:01:52.177194 containerd[1719]: 2026-04-25 00:01:52.128 [INFO][6359] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Apr 25 00:01:52.177194 containerd[1719]: 2026-04-25 00:01:52.128 [INFO][6359] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Apr 25 00:01:52.177194 containerd[1719]: 2026-04-25 00:01:52.160 [INFO][6366] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" HandleID="k8s-pod-network.4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Workload="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" Apr 25 00:01:52.177194 containerd[1719]: 2026-04-25 00:01:52.161 [INFO][6366] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:52.177194 containerd[1719]: 2026-04-25 00:01:52.161 [INFO][6366] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:52.177194 containerd[1719]: 2026-04-25 00:01:52.169 [WARNING][6366] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" HandleID="k8s-pod-network.4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Workload="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" Apr 25 00:01:52.177194 containerd[1719]: 2026-04-25 00:01:52.169 [INFO][6366] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" HandleID="k8s-pod-network.4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Workload="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" Apr 25 00:01:52.177194 containerd[1719]: 2026-04-25 00:01:52.170 [INFO][6366] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:52.177194 containerd[1719]: 2026-04-25 00:01:52.173 [INFO][6359] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Apr 25 00:01:52.178467 containerd[1719]: time="2026-04-25T00:01:52.177361525Z" level=info msg="TearDown network for sandbox \"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\" successfully" Apr 25 00:01:52.178467 containerd[1719]: time="2026-04-25T00:01:52.177392825Z" level=info msg="StopPodSandbox for \"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\" returns successfully" Apr 25 00:01:52.178951 containerd[1719]: time="2026-04-25T00:01:52.178633742Z" level=info msg="RemovePodSandbox for \"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\"" Apr 25 00:01:52.178951 containerd[1719]: time="2026-04-25T00:01:52.178669142Z" level=info msg="Forcibly stopping sandbox \"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\"" Apr 25 00:01:52.284674 containerd[1719]: 2026-04-25 00:01:52.230 [WARNING][6381] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8bd4aa88-033e-41a0-bd2f-6042f3bf091b", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 0, 0, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-3087b9d021", ContainerID:"2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9", Pod:"goldmane-cccfbd5cf-7ctl7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia340b6e8035", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 00:01:52.284674 containerd[1719]: 2026-04-25 00:01:52.230 [INFO][6381] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Apr 25 00:01:52.284674 containerd[1719]: 2026-04-25 00:01:52.230 [INFO][6381] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" iface="eth0" netns="" Apr 25 00:01:52.284674 containerd[1719]: 2026-04-25 00:01:52.230 [INFO][6381] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Apr 25 00:01:52.284674 containerd[1719]: 2026-04-25 00:01:52.230 [INFO][6381] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Apr 25 00:01:52.284674 containerd[1719]: 2026-04-25 00:01:52.267 [INFO][6389] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" HandleID="k8s-pod-network.4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Workload="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" Apr 25 00:01:52.284674 containerd[1719]: 2026-04-25 00:01:52.267 [INFO][6389] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:52.284674 containerd[1719]: 2026-04-25 00:01:52.267 [INFO][6389] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:52.284674 containerd[1719]: 2026-04-25 00:01:52.278 [WARNING][6389] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" HandleID="k8s-pod-network.4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Workload="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" Apr 25 00:01:52.284674 containerd[1719]: 2026-04-25 00:01:52.278 [INFO][6389] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" HandleID="k8s-pod-network.4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Workload="ci--4081.3.6--n--3087b9d021-k8s-goldmane--cccfbd5cf--7ctl7-eth0" Apr 25 00:01:52.284674 containerd[1719]: 2026-04-25 00:01:52.280 [INFO][6389] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:52.284674 containerd[1719]: 2026-04-25 00:01:52.282 [INFO][6381] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b" Apr 25 00:01:52.285820 containerd[1719]: time="2026-04-25T00:01:52.284963227Z" level=info msg="TearDown network for sandbox \"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\" successfully" Apr 25 00:01:52.295312 containerd[1719]: time="2026-04-25T00:01:52.294836356Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:01:52.295312 containerd[1719]: time="2026-04-25T00:01:52.295093259Z" level=info msg="RemovePodSandbox \"4eb4fe5da499aff1b60a64eca8dc7b3f10a433c5cbfe61ecd886b9d30898f75b\" returns successfully" Apr 25 00:01:52.297244 containerd[1719]: time="2026-04-25T00:01:52.297211287Z" level=info msg="StopPodSandbox for \"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\"" Apr 25 00:01:52.403770 containerd[1719]: 2026-04-25 00:01:52.345 [WARNING][6403] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-whisker--5689597f97--ghq8j-eth0" Apr 25 00:01:52.403770 containerd[1719]: 2026-04-25 00:01:52.346 [INFO][6403] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Apr 25 00:01:52.403770 containerd[1719]: 2026-04-25 00:01:52.346 [INFO][6403] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" iface="eth0" netns="" Apr 25 00:01:52.403770 containerd[1719]: 2026-04-25 00:01:52.346 [INFO][6403] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Apr 25 00:01:52.403770 containerd[1719]: 2026-04-25 00:01:52.346 [INFO][6403] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Apr 25 00:01:52.403770 containerd[1719]: 2026-04-25 00:01:52.387 [INFO][6410] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" HandleID="k8s-pod-network.bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Workload="ci--4081.3.6--n--3087b9d021-k8s-whisker--5689597f97--ghq8j-eth0" Apr 25 00:01:52.403770 containerd[1719]: 2026-04-25 00:01:52.387 [INFO][6410] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:52.403770 containerd[1719]: 2026-04-25 00:01:52.387 [INFO][6410] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:52.403770 containerd[1719]: 2026-04-25 00:01:52.397 [WARNING][6410] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" HandleID="k8s-pod-network.bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Workload="ci--4081.3.6--n--3087b9d021-k8s-whisker--5689597f97--ghq8j-eth0" Apr 25 00:01:52.403770 containerd[1719]: 2026-04-25 00:01:52.397 [INFO][6410] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" HandleID="k8s-pod-network.bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Workload="ci--4081.3.6--n--3087b9d021-k8s-whisker--5689597f97--ghq8j-eth0" Apr 25 00:01:52.403770 containerd[1719]: 2026-04-25 00:01:52.399 [INFO][6410] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:52.403770 containerd[1719]: 2026-04-25 00:01:52.401 [INFO][6403] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Apr 25 00:01:52.404834 containerd[1719]: time="2026-04-25T00:01:52.403805676Z" level=info msg="TearDown network for sandbox \"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\" successfully" Apr 25 00:01:52.404834 containerd[1719]: time="2026-04-25T00:01:52.403837777Z" level=info msg="StopPodSandbox for \"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\" returns successfully" Apr 25 00:01:52.404834 containerd[1719]: time="2026-04-25T00:01:52.404478685Z" level=info msg="RemovePodSandbox for \"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\"" Apr 25 00:01:52.404834 containerd[1719]: time="2026-04-25T00:01:52.404774189Z" level=info msg="Forcibly stopping sandbox \"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\"" Apr 25 00:01:52.515839 containerd[1719]: 2026-04-25 00:01:52.466 [WARNING][6424] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" WorkloadEndpoint="ci--4081.3.6--n--3087b9d021-k8s-whisker--5689597f97--ghq8j-eth0" Apr 25 00:01:52.515839 containerd[1719]: 2026-04-25 00:01:52.466 [INFO][6424] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Apr 25 00:01:52.515839 containerd[1719]: 2026-04-25 00:01:52.466 [INFO][6424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" iface="eth0" netns="" Apr 25 00:01:52.515839 containerd[1719]: 2026-04-25 00:01:52.466 [INFO][6424] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Apr 25 00:01:52.515839 containerd[1719]: 2026-04-25 00:01:52.466 [INFO][6424] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Apr 25 00:01:52.515839 containerd[1719]: 2026-04-25 00:01:52.501 [INFO][6431] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" HandleID="k8s-pod-network.bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Workload="ci--4081.3.6--n--3087b9d021-k8s-whisker--5689597f97--ghq8j-eth0" Apr 25 00:01:52.515839 containerd[1719]: 2026-04-25 00:01:52.501 [INFO][6431] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 00:01:52.515839 containerd[1719]: 2026-04-25 00:01:52.501 [INFO][6431] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 00:01:52.515839 containerd[1719]: 2026-04-25 00:01:52.509 [WARNING][6431] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" HandleID="k8s-pod-network.bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Workload="ci--4081.3.6--n--3087b9d021-k8s-whisker--5689597f97--ghq8j-eth0" Apr 25 00:01:52.515839 containerd[1719]: 2026-04-25 00:01:52.509 [INFO][6431] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" HandleID="k8s-pod-network.bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Workload="ci--4081.3.6--n--3087b9d021-k8s-whisker--5689597f97--ghq8j-eth0" Apr 25 00:01:52.515839 containerd[1719]: 2026-04-25 00:01:52.511 [INFO][6431] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 00:01:52.515839 containerd[1719]: 2026-04-25 00:01:52.513 [INFO][6424] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb" Apr 25 00:01:52.516989 containerd[1719]: time="2026-04-25T00:01:52.516638047Z" level=info msg="TearDown network for sandbox \"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\" successfully" Apr 25 00:01:52.526083 containerd[1719]: time="2026-04-25T00:01:52.526048169Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 00:01:52.526253 containerd[1719]: time="2026-04-25T00:01:52.526217271Z" level=info msg="RemovePodSandbox \"bf51ea73752c298a5a86e6d3d026e58d0f18fcbee956b7689332493a033918eb\" returns successfully" Apr 25 00:01:52.574960 containerd[1719]: time="2026-04-25T00:01:52.574864205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:52.578254 containerd[1719]: time="2026-04-25T00:01:52.578185249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 25 00:01:52.581641 containerd[1719]: time="2026-04-25T00:01:52.581549893Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:52.586828 containerd[1719]: time="2026-04-25T00:01:52.586642859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:52.587813 containerd[1719]: time="2026-04-25T00:01:52.587688273Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 7.223588934s" Apr 25 00:01:52.587813 containerd[1719]: time="2026-04-25T00:01:52.587721473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 25 00:01:52.589114 containerd[1719]: time="2026-04-25T00:01:52.589024490Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 25 00:01:52.597956 containerd[1719]: time="2026-04-25T00:01:52.597871405Z" level=info msg="CreateContainer within sandbox \"2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 25 00:01:52.629236 containerd[1719]: time="2026-04-25T00:01:52.629197314Z" level=info msg="CreateContainer within sandbox \"2a8b776accd3dcec18852d70815fe38963bb3f3b6a1ac519547bd7effd9cceb9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6fff89793d3927f99641eadd8a9e1d6fafac806d013781c404ce07e49a853748\"" Apr 25 00:01:52.630965 containerd[1719]: time="2026-04-25T00:01:52.629753421Z" level=info msg="StartContainer for \"6fff89793d3927f99641eadd8a9e1d6fafac806d013781c404ce07e49a853748\"" Apr 25 00:01:52.668876 systemd[1]: Started cri-containerd-6fff89793d3927f99641eadd8a9e1d6fafac806d013781c404ce07e49a853748.scope - libcontainer container 6fff89793d3927f99641eadd8a9e1d6fafac806d013781c404ce07e49a853748. 
Apr 25 00:01:52.712212 containerd[1719]: time="2026-04-25T00:01:52.712133194Z" level=info msg="StartContainer for \"6fff89793d3927f99641eadd8a9e1d6fafac806d013781c404ce07e49a853748\" returns successfully" Apr 25 00:01:53.975289 containerd[1719]: time="2026-04-25T00:01:53.975232756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:54.020361 containerd[1719]: time="2026-04-25T00:01:54.020280443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 25 00:01:54.069163 containerd[1719]: time="2026-04-25T00:01:54.069075379Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:54.113768 containerd[1719]: time="2026-04-25T00:01:54.113705161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 00:01:54.114748 containerd[1719]: time="2026-04-25T00:01:54.114480271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.52542288s" Apr 25 00:01:54.114748 containerd[1719]: time="2026-04-25T00:01:54.114522771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 25 00:01:54.116255 containerd[1719]: time="2026-04-25T00:01:54.116050091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" 
Apr 25 00:01:54.165200 containerd[1719]: time="2026-04-25T00:01:54.165158531Z" level=info msg="CreateContainer within sandbox \"4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 25 00:01:54.503723 systemd[1]: run-containerd-runc-k8s.io-6fff89793d3927f99641eadd8a9e1d6fafac806d013781c404ce07e49a853748-runc.9kgmdw.mount: Deactivated successfully. Apr 25 00:01:54.511857 containerd[1719]: time="2026-04-25T00:01:54.511811849Z" level=info msg="CreateContainer within sandbox \"4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"58c8294b611a55241c44c9e60f4dbfd750a6d36ac464f428d2a556b4a6f69d89\"" Apr 25 00:01:54.514925 containerd[1719]: time="2026-04-25T00:01:54.513522871Z" level=info msg="StartContainer for \"58c8294b611a55241c44c9e60f4dbfd750a6d36ac464f428d2a556b4a6f69d89\"" Apr 25 00:01:54.562585 systemd[1]: Started cri-containerd-58c8294b611a55241c44c9e60f4dbfd750a6d36ac464f428d2a556b4a6f69d89.scope - libcontainer container 58c8294b611a55241c44c9e60f4dbfd750a6d36ac464f428d2a556b4a6f69d89. Apr 25 00:01:54.629102 containerd[1719]: time="2026-04-25T00:01:54.628971176Z" level=info msg="StartContainer for \"58c8294b611a55241c44c9e60f4dbfd750a6d36ac464f428d2a556b4a6f69d89\" returns successfully" Apr 25 00:01:55.019735 systemd[1]: Started sshd@12-10.0.0.19:22-4.175.71.9:57940.service - OpenSSH per-connection server daemon (4.175.71.9:57940). Apr 25 00:01:55.129975 sshd[6591]: Accepted publickey for core from 4.175.71.9 port 57940 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 25 00:01:55.131627 sshd[6591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:01:55.136283 systemd-logind[1702]: New session 15 of user core. Apr 25 00:01:55.141806 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 25 00:01:55.309615 sshd[6591]: pam_unix(sshd:session): session closed for user core Apr 25 00:01:55.316317 systemd[1]: sshd@12-10.0.0.19:22-4.175.71.9:57940.service: Deactivated successfully. Apr 25 00:01:55.319928 systemd[1]: session-15.scope: Deactivated successfully. Apr 25 00:01:55.321275 systemd-logind[1702]: Session 15 logged out. Waiting for processes to exit. Apr 25 00:01:55.336692 systemd[1]: Started sshd@13-10.0.0.19:22-4.175.71.9:32816.service - OpenSSH per-connection server daemon (4.175.71.9:32816). Apr 25 00:01:55.337991 systemd-logind[1702]: Removed session 15. Apr 25 00:01:55.451954 sshd[6611]: Accepted publickey for core from 4.175.71.9 port 32816 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 25 00:01:55.453149 sshd[6611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:01:55.462938 systemd-logind[1702]: New session 16 of user core. Apr 25 00:01:55.469622 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 25 00:01:55.658665 sshd[6611]: pam_unix(sshd:session): session closed for user core Apr 25 00:01:55.665011 systemd[1]: sshd@13-10.0.0.19:22-4.175.71.9:32816.service: Deactivated successfully. Apr 25 00:01:55.672138 systemd[1]: session-16.scope: Deactivated successfully. Apr 25 00:01:55.676597 systemd-logind[1702]: Session 16 logged out. Waiting for processes to exit. Apr 25 00:01:55.693664 systemd[1]: Started sshd@14-10.0.0.19:22-4.175.71.9:32826.service - OpenSSH per-connection server daemon (4.175.71.9:32826). Apr 25 00:01:55.694953 systemd-logind[1702]: Removed session 16. Apr 25 00:01:55.815320 sshd[6635]: Accepted publickey for core from 4.175.71.9 port 32826 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA Apr 25 00:01:55.815932 sshd[6635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:01:55.820120 systemd-logind[1702]: New session 17 of user core. 
Apr 25 00:01:55.826556 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 25 00:01:55.979027 sshd[6635]: pam_unix(sshd:session): session closed for user core
Apr 25 00:01:55.983463 systemd[1]: sshd@14-10.0.0.19:22-4.175.71.9:32826.service: Deactivated successfully.
Apr 25 00:01:55.985753 systemd[1]: session-17.scope: Deactivated successfully.
Apr 25 00:01:55.986974 systemd-logind[1702]: Session 17 logged out. Waiting for processes to exit.
Apr 25 00:01:55.987978 systemd-logind[1702]: Removed session 17.
Apr 25 00:01:59.159686 containerd[1719]: time="2026-04-25T00:01:59.159626145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:59.223222 containerd[1719]: time="2026-04-25T00:01:59.222799365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502"
Apr 25 00:01:59.269512 containerd[1719]: time="2026-04-25T00:01:59.269429770Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:59.317741 containerd[1719]: time="2026-04-25T00:01:59.317699496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:01:59.319035 containerd[1719]: time="2026-04-25T00:01:59.318837511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 5.20274852s"
Apr 25 00:01:59.319035 containerd[1719]: time="2026-04-25T00:01:59.318874611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\""
Apr 25 00:01:59.324495 containerd[1719]: time="2026-04-25T00:01:59.323572672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Apr 25 00:01:59.366811 containerd[1719]: time="2026-04-25T00:01:59.366776333Z" level=info msg="CreateContainer within sandbox \"395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Apr 25 00:01:59.666584 containerd[1719]: time="2026-04-25T00:01:59.666546723Z" level=info msg="CreateContainer within sandbox \"395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"38c4f1831d153b4e28264fb2ff4f54711c807470f2e3be498952815d4d482d2c\""
Apr 25 00:01:59.668693 containerd[1719]: time="2026-04-25T00:01:59.667245632Z" level=info msg="StartContainer for \"38c4f1831d153b4e28264fb2ff4f54711c807470f2e3be498952815d4d482d2c\""
Apr 25 00:01:59.705652 systemd[1]: Started cri-containerd-38c4f1831d153b4e28264fb2ff4f54711c807470f2e3be498952815d4d482d2c.scope - libcontainer container 38c4f1831d153b4e28264fb2ff4f54711c807470f2e3be498952815d4d482d2c.
Apr 25 00:01:59.765063 containerd[1719]: time="2026-04-25T00:01:59.764989100Z" level=info msg="StartContainer for \"38c4f1831d153b4e28264fb2ff4f54711c807470f2e3be498952815d4d482d2c\" returns successfully"
Apr 25 00:02:01.005688 systemd[1]: Started sshd@15-10.0.0.19:22-4.175.71.9:32830.service - OpenSSH per-connection server daemon (4.175.71.9:32830).
Apr 25 00:02:01.129160 sshd[6687]: Accepted publickey for core from 4.175.71.9 port 32830 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:02:01.130748 sshd[6687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:02:01.136074 systemd-logind[1702]: New session 18 of user core.
Apr 25 00:02:01.141557 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 25 00:02:01.294662 sshd[6687]: pam_unix(sshd:session): session closed for user core
Apr 25 00:02:01.298648 systemd-logind[1702]: Session 18 logged out. Waiting for processes to exit.
Apr 25 00:02:01.299451 systemd[1]: sshd@15-10.0.0.19:22-4.175.71.9:32830.service: Deactivated successfully.
Apr 25 00:02:01.301842 systemd[1]: session-18.scope: Deactivated successfully.
Apr 25 00:02:01.302959 systemd-logind[1702]: Removed session 18.
Apr 25 00:02:01.322692 systemd[1]: Started sshd@16-10.0.0.19:22-4.175.71.9:32836.service - OpenSSH per-connection server daemon (4.175.71.9:32836).
Apr 25 00:02:01.431965 sshd[6700]: Accepted publickey for core from 4.175.71.9 port 32836 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:02:01.433399 sshd[6700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:02:01.437640 systemd-logind[1702]: New session 19 of user core.
Apr 25 00:02:01.447706 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 25 00:02:01.667615 sshd[6700]: pam_unix(sshd:session): session closed for user core
Apr 25 00:02:01.671604 systemd[1]: sshd@16-10.0.0.19:22-4.175.71.9:32836.service: Deactivated successfully.
Apr 25 00:02:01.675125 systemd[1]: session-19.scope: Deactivated successfully.
Apr 25 00:02:01.676660 systemd-logind[1702]: Session 19 logged out. Waiting for processes to exit.
Apr 25 00:02:01.678148 systemd-logind[1702]: Removed session 19.
Apr 25 00:02:01.705214 systemd[1]: Started sshd@17-10.0.0.19:22-4.175.71.9:32844.service - OpenSSH per-connection server daemon (4.175.71.9:32844).
Apr 25 00:02:01.814294 sshd[6711]: Accepted publickey for core from 4.175.71.9 port 32844 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:02:01.816287 sshd[6711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:02:01.820885 systemd-logind[1702]: New session 20 of user core.
Apr 25 00:02:01.826549 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 25 00:02:02.830179 sshd[6711]: pam_unix(sshd:session): session closed for user core
Apr 25 00:02:02.837880 systemd-logind[1702]: Session 20 logged out. Waiting for processes to exit.
Apr 25 00:02:02.838263 systemd[1]: sshd@17-10.0.0.19:22-4.175.71.9:32844.service: Deactivated successfully.
Apr 25 00:02:02.842776 systemd[1]: session-20.scope: Deactivated successfully.
Apr 25 00:02:02.862650 systemd[1]: Started sshd@18-10.0.0.19:22-4.175.71.9:32856.service - OpenSSH per-connection server daemon (4.175.71.9:32856).
Apr 25 00:02:02.864076 systemd-logind[1702]: Removed session 20.
Apr 25 00:02:02.982967 sshd[6740]: Accepted publickey for core from 4.175.71.9 port 32856 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:02:02.985696 sshd[6740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:02:02.997664 systemd-logind[1702]: New session 21 of user core.
Apr 25 00:02:03.002591 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 25 00:02:03.257192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1043067637.mount: Deactivated successfully.
Apr 25 00:02:03.385646 sshd[6740]: pam_unix(sshd:session): session closed for user core
Apr 25 00:02:03.392933 systemd[1]: sshd@18-10.0.0.19:22-4.175.71.9:32856.service: Deactivated successfully.
Apr 25 00:02:03.397203 systemd[1]: session-21.scope: Deactivated successfully.
Apr 25 00:02:03.402485 systemd-logind[1702]: Session 21 logged out. Waiting for processes to exit.
Apr 25 00:02:03.423673 systemd[1]: Started sshd@19-10.0.0.19:22-4.175.71.9:32862.service - OpenSSH per-connection server daemon (4.175.71.9:32862).
Apr 25 00:02:03.429568 systemd-logind[1702]: Removed session 21.
Apr 25 00:02:03.560720 sshd[6751]: Accepted publickey for core from 4.175.71.9 port 32862 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:02:03.562549 sshd[6751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:02:03.568633 systemd-logind[1702]: New session 22 of user core.
Apr 25 00:02:03.572806 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 25 00:02:03.721754 sshd[6751]: pam_unix(sshd:session): session closed for user core
Apr 25 00:02:03.724888 systemd[1]: sshd@19-10.0.0.19:22-4.175.71.9:32862.service: Deactivated successfully.
Apr 25 00:02:03.727300 systemd[1]: session-22.scope: Deactivated successfully.
Apr 25 00:02:03.728823 systemd-logind[1702]: Session 22 logged out. Waiting for processes to exit.
Apr 25 00:02:03.730750 systemd-logind[1702]: Removed session 22.
Apr 25 00:02:03.813967 containerd[1719]: time="2026-04-25T00:02:03.813335931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:02:03.864332 containerd[1719]: time="2026-04-25T00:02:03.864165691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Apr 25 00:02:03.867936 containerd[1719]: time="2026-04-25T00:02:03.867763638Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:02:03.968359 containerd[1719]: time="2026-04-25T00:02:03.968274142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:02:03.969378 containerd[1719]: time="2026-04-25T00:02:03.969119353Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 4.64548598s"
Apr 25 00:02:03.969378 containerd[1719]: time="2026-04-25T00:02:03.969164253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Apr 25 00:02:03.971100 containerd[1719]: time="2026-04-25T00:02:03.970842275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Apr 25 00:02:04.017938 containerd[1719]: time="2026-04-25T00:02:04.017893786Z" level=info msg="CreateContainer within sandbox \"4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 25 00:02:04.192762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1859338873.mount: Deactivated successfully.
Apr 25 00:02:04.318875 containerd[1719]: time="2026-04-25T00:02:04.318817778Z" level=info msg="CreateContainer within sandbox \"4b5d955b48dc5a534dc30feca1d4d368882a1e98217bb50c216bdf957ec49887\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d6327a176a899e05d24dd049fe5dd85a7d7fd818226c6ed2683a14c7c2d86c07\""
Apr 25 00:02:04.320006 containerd[1719]: time="2026-04-25T00:02:04.319732590Z" level=info msg="StartContainer for \"d6327a176a899e05d24dd049fe5dd85a7d7fd818226c6ed2683a14c7c2d86c07\""
Apr 25 00:02:04.362567 systemd[1]: Started cri-containerd-d6327a176a899e05d24dd049fe5dd85a7d7fd818226c6ed2683a14c7c2d86c07.scope - libcontainer container d6327a176a899e05d24dd049fe5dd85a7d7fd818226c6ed2683a14c7c2d86c07.
Apr 25 00:02:04.416750 containerd[1719]: time="2026-04-25T00:02:04.416488741Z" level=info msg="StartContainer for \"d6327a176a899e05d24dd049fe5dd85a7d7fd818226c6ed2683a14c7c2d86c07\" returns successfully"
Apr 25 00:02:04.527883 kubelet[3298]: I0425 00:02:04.527482 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6d654f4d94-dm7nx" podStartSLOduration=6.572989014 podStartE2EDuration="41.527447976s" podCreationTimestamp="2026-04-25 00:01:23 +0000 UTC" firstStartedPulling="2026-04-25 00:01:29.015787005 +0000 UTC m=+100.129376583" lastFinishedPulling="2026-04-25 00:02:03.970245867 +0000 UTC m=+135.083835545" observedRunningTime="2026-04-25 00:02:04.52620266 +0000 UTC m=+135.639792338" watchObservedRunningTime="2026-04-25 00:02:04.527447976 +0000 UTC m=+135.641037554"
Apr 25 00:02:04.527883 kubelet[3298]: I0425 00:02:04.527728 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-7ctl7" podStartSLOduration=94.727961483 podStartE2EDuration="1m58.527715479s" podCreationTimestamp="2026-04-25 00:00:06 +0000 UTC" firstStartedPulling="2026-04-25 00:01:28.78894359 +0000 UTC m=+99.902533268" lastFinishedPulling="2026-04-25 00:01:52.588697686 +0000 UTC m=+123.702287264" observedRunningTime="2026-04-25 00:01:53.499521656 +0000 UTC m=+124.613111234" watchObservedRunningTime="2026-04-25 00:02:04.527715479 +0000 UTC m=+135.641305157"
Apr 25 00:02:08.752171 systemd[1]: Started sshd@20-10.0.0.19:22-4.175.71.9:43278.service - OpenSSH per-connection server daemon (4.175.71.9:43278).
Apr 25 00:02:08.869267 sshd[6851]: Accepted publickey for core from 4.175.71.9 port 43278 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:02:08.870787 sshd[6851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:02:08.875765 systemd-logind[1702]: New session 23 of user core.
Apr 25 00:02:08.880578 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 25 00:02:09.037754 sshd[6851]: pam_unix(sshd:session): session closed for user core
Apr 25 00:02:09.043534 systemd[1]: sshd@20-10.0.0.19:22-4.175.71.9:43278.service: Deactivated successfully.
Apr 25 00:02:09.045776 systemd[1]: session-23.scope: Deactivated successfully.
Apr 25 00:02:09.046515 systemd-logind[1702]: Session 23 logged out. Waiting for processes to exit.
Apr 25 00:02:09.047864 systemd-logind[1702]: Removed session 23.
Apr 25 00:02:10.923719 containerd[1719]: time="2026-04-25T00:02:10.923575880Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:02:10.926137 containerd[1719]: time="2026-04-25T00:02:10.926073112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Apr 25 00:02:10.929193 containerd[1719]: time="2026-04-25T00:02:10.929129752Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:02:10.933953 containerd[1719]: time="2026-04-25T00:02:10.933898313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:02:10.935141 containerd[1719]: time="2026-04-25T00:02:10.934622923Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 6.963717346s"
Apr 25 00:02:10.935141 containerd[1719]: time="2026-04-25T00:02:10.934662323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Apr 25 00:02:10.943631 containerd[1719]: time="2026-04-25T00:02:10.943602039Z" level=info msg="CreateContainer within sandbox \"395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 25 00:02:10.986484 containerd[1719]: time="2026-04-25T00:02:10.986362492Z" level=info msg="CreateContainer within sandbox \"395258261a904c2d6957520e3eba51ea37e376ed0566b2bb640b5f29951e1dd0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"57f03e6e3c9fb5c9ed54105f80159058354daeaf1c8fc55528ee51849a03c7c8\""
Apr 25 00:02:10.987835 containerd[1719]: time="2026-04-25T00:02:10.987808210Z" level=info msg="StartContainer for \"57f03e6e3c9fb5c9ed54105f80159058354daeaf1c8fc55528ee51849a03c7c8\""
Apr 25 00:02:11.031576 systemd[1]: Started cri-containerd-57f03e6e3c9fb5c9ed54105f80159058354daeaf1c8fc55528ee51849a03c7c8.scope - libcontainer container 57f03e6e3c9fb5c9ed54105f80159058354daeaf1c8fc55528ee51849a03c7c8.
Apr 25 00:02:11.066725 containerd[1719]: time="2026-04-25T00:02:11.066680330Z" level=info msg="StartContainer for \"57f03e6e3c9fb5c9ed54105f80159058354daeaf1c8fc55528ee51849a03c7c8\" returns successfully"
Apr 25 00:02:11.148469 kubelet[3298]: I0425 00:02:11.148430 3298 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 25 00:02:11.148469 kubelet[3298]: I0425 00:02:11.148481 3298 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 25 00:02:11.547329 kubelet[3298]: I0425 00:02:11.546347 3298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tz7wk" podStartSLOduration=89.129212144 podStartE2EDuration="2m5.546323932s" podCreationTimestamp="2026-04-25 00:00:06 +0000 UTC" firstStartedPulling="2026-04-25 00:01:34.51872795 +0000 UTC m=+105.632317528" lastFinishedPulling="2026-04-25 00:02:10.935839738 +0000 UTC m=+142.049429316" observedRunningTime="2026-04-25 00:02:11.545705224 +0000 UTC m=+142.659294802" watchObservedRunningTime="2026-04-25 00:02:11.546323932 +0000 UTC m=+142.659913510"
Apr 25 00:02:14.070953 systemd[1]: Started sshd@21-10.0.0.19:22-4.175.71.9:43286.service - OpenSSH per-connection server daemon (4.175.71.9:43286).
Apr 25 00:02:14.181784 sshd[6906]: Accepted publickey for core from 4.175.71.9 port 43286 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:02:14.183249 sshd[6906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:02:14.187884 systemd-logind[1702]: New session 24 of user core.
Apr 25 00:02:14.195574 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 25 00:02:14.344320 sshd[6906]: pam_unix(sshd:session): session closed for user core
Apr 25 00:02:14.349058 systemd-logind[1702]: Session 24 logged out. Waiting for processes to exit.
Apr 25 00:02:14.349689 systemd[1]: sshd@21-10.0.0.19:22-4.175.71.9:43286.service: Deactivated successfully.
Apr 25 00:02:14.352305 systemd[1]: session-24.scope: Deactivated successfully.
Apr 25 00:02:14.353358 systemd-logind[1702]: Removed session 24.
Apr 25 00:02:19.308814 systemd[1]: run-containerd-runc-k8s.io-6fff89793d3927f99641eadd8a9e1d6fafac806d013781c404ce07e49a853748-runc.gPyfxO.mount: Deactivated successfully.
Apr 25 00:02:19.375740 systemd[1]: Started sshd@22-10.0.0.19:22-4.175.71.9:33168.service - OpenSSH per-connection server daemon (4.175.71.9:33168).
Apr 25 00:02:19.496193 sshd[6964]: Accepted publickey for core from 4.175.71.9 port 33168 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:02:19.497931 sshd[6964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:02:19.502771 systemd-logind[1702]: New session 25 of user core.
Apr 25 00:02:19.510845 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 25 00:02:19.657904 sshd[6964]: pam_unix(sshd:session): session closed for user core
Apr 25 00:02:19.661846 systemd[1]: sshd@22-10.0.0.19:22-4.175.71.9:33168.service: Deactivated successfully.
Apr 25 00:02:19.664210 systemd[1]: session-25.scope: Deactivated successfully.
Apr 25 00:02:19.665224 systemd-logind[1702]: Session 25 logged out. Waiting for processes to exit.
Apr 25 00:02:19.666335 systemd-logind[1702]: Removed session 25.
Apr 25 00:02:23.357284 systemd[1]: run-containerd-runc-k8s.io-e11c77085110964c45d6371f47115da668c97cbe8b61329c6a19dfbc7a40015f-runc.GbyhH7.mount: Deactivated successfully.
Apr 25 00:02:24.693035 systemd[1]: Started sshd@23-10.0.0.19:22-4.175.71.9:33184.service - OpenSSH per-connection server daemon (4.175.71.9:33184).
Apr 25 00:02:24.804194 sshd[7018]: Accepted publickey for core from 4.175.71.9 port 33184 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:02:24.805577 sshd[7018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:02:24.813982 systemd-logind[1702]: New session 26 of user core.
Apr 25 00:02:24.818620 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 25 00:02:24.974771 sshd[7018]: pam_unix(sshd:session): session closed for user core
Apr 25 00:02:24.979336 systemd[1]: sshd@23-10.0.0.19:22-4.175.71.9:33184.service: Deactivated successfully.
Apr 25 00:02:24.981596 systemd[1]: session-26.scope: Deactivated successfully.
Apr 25 00:02:24.982388 systemd-logind[1702]: Session 26 logged out. Waiting for processes to exit.
Apr 25 00:02:24.983529 systemd-logind[1702]: Removed session 26.
Apr 25 00:02:30.008807 systemd[1]: Started sshd@24-10.0.0.19:22-4.175.71.9:60144.service - OpenSSH per-connection server daemon (4.175.71.9:60144).
Apr 25 00:02:30.130447 sshd[7037]: Accepted publickey for core from 4.175.71.9 port 60144 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:02:30.132128 sshd[7037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:02:30.138688 systemd-logind[1702]: New session 27 of user core.
Apr 25 00:02:30.146706 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 25 00:02:30.349691 sshd[7037]: pam_unix(sshd:session): session closed for user core
Apr 25 00:02:30.354224 systemd[1]: sshd@24-10.0.0.19:22-4.175.71.9:60144.service: Deactivated successfully.
Apr 25 00:02:30.357867 systemd[1]: session-27.scope: Deactivated successfully.
Apr 25 00:02:30.361195 systemd-logind[1702]: Session 27 logged out. Waiting for processes to exit.
Apr 25 00:02:30.362771 systemd-logind[1702]: Removed session 27.
Apr 25 00:02:35.381712 systemd[1]: Started sshd@25-10.0.0.19:22-4.175.71.9:54838.service - OpenSSH per-connection server daemon (4.175.71.9:54838).
Apr 25 00:02:35.491593 sshd[7052]: Accepted publickey for core from 4.175.71.9 port 54838 ssh2: RSA SHA256:AzjC9ZaMtBtJMGCMAseQRFn5Ar2om2imdYKHIvWUgrA
Apr 25 00:02:35.493111 sshd[7052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:02:35.498157 systemd-logind[1702]: New session 28 of user core.
Apr 25 00:02:35.505586 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 25 00:02:35.656896 sshd[7052]: pam_unix(sshd:session): session closed for user core
Apr 25 00:02:35.660944 systemd-logind[1702]: Session 28 logged out. Waiting for processes to exit.
Apr 25 00:02:35.661866 systemd[1]: sshd@25-10.0.0.19:22-4.175.71.9:54838.service: Deactivated successfully.
Apr 25 00:02:35.664048 systemd[1]: session-28.scope: Deactivated successfully.
Apr 25 00:02:35.665141 systemd-logind[1702]: Removed session 28.