Mar 14 00:22:09.136054 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026 Mar 14 00:22:09.136077 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:22:09.136092 kernel: BIOS-provided physical RAM map: Mar 14 00:22:09.136099 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 14 00:22:09.136104 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Mar 14 00:22:09.136110 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable Mar 14 00:22:09.136117 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved Mar 14 00:22:09.136127 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable Mar 14 00:22:09.136137 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20 Mar 14 00:22:09.136143 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved Mar 14 00:22:09.136149 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Mar 14 00:22:09.136159 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Mar 14 00:22:09.136166 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Mar 14 00:22:09.136172 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Mar 14 00:22:09.136186 kernel: printk: bootconsole [earlyser0] enabled Mar 14 00:22:09.136194 kernel: NX (Execute Disable) protection: active Mar 14 00:22:09.136201 kernel: APIC: Static calls initialized Mar 
14 00:22:09.136211 kernel: efi: EFI v2.7 by Microsoft Mar 14 00:22:09.136220 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f421418 Mar 14 00:22:09.136227 kernel: SMBIOS 3.1.0 present. Mar 14 00:22:09.136234 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Mar 14 00:22:09.136245 kernel: Hypervisor detected: Microsoft Hyper-V Mar 14 00:22:09.136253 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Mar 14 00:22:09.136260 kernel: Hyper-V: Host Build 10.0.26102.1212-1-0 Mar 14 00:22:09.136289 kernel: Hyper-V: Nested features: 0x1e0101 Mar 14 00:22:09.136299 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Mar 14 00:22:09.136306 kernel: Hyper-V: Using hypercall for remote TLB flush Mar 14 00:22:09.136342 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Mar 14 00:22:09.136358 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Mar 14 00:22:09.136370 kernel: tsc: Marking TSC unstable due to running on Hyper-V Mar 14 00:22:09.136378 kernel: tsc: Detected 2593.908 MHz processor Mar 14 00:22:09.136389 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 14 00:22:09.136399 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 14 00:22:09.136408 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Mar 14 00:22:09.136418 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 14 00:22:09.136425 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 14 00:22:09.136436 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Mar 14 00:22:09.136443 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Mar 14 00:22:09.136450 kernel: Using GB pages for direct mapping Mar 14 00:22:09.136461 kernel: 
Secure boot disabled Mar 14 00:22:09.136473 kernel: ACPI: Early table checksum verification disabled Mar 14 00:22:09.136487 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Mar 14 00:22:09.136495 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 14 00:22:09.136503 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 14 00:22:09.136514 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Mar 14 00:22:09.136522 kernel: ACPI: FACS 0x000000003FFFE000 000040 Mar 14 00:22:09.136555 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 14 00:22:09.136562 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 14 00:22:09.136575 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 14 00:22:09.136605 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 14 00:22:09.136612 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 14 00:22:09.136620 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 14 00:22:09.136630 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Mar 14 00:22:09.136639 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Mar 14 00:22:09.136647 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Mar 14 00:22:09.136656 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Mar 14 00:22:09.136667 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Mar 14 00:22:09.136677 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Mar 14 00:22:09.136699 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Mar 14 00:22:09.136709 kernel: ACPI: Reserving SRAT table 
memory at [mem 0x3ffd4000-0x3ffd41df] Mar 14 00:22:09.136719 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Mar 14 00:22:09.136726 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 14 00:22:09.136734 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Mar 14 00:22:09.142289 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Mar 14 00:22:09.142310 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Mar 14 00:22:09.142325 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Mar 14 00:22:09.142344 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Mar 14 00:22:09.142358 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Mar 14 00:22:09.142371 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Mar 14 00:22:09.142385 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Mar 14 00:22:09.142398 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Mar 14 00:22:09.142412 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Mar 14 00:22:09.142426 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Mar 14 00:22:09.142440 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Mar 14 00:22:09.142457 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Mar 14 00:22:09.142471 kernel: Zone ranges: Mar 14 00:22:09.142485 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 14 00:22:09.142499 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 14 00:22:09.142512 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Mar 14 00:22:09.142526 kernel: Movable zone start for each node Mar 14 00:22:09.142539 kernel: Early memory node ranges Mar 14 00:22:09.142558 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 14 00:22:09.142572 kernel: node 0: [mem 
0x0000000000100000-0x000000000437dfff] Mar 14 00:22:09.142589 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff] Mar 14 00:22:09.142603 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Mar 14 00:22:09.142616 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Mar 14 00:22:09.142629 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Mar 14 00:22:09.142644 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 14 00:22:09.142658 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 14 00:22:09.142672 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Mar 14 00:22:09.142726 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Mar 14 00:22:09.142740 kernel: ACPI: PM-Timer IO Port: 0x408 Mar 14 00:22:09.142756 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Mar 14 00:22:09.142769 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Mar 14 00:22:09.142783 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 14 00:22:09.142797 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 14 00:22:09.142811 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Mar 14 00:22:09.142826 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 14 00:22:09.142838 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Mar 14 00:22:09.142856 kernel: Booting paravirtualized kernel on Hyper-V Mar 14 00:22:09.142870 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 14 00:22:09.142889 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 14 00:22:09.142904 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Mar 14 00:22:09.142918 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Mar 14 00:22:09.142932 kernel: pcpu-alloc: [0] 0 1 Mar 14 00:22:09.142945 kernel: Hyper-V: PV spinlocks enabled Mar 14 00:22:09.142959 
kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 14 00:22:09.142975 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:22:09.142990 kernel: random: crng init done Mar 14 00:22:09.143005 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Mar 14 00:22:09.143018 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 14 00:22:09.143031 kernel: Fallback order for Node 0: 0 Mar 14 00:22:09.143044 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321 Mar 14 00:22:09.143057 kernel: Policy zone: Normal Mar 14 00:22:09.143069 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 14 00:22:09.143085 kernel: software IO TLB: area num 2. Mar 14 00:22:09.143100 kernel: Memory: 8066052K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 316916K reserved, 0K cma-reserved) Mar 14 00:22:09.143120 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 14 00:22:09.143153 kernel: ftrace: allocating 37996 entries in 149 pages Mar 14 00:22:09.143166 kernel: ftrace: allocated 149 pages with 4 groups Mar 14 00:22:09.143180 kernel: Dynamic Preempt: voluntary Mar 14 00:22:09.143197 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 14 00:22:09.143212 kernel: rcu: RCU event tracing is enabled. Mar 14 00:22:09.143227 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 14 00:22:09.143241 kernel: Trampoline variant of Tasks RCU enabled. 
Mar 14 00:22:09.143255 kernel: Rude variant of Tasks RCU enabled. Mar 14 00:22:09.143269 kernel: Tracing variant of Tasks RCU enabled. Mar 14 00:22:09.143286 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 14 00:22:09.143301 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 14 00:22:09.143315 kernel: Using NULL legacy PIC Mar 14 00:22:09.143329 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Mar 14 00:22:09.143343 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 14 00:22:09.143357 kernel: Console: colour dummy device 80x25 Mar 14 00:22:09.143371 kernel: printk: console [tty1] enabled Mar 14 00:22:09.143385 kernel: printk: console [ttyS0] enabled Mar 14 00:22:09.143402 kernel: printk: bootconsole [earlyser0] disabled Mar 14 00:22:09.143416 kernel: ACPI: Core revision 20230628 Mar 14 00:22:09.143436 kernel: Failed to register legacy timer interrupt Mar 14 00:22:09.143450 kernel: APIC: Switch to symmetric I/O mode setup Mar 14 00:22:09.143464 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 14 00:22:09.143478 kernel: Hyper-V: Using IPI hypercalls Mar 14 00:22:09.143492 kernel: APIC: send_IPI() replaced with hv_send_ipi() Mar 14 00:22:09.143506 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Mar 14 00:22:09.143521 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Mar 14 00:22:09.143538 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Mar 14 00:22:09.143552 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Mar 14 00:22:09.143566 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Mar 14 00:22:09.143580 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
5187.81 BogoMIPS (lpj=2593908) Mar 14 00:22:09.143594 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Mar 14 00:22:09.143608 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Mar 14 00:22:09.143622 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 14 00:22:09.143636 kernel: Spectre V2 : Mitigation: Retpolines Mar 14 00:22:09.143650 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 14 00:22:09.143664 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Mar 14 00:22:09.143724 kernel: RETBleed: Vulnerable Mar 14 00:22:09.143738 kernel: Speculative Store Bypass: Vulnerable Mar 14 00:22:09.143753 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Mar 14 00:22:09.143767 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 14 00:22:09.143781 kernel: active return thunk: its_return_thunk Mar 14 00:22:09.143795 kernel: ITS: Mitigation: Aligned branch/return thunks Mar 14 00:22:09.143809 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 14 00:22:09.143823 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 14 00:22:09.143837 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 14 00:22:09.143851 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Mar 14 00:22:09.143869 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Mar 14 00:22:09.143883 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Mar 14 00:22:09.143897 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 14 00:22:09.143910 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Mar 14 00:22:09.143924 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Mar 14 00:22:09.143938 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Mar 14 00:22:09.143952 kernel: 
x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Mar 14 00:22:09.143966 kernel: Freeing SMP alternatives memory: 32K Mar 14 00:22:09.143986 kernel: pid_max: default: 32768 minimum: 301 Mar 14 00:22:09.144000 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 14 00:22:09.144014 kernel: landlock: Up and running. Mar 14 00:22:09.144027 kernel: SELinux: Initializing. Mar 14 00:22:09.144044 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 14 00:22:09.144058 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 14 00:22:09.144072 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Mar 14 00:22:09.144086 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 14 00:22:09.144101 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 14 00:22:09.144115 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 14 00:22:09.144129 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Mar 14 00:22:09.144143 kernel: signal: max sigframe size: 3632 Mar 14 00:22:09.144157 kernel: rcu: Hierarchical SRCU implementation. Mar 14 00:22:09.144174 kernel: rcu: Max phase no-delay instances is 400. Mar 14 00:22:09.144188 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 14 00:22:09.144202 kernel: smp: Bringing up secondary CPUs ... Mar 14 00:22:09.144216 kernel: smpboot: x86: Booting SMP configuration: Mar 14 00:22:09.144230 kernel: .... node #0, CPUs: #1 Mar 14 00:22:09.144247 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. 
Mar 14 00:22:09.144262 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Mar 14 00:22:09.144276 kernel: smp: Brought up 1 node, 2 CPUs Mar 14 00:22:09.144291 kernel: smpboot: Max logical packages: 1 Mar 14 00:22:09.144308 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS) Mar 14 00:22:09.144321 kernel: devtmpfs: initialized Mar 14 00:22:09.144335 kernel: x86/mm: Memory block size: 128MB Mar 14 00:22:09.144349 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Mar 14 00:22:09.144363 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 14 00:22:09.144378 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 14 00:22:09.144392 kernel: pinctrl core: initialized pinctrl subsystem Mar 14 00:22:09.144406 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 14 00:22:09.144420 kernel: audit: initializing netlink subsys (disabled) Mar 14 00:22:09.144436 kernel: audit: type=2000 audit(1773447727.030:1): state=initialized audit_enabled=0 res=1 Mar 14 00:22:09.144450 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 14 00:22:09.144464 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 14 00:22:09.144478 kernel: cpuidle: using governor menu Mar 14 00:22:09.144492 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 14 00:22:09.144506 kernel: dca service started, version 1.12.1 Mar 14 00:22:09.144522 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff] Mar 14 00:22:09.144536 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Mar 14 00:22:09.144550 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 14 00:22:09.144567 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 14 00:22:09.144581 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 14 00:22:09.144595 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 14 00:22:09.144609 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 14 00:22:09.144623 kernel: ACPI: Added _OSI(Module Device) Mar 14 00:22:09.144638 kernel: ACPI: Added _OSI(Processor Device) Mar 14 00:22:09.144651 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 14 00:22:09.144666 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 14 00:22:09.144691 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 14 00:22:09.144705 kernel: ACPI: Interpreter enabled Mar 14 00:22:09.144720 kernel: ACPI: PM: (supports S0 S5) Mar 14 00:22:09.144734 kernel: ACPI: Using IOAPIC for interrupt routing Mar 14 00:22:09.144748 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 14 00:22:09.144762 kernel: PCI: Ignoring E820 reservations for host bridge windows Mar 14 00:22:09.144776 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Mar 14 00:22:09.144790 kernel: iommu: Default domain type: Translated Mar 14 00:22:09.144804 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 14 00:22:09.144819 kernel: efivars: Registered efivars operations Mar 14 00:22:09.144840 kernel: PCI: Using ACPI for IRQ routing Mar 14 00:22:09.144855 kernel: PCI: System does not support PCI Mar 14 00:22:09.144868 kernel: vgaarb: loaded Mar 14 00:22:09.144883 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Mar 14 00:22:09.144897 kernel: VFS: Disk quotas dquot_6.6.0 Mar 14 00:22:09.144911 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 14 00:22:09.144926 kernel: pnp: PnP ACPI init Mar 14 00:22:09.144940 kernel: pnp: PnP ACPI: found 3 devices Mar 14 00:22:09.144954 kernel: 
clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 14 00:22:09.144971 kernel: NET: Registered PF_INET protocol family Mar 14 00:22:09.144985 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 14 00:22:09.144999 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Mar 14 00:22:09.145013 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 14 00:22:09.145028 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 14 00:22:09.145042 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Mar 14 00:22:09.145056 kernel: TCP: Hash tables configured (established 65536 bind 65536) Mar 14 00:22:09.145071 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 14 00:22:09.145085 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 14 00:22:09.145104 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 14 00:22:09.145118 kernel: NET: Registered PF_XDP protocol family Mar 14 00:22:09.145133 kernel: PCI: CLS 0 bytes, default 64 Mar 14 00:22:09.145147 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 14 00:22:09.145162 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB) Mar 14 00:22:09.145176 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 14 00:22:09.145190 kernel: Initialise system trusted keyrings Mar 14 00:22:09.145204 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Mar 14 00:22:09.145220 kernel: Key type asymmetric registered Mar 14 00:22:09.145234 kernel: Asymmetric key parser 'x509' registered Mar 14 00:22:09.145248 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 14 00:22:09.145262 kernel: io scheduler mq-deadline registered Mar 14 00:22:09.145276 kernel: io scheduler kyber registered Mar 
14 00:22:09.145290 kernel: io scheduler bfq registered Mar 14 00:22:09.145304 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 14 00:22:09.145318 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 14 00:22:09.145333 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 14 00:22:09.145347 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Mar 14 00:22:09.145364 kernel: i8042: PNP: No PS/2 controller found. Mar 14 00:22:09.145548 kernel: rtc_cmos 00:02: registered as rtc0 Mar 14 00:22:09.145687 kernel: rtc_cmos 00:02: setting system clock to 2026-03-14T00:22:08 UTC (1773447728) Mar 14 00:22:09.145818 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Mar 14 00:22:09.145836 kernel: intel_pstate: CPU model not supported Mar 14 00:22:09.145850 kernel: efifb: probing for efifb Mar 14 00:22:09.145865 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 14 00:22:09.145884 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 14 00:22:09.145898 kernel: efifb: scrolling: redraw Mar 14 00:22:09.145912 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 14 00:22:09.145926 kernel: Console: switching to colour frame buffer device 128x48 Mar 14 00:22:09.145940 kernel: fb0: EFI VGA frame buffer device Mar 14 00:22:09.145954 kernel: pstore: Using crash dump compression: deflate Mar 14 00:22:09.145968 kernel: pstore: Registered efi_pstore as persistent store backend Mar 14 00:22:09.145986 kernel: NET: Registered PF_INET6 protocol family Mar 14 00:22:09.146000 kernel: Segment Routing with IPv6 Mar 14 00:22:09.146017 kernel: In-situ OAM (IOAM) with IPv6 Mar 14 00:22:09.146032 kernel: NET: Registered PF_PACKET protocol family Mar 14 00:22:09.146046 kernel: Key type dns_resolver registered Mar 14 00:22:09.146059 kernel: IPI shorthand broadcast: enabled Mar 14 00:22:09.146074 kernel: sched_clock: Marking stable (919002800, 52425000)->(1212301900, -240874100) 
Mar 14 00:22:09.146088 kernel: registered taskstats version 1 Mar 14 00:22:09.146102 kernel: Loading compiled-in X.509 certificates Mar 14 00:22:09.146116 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec' Mar 14 00:22:09.146130 kernel: Key type .fscrypt registered Mar 14 00:22:09.146147 kernel: Key type fscrypt-provisioning registered Mar 14 00:22:09.146161 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 14 00:22:09.146175 kernel: ima: Allocated hash algorithm: sha1 Mar 14 00:22:09.146189 kernel: ima: No architecture policies found Mar 14 00:22:09.146203 kernel: clk: Disabling unused clocks Mar 14 00:22:09.146217 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 14 00:22:09.146231 kernel: Write protecting the kernel read-only data: 36864k Mar 14 00:22:09.146245 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 14 00:22:09.146264 kernel: Run /init as init process Mar 14 00:22:09.146281 kernel: with arguments: Mar 14 00:22:09.146299 kernel: /init Mar 14 00:22:09.146313 kernel: with environment: Mar 14 00:22:09.146326 kernel: HOME=/ Mar 14 00:22:09.146340 kernel: TERM=linux Mar 14 00:22:09.146358 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:22:09.146376 systemd[1]: Detected virtualization microsoft. Mar 14 00:22:09.146390 systemd[1]: Detected architecture x86-64. Mar 14 00:22:09.146408 systemd[1]: Running in initrd. Mar 14 00:22:09.146422 systemd[1]: No hostname configured, using default hostname. Mar 14 00:22:09.146436 systemd[1]: Hostname set to . Mar 14 00:22:09.146452 systemd[1]: Initializing machine ID from random generator. 
Mar 14 00:22:09.146467 systemd[1]: Queued start job for default target initrd.target. Mar 14 00:22:09.146481 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:22:09.146496 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:22:09.146512 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 14 00:22:09.146530 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 00:22:09.146545 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 14 00:22:09.146560 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 14 00:22:09.146582 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 14 00:22:09.146598 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 14 00:22:09.146613 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:22:09.146628 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:22:09.146647 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:22:09.146662 systemd[1]: Reached target slices.target - Slice Units. Mar 14 00:22:09.146677 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:22:09.146812 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:22:09.146825 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:22:09.146834 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:22:09.146843 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Mar 14 00:22:09.146856 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 14 00:22:09.146864 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:22:09.146882 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:22:09.146890 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:22:09.146899 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:22:09.146911 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 14 00:22:09.146921 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 00:22:09.146934 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 14 00:22:09.146943 systemd[1]: Starting systemd-fsck-usr.service... Mar 14 00:22:09.146956 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 14 00:22:09.146967 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:22:09.146980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:22:09.147012 systemd-journald[177]: Collecting audit messages is disabled. Mar 14 00:22:09.147038 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 14 00:22:09.147052 systemd-journald[177]: Journal started Mar 14 00:22:09.147074 systemd-journald[177]: Runtime Journal (/run/log/journal/244145ea01d04135803171efd0a8e389) is 8.0M, max 158.7M, 150.7M free. Mar 14 00:22:09.156736 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 00:22:09.160034 systemd-modules-load[178]: Inserted module 'overlay' Mar 14 00:22:09.163359 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:22:09.171648 systemd[1]: Finished systemd-fsck-usr.service. Mar 14 00:22:09.174654 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 14 00:22:09.190980 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:22:09.208704 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 14 00:22:09.210276 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 14 00:22:09.215439 kernel: Bridge firewalling registered Mar 14 00:22:09.222989 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:22:09.227315 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:22:09.241873 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:22:09.246565 systemd-modules-load[178]: Inserted module 'br_netfilter' Mar 14 00:22:09.250618 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:22:09.257465 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:22:09.273960 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:22:09.282212 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:22:09.284193 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:22:09.292747 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:22:09.302866 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 14 00:22:09.310868 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 14 00:22:09.363987 dracut-cmdline[212]: dracut-dracut-053
Mar 14 00:22:09.363987 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:22:09.415726 kernel: SCSI subsystem initialized
Mar 14 00:22:09.413630 systemd-resolved[216]: Positive Trust Anchors:
Mar 14 00:22:09.413644 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:22:09.413729 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:22:09.457510 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:22:09.417141 systemd-resolved[216]: Defaulting to hostname 'linux'.
Mar 14 00:22:09.418315 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:22:09.423085 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:22:09.477702 kernel: iscsi: registered transport (tcp)
Mar 14 00:22:09.502735 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:22:09.502827 kernel: QLogic iSCSI HBA Driver
Mar 14 00:22:09.540160 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:22:09.553943 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:22:09.587093 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:22:09.587185 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:22:09.591941 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:22:09.633712 kernel: raid6: avx512x4 gen() 18244 MB/s
Mar 14 00:22:09.652689 kernel: raid6: avx512x2 gen() 18253 MB/s
Mar 14 00:22:09.672690 kernel: raid6: avx512x1 gen() 17772 MB/s
Mar 14 00:22:09.693696 kernel: raid6: avx2x4 gen() 17472 MB/s
Mar 14 00:22:09.713689 kernel: raid6: avx2x2 gen() 17661 MB/s
Mar 14 00:22:09.736182 kernel: raid6: avx2x1 gen() 13546 MB/s
Mar 14 00:22:09.736215 kernel: raid6: using algorithm avx512x2 gen() 18253 MB/s
Mar 14 00:22:09.760823 kernel: raid6: .... xor() 30561 MB/s, rmw enabled
Mar 14 00:22:09.760859 kernel: raid6: using avx512x2 recovery algorithm
Mar 14 00:22:09.784709 kernel: xor: automatically using best checksumming function avx
Mar 14 00:22:09.935709 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:22:09.946056 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:22:09.962989 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:22:09.975150 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Mar 14 00:22:09.979891 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:22:09.996986 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:22:10.010480 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Mar 14 00:22:10.038770 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:22:10.048954 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:22:10.093714 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:22:10.120936 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:22:10.159463 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:22:10.168252 kernel: cryptd: max_cpu_qlen set to 1000
Mar 14 00:22:10.175176 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:22:10.192315 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:22:10.200225 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:22:10.228942 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:22:10.250770 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 14 00:22:10.250810 kernel: hv_vmbus: Vmbus version:5.2
Mar 14 00:22:10.250827 kernel: AES CTR mode by8 optimization enabled
Mar 14 00:22:10.261292 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:22:10.266367 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:22:10.279478 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:22:10.290848 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:22:10.291137 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:22:10.307280 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 14 00:22:10.323050 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 14 00:22:10.323565 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:22:10.344561 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 14 00:22:10.344632 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 14 00:22:10.350013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:22:10.360283 kernel: PTP clock support registered
Mar 14 00:22:10.367284 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:22:10.381032 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:22:10.381154 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:22:10.409740 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 14 00:22:10.409807 kernel: hv_vmbus: registering driver hv_netvsc
Mar 14 00:22:10.414948 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:22:10.421375 kernel: hv_vmbus: registering driver hid_hyperv
Mar 14 00:22:10.439455 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 14 00:22:10.439522 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 14 00:22:10.443470 kernel: hv_utils: Registering HyperV Utility Driver
Mar 14 00:22:10.447621 kernel: hv_vmbus: registering driver hv_utils
Mar 14 00:22:10.454509 kernel: hv_utils: Heartbeat IC version 3.0
Mar 14 00:22:10.454556 kernel: hv_utils: Shutdown IC version 3.2
Mar 14 00:22:10.456708 kernel: hv_utils: TimeSync IC version 4.0
Mar 14 00:22:10.732615 systemd-resolved[216]: Clock change detected. Flushing caches.
Mar 14 00:22:10.749323 kernel: hv_vmbus: registering driver hv_storvsc
Mar 14 00:22:10.755621 kernel: scsi host0: storvsc_host_t
Mar 14 00:22:10.759261 kernel: scsi host1: storvsc_host_t
Mar 14 00:22:10.761812 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:22:10.765247 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 14 00:22:10.771271 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 14 00:22:10.781446 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:22:10.807861 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 14 00:22:10.808152 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 14 00:22:10.810253 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 14 00:22:10.828280 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 14 00:22:10.828585 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 14 00:22:10.832117 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 14 00:22:10.832411 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 14 00:22:10.838260 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 14 00:22:10.839441 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:22:10.865627 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#93 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 14 00:22:10.865786 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:22:10.865799 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 14 00:22:10.892248 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#68 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 14 00:22:10.915259 kernel: hv_netvsc 7ced8d49-a7fd-7ced-8d49-a7fd7ced8d49 eth0: VF slot 1 added
Mar 14 00:22:10.925254 kernel: hv_vmbus: registering driver hv_pci
Mar 14 00:22:10.933243 kernel: hv_pci 49f91120-3e39-41cb-bfc0-c2f22c1cc636: PCI VMBus probing: Using version 0x10004
Mar 14 00:22:10.944341 kernel: hv_pci 49f91120-3e39-41cb-bfc0-c2f22c1cc636: PCI host bridge to bus 3e39:00
Mar 14 00:22:10.944668 kernel: pci_bus 3e39:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Mar 14 00:22:10.949757 kernel: pci_bus 3e39:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 14 00:22:10.957649 kernel: pci 3e39:00:02.0: [15b3:1016] type 00 class 0x020000
Mar 14 00:22:10.964277 kernel: pci 3e39:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Mar 14 00:22:10.971335 kernel: pci 3e39:00:02.0: enabling Extended Tags
Mar 14 00:22:10.983621 kernel: pci 3e39:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 3e39:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Mar 14 00:22:10.997518 kernel: pci_bus 3e39:00: busn_res: [bus 00-ff] end is updated to 00
Mar 14 00:22:10.997857 kernel: pci 3e39:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Mar 14 00:22:11.037322 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (457)
Mar 14 00:22:11.060281 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (449)
Mar 14 00:22:11.070634 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 14 00:22:11.112278 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 14 00:22:11.132428 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 14 00:22:11.151038 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 14 00:22:11.157410 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 14 00:22:11.251032 kernel: mlx5_core 3e39:00:02.0: enabling device (0000 -> 0002)
Mar 14 00:22:11.251350 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:22:11.268271 kernel: mlx5_core 3e39:00:02.0: firmware version: 14.30.5026
Mar 14 00:22:11.276249 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:22:11.286332 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:22:11.296242 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:22:11.499217 kernel: hv_netvsc 7ced8d49-a7fd-7ced-8d49-a7fd7ced8d49 eth0: VF registering: eth1
Mar 14 00:22:11.505948 kernel: mlx5_core 3e39:00:02.0 eth1: joined to eth0
Mar 14 00:22:11.512251 kernel: mlx5_core 3e39:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Mar 14 00:22:11.541807 kernel: mlx5_core 3e39:00:02.0 enP15929s1: renamed from eth1
Mar 14 00:22:12.308245 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:22:12.309442 disk-uuid[600]: The operation has completed successfully.
Mar 14 00:22:12.409741 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:22:12.409863 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:22:12.422494 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:22:12.430337 sh[720]: Success
Mar 14 00:22:12.447394 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 14 00:22:12.545640 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:22:12.552326 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:22:12.555763 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:22:12.576244 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def
Mar 14 00:22:12.576299 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:22:12.583713 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:22:12.587059 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:22:12.589956 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:22:12.656345 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:22:12.662718 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:22:12.675403 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:22:12.679155 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:22:12.708253 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:22:12.708314 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:22:12.708336 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:22:12.772512 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:22:12.786431 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:22:12.808098 systemd-networkd[894]: lo: Link UP
Mar 14 00:22:12.808112 systemd-networkd[894]: lo: Gained carrier
Mar 14 00:22:12.810369 systemd-networkd[894]: Enumeration completed
Mar 14 00:22:12.810470 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:22:12.812372 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:22:12.812375 systemd-networkd[894]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:22:12.814551 systemd[1]: Reached target network.target - Network.
Mar 14 00:22:12.876597 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:22:12.885295 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:22:12.888743 kernel: mlx5_core 3e39:00:02.0 enP15929s1: Link up
Mar 14 00:22:12.888967 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:22:12.900143 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:22:12.914961 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:22:12.922871 kernel: hv_netvsc 7ced8d49-a7fd-7ced-8d49-a7fd7ced8d49 eth0: Data path switched to VF: enP15929s1
Mar 14 00:22:12.916619 systemd-networkd[894]: enP15929s1: Link UP
Mar 14 00:22:12.916716 systemd-networkd[894]: eth0: Link UP
Mar 14 00:22:12.916857 systemd-networkd[894]: eth0: Gained carrier
Mar 14 00:22:12.916869 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:22:12.926471 systemd-networkd[894]: enP15929s1: Gained carrier
Mar 14 00:22:12.952274 systemd-networkd[894]: eth0: DHCPv4 address 10.200.8.29/24, gateway 10.200.8.1 acquired from 168.63.129.16
Mar 14 00:22:13.144276 ignition[905]: Ignition 2.19.0
Mar 14 00:22:13.144290 ignition[905]: Stage: fetch-offline
Mar 14 00:22:13.144342 ignition[905]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:13.144353 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 14 00:22:13.144482 ignition[905]: parsed url from cmdline: ""
Mar 14 00:22:13.144487 ignition[905]: no config URL provided
Mar 14 00:22:13.144495 ignition[905]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:22:13.144507 ignition[905]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:22:13.144515 ignition[905]: failed to fetch config: resource requires networking
Mar 14 00:22:13.144786 ignition[905]: Ignition finished successfully
Mar 14 00:22:13.169174 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:22:13.179478 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 14 00:22:13.195553 ignition[912]: Ignition 2.19.0
Mar 14 00:22:13.195565 ignition[912]: Stage: fetch
Mar 14 00:22:13.195807 ignition[912]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:13.195822 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 14 00:22:13.195921 ignition[912]: parsed url from cmdline: ""
Mar 14 00:22:13.195924 ignition[912]: no config URL provided
Mar 14 00:22:13.195929 ignition[912]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:22:13.195936 ignition[912]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:22:13.195959 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 14 00:22:13.316494 ignition[912]: GET result: OK
Mar 14 00:22:13.316652 ignition[912]: config has been read from IMDS userdata
Mar 14 00:22:13.316686 ignition[912]: parsing config with SHA512: 4d8d072e80e33f7a93c750484c5bd8e577a6ed72ab646f627bd7a0d5bb8610517d835d2a49430b327e2bd4df56d70f47daed0d9f4bd09d943523023ab98a2bb9
Mar 14 00:22:13.325096 unknown[912]: fetched base config from "system"
Mar 14 00:22:13.326182 ignition[912]: fetch: fetch complete
Mar 14 00:22:13.325112 unknown[912]: fetched base config from "system"
Mar 14 00:22:13.326190 ignition[912]: fetch: fetch passed
Mar 14 00:22:13.325119 unknown[912]: fetched user config from "azure"
Mar 14 00:22:13.326265 ignition[912]: Ignition finished successfully
Mar 14 00:22:13.335710 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:22:13.352487 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:22:13.370899 ignition[919]: Ignition 2.19.0
Mar 14 00:22:13.370912 ignition[919]: Stage: kargs
Mar 14 00:22:13.371127 ignition[919]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:13.371140 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 14 00:22:13.372009 ignition[919]: kargs: kargs passed
Mar 14 00:22:13.372054 ignition[919]: Ignition finished successfully
Mar 14 00:22:13.381496 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:22:13.399420 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:22:13.417049 ignition[925]: Ignition 2.19.0
Mar 14 00:22:13.417062 ignition[925]: Stage: disks
Mar 14 00:22:13.419751 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:22:13.417292 ignition[925]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:13.417305 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 14 00:22:13.428067 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:22:13.418125 ignition[925]: disks: disks passed
Mar 14 00:22:13.434029 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:22:13.418169 ignition[925]: Ignition finished successfully
Mar 14 00:22:13.437978 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:22:13.458535 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:22:13.461992 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:22:13.478407 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:22:13.506122 systemd-fsck[934]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 14 00:22:13.513891 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:22:13.530346 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:22:13.625241 kernel: EXT4-fs (sda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none.
Mar 14 00:22:13.625831 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:22:13.632038 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:22:13.648317 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:22:13.656346 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:22:13.665677 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945)
Mar 14 00:22:13.666360 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 14 00:22:13.682935 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:22:13.682973 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:22:13.682994 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:22:13.679146 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:22:13.679183 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:22:13.699236 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:22:13.703646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:22:13.706910 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:22:13.721492 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:22:13.885326 coreos-metadata[947]: Mar 14 00:22:13.885 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 14 00:22:13.890542 coreos-metadata[947]: Mar 14 00:22:13.890 INFO Fetch successful
Mar 14 00:22:13.890542 coreos-metadata[947]: Mar 14 00:22:13.890 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 14 00:22:13.900340 coreos-metadata[947]: Mar 14 00:22:13.898 INFO Fetch successful
Mar 14 00:22:13.904286 coreos-metadata[947]: Mar 14 00:22:13.904 INFO wrote hostname ci-4081.3.6-n-2b39e14e44 to /sysroot/etc/hostname
Mar 14 00:22:13.910297 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 14 00:22:13.920363 initrd-setup-root[974]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:22:13.934536 initrd-setup-root[982]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:22:13.945615 initrd-setup-root[989]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:22:13.955291 initrd-setup-root[996]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:22:14.195653 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:22:14.215372 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:22:14.220457 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:22:14.234201 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:22:14.241712 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:22:14.259872 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:22:14.273478 ignition[1065]: INFO : Ignition 2.19.0
Mar 14 00:22:14.273478 ignition[1065]: INFO : Stage: mount
Mar 14 00:22:14.282683 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:14.282683 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 14 00:22:14.282683 ignition[1065]: INFO : mount: mount passed
Mar 14 00:22:14.282683 ignition[1065]: INFO : Ignition finished successfully
Mar 14 00:22:14.277562 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:22:14.302384 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:22:14.541502 systemd-networkd[894]: eth0: Gained IPv6LL
Mar 14 00:22:14.631414 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:22:14.658247 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1075)
Mar 14 00:22:14.666349 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:22:14.666413 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:22:14.669238 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:22:14.676258 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:22:14.678057 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:22:14.705442 ignition[1092]: INFO : Ignition 2.19.0
Mar 14 00:22:14.708109 ignition[1092]: INFO : Stage: files
Mar 14 00:22:14.708109 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:14.708109 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 14 00:22:14.708109 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:22:14.720487 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:22:14.720487 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:22:14.744552 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:22:14.749080 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:22:14.753166 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:22:14.749513 unknown[1092]: wrote ssh authorized keys file for user: core
Mar 14 00:22:14.760409 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:22:14.760409 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 14 00:22:14.787450 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:22:14.829753 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:22:14.829753 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 14 00:22:15.259080 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 14 00:22:15.606065 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 00:22:15.606065 ignition[1092]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 14 00:22:15.617005 ignition[1092]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:22:15.623414 ignition[1092]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:22:15.623414 ignition[1092]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 14 00:22:15.623414 ignition[1092]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:22:15.637826 ignition[1092]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:22:15.637826 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:22:15.650864 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:22:15.650864 ignition[1092]: INFO : files: files passed
Mar 14 00:22:15.650864 ignition[1092]: INFO : Ignition finished successfully
Mar 14 00:22:15.643592 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:22:15.664031 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:22:15.670411 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:22:15.681437 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:22:15.681562 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:22:15.799870 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:22:15.799870 initrd-setup-root-after-ignition[1121]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:22:15.817932 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:22:15.805084 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:22:15.809790 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:22:15.831459 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:22:15.853359 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:22:15.853454 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:22:15.861074 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:22:15.867179 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:22:15.870453 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:22:15.877378 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:22:15.892731 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:22:15.909421 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:22:15.923094 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:22:15.926859 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:22:15.936967 systemd[1]: Stopped target timers.target - Timer Units. Mar 14 00:22:15.942246 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 14 00:22:15.942407 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 14 00:22:15.953593 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 14 00:22:15.956784 systemd[1]: Stopped target basic.target - Basic System. Mar 14 00:22:15.962512 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 14 00:22:15.966140 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 14 00:22:15.972309 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 14 00:22:15.976206 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 14 00:22:15.976654 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 14 00:22:15.977152 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 14 00:22:15.977656 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 14 00:22:15.978139 systemd[1]: Stopped target swap.target - Swaps. Mar 14 00:22:15.978605 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 14 00:22:15.978781 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 14 00:22:15.979747 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:22:15.980323 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:22:15.980758 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 14 00:22:16.002907 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:22:16.009988 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 14 00:22:16.010132 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Mar 14 00:22:16.014099 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 14 00:22:16.014269 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 14 00:22:16.020612 systemd[1]: ignition-files.service: Deactivated successfully. Mar 14 00:22:16.020742 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 14 00:22:16.024848 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 14 00:22:16.024960 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 14 00:22:16.092528 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 14 00:22:16.095485 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 14 00:22:16.095703 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:22:16.112348 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 14 00:22:16.115067 ignition[1145]: INFO : Ignition 2.19.0 Mar 14 00:22:16.115067 ignition[1145]: INFO : Stage: umount Mar 14 00:22:16.115067 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:22:16.115067 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 14 00:22:16.155705 ignition[1145]: INFO : umount: umount passed Mar 14 00:22:16.155705 ignition[1145]: INFO : Ignition finished successfully Mar 14 00:22:16.120085 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 14 00:22:16.120273 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:22:16.126157 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 14 00:22:16.126306 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 00:22:16.128865 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 14 00:22:16.128963 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Mar 14 00:22:16.130047 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 14 00:22:16.130424 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 14 00:22:16.130869 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 14 00:22:16.130968 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 14 00:22:16.131318 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 14 00:22:16.131417 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 14 00:22:16.131830 systemd[1]: Stopped target network.target - Network. Mar 14 00:22:16.132309 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 14 00:22:16.132415 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:22:16.132888 systemd[1]: Stopped target paths.target - Path Units. Mar 14 00:22:16.133282 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 14 00:22:16.212370 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:22:16.220426 systemd[1]: Stopped target slices.target - Slice Units. Mar 14 00:22:16.220531 systemd[1]: Stopped target sockets.target - Socket Units. Mar 14 00:22:16.221096 systemd[1]: iscsid.socket: Deactivated successfully. Mar 14 00:22:16.221151 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:22:16.221667 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 14 00:22:16.221701 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:22:16.222200 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 14 00:22:16.222265 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 14 00:22:16.222702 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 14 00:22:16.222737 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Mar 14 00:22:16.223326 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 14 00:22:16.223720 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 14 00:22:16.225393 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 14 00:22:16.225999 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 14 00:22:16.226085 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 14 00:22:16.295218 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 14 00:22:16.295404 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 14 00:22:16.296266 systemd-networkd[894]: eth0: DHCPv6 lease lost Mar 14 00:22:16.302770 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 14 00:22:16.302920 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 14 00:22:16.309376 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 14 00:22:16.309452 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:22:16.326980 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 14 00:22:16.333171 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 14 00:22:16.333268 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:22:16.336893 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 14 00:22:16.336957 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:22:16.341364 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 14 00:22:16.341423 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 14 00:22:16.346287 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 14 00:22:16.346338 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Mar 14 00:22:16.357415 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:22:16.394311 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 14 00:22:16.394719 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:22:16.403794 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 14 00:22:16.403886 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 14 00:22:16.408676 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 14 00:22:16.408718 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:22:16.424712 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 14 00:22:16.424785 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:22:16.434944 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 14 00:22:16.434992 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 14 00:22:16.438062 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:22:16.438117 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:22:16.456452 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 14 00:22:16.477751 kernel: hv_netvsc 7ced8d49-a7fd-7ced-8d49-a7fd7ced8d49 eth0: Data path switched from VF: enP15929s1 Mar 14 00:22:16.459859 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 14 00:22:16.459922 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:22:16.463581 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:22:16.463641 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:22:16.474034 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Mar 14 00:22:16.474138 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 14 00:22:16.505825 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 14 00:22:16.505942 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 14 00:22:21.434180 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 14 00:22:21.434340 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 14 00:22:21.443713 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 14 00:22:21.447583 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 14 00:22:21.450877 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 14 00:22:21.464385 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 14 00:22:22.035511 systemd[1]: Switching root. Mar 14 00:22:22.096390 systemd-journald[177]: Journal stopped Mar 14 00:22:09.136461 kernel: Secure boot disabled Mar 14 00:22:09.136473 kernel: ACPI: Early table checksum verification disabled Mar 14 00:22:09.136487 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Mar 14 00:22:09.136495 kernel: ACPI: XSDT 
0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 14 00:22:09.136503 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 14 00:22:09.136514 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Mar 14 00:22:09.136522 kernel: ACPI: FACS 0x000000003FFFE000 000040 Mar 14 00:22:09.136555 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 14 00:22:09.136562 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 14 00:22:09.136575 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 14 00:22:09.136605 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 14 00:22:09.136612 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 14 00:22:09.136620 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 14 00:22:09.136630 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Mar 14 00:22:09.136639 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Mar 14 00:22:09.136647 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Mar 14 00:22:09.136656 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Mar 14 00:22:09.136667 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Mar 14 00:22:09.136677 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Mar 14 00:22:09.136699 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Mar 14 00:22:09.136709 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df] Mar 14 00:22:09.136719 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Mar 14 00:22:09.136726 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 14 00:22:09.136734 
kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Mar 14 00:22:09.142289 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Mar 14 00:22:09.142310 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Mar 14 00:22:09.142325 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Mar 14 00:22:09.142344 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Mar 14 00:22:09.142358 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Mar 14 00:22:09.142371 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Mar 14 00:22:09.142385 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Mar 14 00:22:09.142398 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Mar 14 00:22:09.142412 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Mar 14 00:22:09.142426 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Mar 14 00:22:09.142440 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Mar 14 00:22:09.142457 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Mar 14 00:22:09.142471 kernel: Zone ranges: Mar 14 00:22:09.142485 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 14 00:22:09.142499 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 14 00:22:09.142512 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Mar 14 00:22:09.142526 kernel: Movable zone start for each node Mar 14 00:22:09.142539 kernel: Early memory node ranges Mar 14 00:22:09.142558 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 14 00:22:09.142572 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff] Mar 14 00:22:09.142589 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff] Mar 14 00:22:09.142603 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Mar 14 00:22:09.142616 kernel: 
node 0: [mem 0x0000000100000000-0x00000002bfffffff] Mar 14 00:22:09.142629 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Mar 14 00:22:09.142644 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 14 00:22:09.142658 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 14 00:22:09.142672 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Mar 14 00:22:09.142726 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Mar 14 00:22:09.142740 kernel: ACPI: PM-Timer IO Port: 0x408 Mar 14 00:22:09.142756 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Mar 14 00:22:09.142769 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Mar 14 00:22:09.142783 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 14 00:22:09.142797 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 14 00:22:09.142811 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Mar 14 00:22:09.142826 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 14 00:22:09.142838 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Mar 14 00:22:09.142856 kernel: Booting paravirtualized kernel on Hyper-V Mar 14 00:22:09.142870 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 14 00:22:09.142889 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 14 00:22:09.142904 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Mar 14 00:22:09.142918 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Mar 14 00:22:09.142932 kernel: pcpu-alloc: [0] 0 1 Mar 14 00:22:09.142945 kernel: Hyper-V: PV spinlocks enabled Mar 14 00:22:09.142959 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 14 00:22:09.142975 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:22:09.142990 kernel: random: crng init done Mar 14 00:22:09.143005 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Mar 14 00:22:09.143018 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 14 00:22:09.143031 kernel: Fallback order for Node 0: 0 Mar 14 00:22:09.143044 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321 Mar 14 00:22:09.143057 kernel: Policy zone: Normal Mar 14 00:22:09.143069 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 14 00:22:09.143085 kernel: software IO TLB: area num 2. Mar 14 00:22:09.143100 kernel: Memory: 8066052K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 316916K reserved, 0K cma-reserved) Mar 14 00:22:09.143120 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 14 00:22:09.143153 kernel: ftrace: allocating 37996 entries in 149 pages Mar 14 00:22:09.143166 kernel: ftrace: allocated 149 pages with 4 groups Mar 14 00:22:09.143180 kernel: Dynamic Preempt: voluntary Mar 14 00:22:09.143197 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 14 00:22:09.143212 kernel: rcu: RCU event tracing is enabled. Mar 14 00:22:09.143227 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 14 00:22:09.143241 kernel: Trampoline variant of Tasks RCU enabled. Mar 14 00:22:09.143255 kernel: Rude variant of Tasks RCU enabled. Mar 14 00:22:09.143269 kernel: Tracing variant of Tasks RCU enabled. Mar 14 00:22:09.143286 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 14 00:22:09.143301 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 14 00:22:09.143315 kernel: Using NULL legacy PIC Mar 14 00:22:09.143329 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Mar 14 00:22:09.143343 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 14 00:22:09.143357 kernel: Console: colour dummy device 80x25 Mar 14 00:22:09.143371 kernel: printk: console [tty1] enabled Mar 14 00:22:09.143385 kernel: printk: console [ttyS0] enabled Mar 14 00:22:09.143402 kernel: printk: bootconsole [earlyser0] disabled Mar 14 00:22:09.143416 kernel: ACPI: Core revision 20230628 Mar 14 00:22:09.143436 kernel: Failed to register legacy timer interrupt Mar 14 00:22:09.143450 kernel: APIC: Switch to symmetric I/O mode setup Mar 14 00:22:09.143464 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 14 00:22:09.143478 kernel: Hyper-V: Using IPI hypercalls Mar 14 00:22:09.143492 kernel: APIC: send_IPI() replaced with hv_send_ipi() Mar 14 00:22:09.143506 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Mar 14 00:22:09.143521 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Mar 14 00:22:09.143538 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Mar 14 00:22:09.143552 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Mar 14 00:22:09.143566 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Mar 14 00:22:09.143580 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
5187.81 BogoMIPS (lpj=2593908) Mar 14 00:22:09.143594 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Mar 14 00:22:09.143608 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Mar 14 00:22:09.143622 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 14 00:22:09.143636 kernel: Spectre V2 : Mitigation: Retpolines Mar 14 00:22:09.143650 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 14 00:22:09.143664 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Mar 14 00:22:09.143724 kernel: RETBleed: Vulnerable Mar 14 00:22:09.143738 kernel: Speculative Store Bypass: Vulnerable Mar 14 00:22:09.143753 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Mar 14 00:22:09.143767 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 14 00:22:09.143781 kernel: active return thunk: its_return_thunk Mar 14 00:22:09.143795 kernel: ITS: Mitigation: Aligned branch/return thunks Mar 14 00:22:09.143809 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 14 00:22:09.143823 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 14 00:22:09.143837 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 14 00:22:09.143851 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Mar 14 00:22:09.143869 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Mar 14 00:22:09.143883 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Mar 14 00:22:09.143897 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 14 00:22:09.143910 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Mar 14 00:22:09.143924 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Mar 14 00:22:09.143938 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Mar 14 00:22:09.143952 kernel: 
x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Mar 14 00:22:09.143966 kernel: Freeing SMP alternatives memory: 32K Mar 14 00:22:09.143986 kernel: pid_max: default: 32768 minimum: 301 Mar 14 00:22:09.144000 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 14 00:22:09.144014 kernel: landlock: Up and running. Mar 14 00:22:09.144027 kernel: SELinux: Initializing. Mar 14 00:22:09.144044 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 14 00:22:09.144058 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 14 00:22:09.144072 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Mar 14 00:22:09.144086 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 14 00:22:09.144101 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 14 00:22:09.144115 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 14 00:22:09.144129 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Mar 14 00:22:09.144143 kernel: signal: max sigframe size: 3632 Mar 14 00:22:09.144157 kernel: rcu: Hierarchical SRCU implementation. Mar 14 00:22:09.144174 kernel: rcu: Max phase no-delay instances is 400. Mar 14 00:22:09.144188 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 14 00:22:09.144202 kernel: smp: Bringing up secondary CPUs ... Mar 14 00:22:09.144216 kernel: smpboot: x86: Booting SMP configuration: Mar 14 00:22:09.144230 kernel: .... node #0, CPUs: #1 Mar 14 00:22:09.144247 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. 
Mar 14 00:22:09.144262 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Mar 14 00:22:09.144276 kernel: smp: Brought up 1 node, 2 CPUs Mar 14 00:22:09.144291 kernel: smpboot: Max logical packages: 1 Mar 14 00:22:09.144308 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS) Mar 14 00:22:09.144321 kernel: devtmpfs: initialized Mar 14 00:22:09.144335 kernel: x86/mm: Memory block size: 128MB Mar 14 00:22:09.144349 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Mar 14 00:22:09.144363 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 14 00:22:09.144378 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 14 00:22:09.144392 kernel: pinctrl core: initialized pinctrl subsystem Mar 14 00:22:09.144406 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 14 00:22:09.144420 kernel: audit: initializing netlink subsys (disabled) Mar 14 00:22:09.144436 kernel: audit: type=2000 audit(1773447727.030:1): state=initialized audit_enabled=0 res=1 Mar 14 00:22:09.144450 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 14 00:22:09.144464 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 14 00:22:09.144478 kernel: cpuidle: using governor menu Mar 14 00:22:09.144492 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 14 00:22:09.144506 kernel: dca service started, version 1.12.1 Mar 14 00:22:09.144522 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff] Mar 14 00:22:09.144536 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Mar 14 00:22:09.144550 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 14 00:22:09.144567 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 14 00:22:09.144581 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 14 00:22:09.144595 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 14 00:22:09.144609 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 14 00:22:09.144623 kernel: ACPI: Added _OSI(Module Device) Mar 14 00:22:09.144638 kernel: ACPI: Added _OSI(Processor Device) Mar 14 00:22:09.144651 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 14 00:22:09.144666 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 14 00:22:09.144691 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 14 00:22:09.144705 kernel: ACPI: Interpreter enabled Mar 14 00:22:09.144720 kernel: ACPI: PM: (supports S0 S5) Mar 14 00:22:09.144734 kernel: ACPI: Using IOAPIC for interrupt routing Mar 14 00:22:09.144748 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 14 00:22:09.144762 kernel: PCI: Ignoring E820 reservations for host bridge windows Mar 14 00:22:09.144776 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Mar 14 00:22:09.144790 kernel: iommu: Default domain type: Translated Mar 14 00:22:09.144804 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 14 00:22:09.144819 kernel: efivars: Registered efivars operations Mar 14 00:22:09.144840 kernel: PCI: Using ACPI for IRQ routing Mar 14 00:22:09.144855 kernel: PCI: System does not support PCI Mar 14 00:22:09.144868 kernel: vgaarb: loaded Mar 14 00:22:09.144883 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Mar 14 00:22:09.144897 kernel: VFS: Disk quotas dquot_6.6.0 Mar 14 00:22:09.144911 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 14 00:22:09.144926 kernel: pnp: PnP ACPI init Mar 14 00:22:09.144940 kernel: pnp: PnP ACPI: found 3 devices Mar 14 00:22:09.144954 kernel: 
clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 14 00:22:09.144971 kernel: NET: Registered PF_INET protocol family Mar 14 00:22:09.144985 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 14 00:22:09.144999 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Mar 14 00:22:09.145013 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 14 00:22:09.145028 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 14 00:22:09.145042 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Mar 14 00:22:09.145056 kernel: TCP: Hash tables configured (established 65536 bind 65536) Mar 14 00:22:09.145071 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 14 00:22:09.145085 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 14 00:22:09.145104 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 14 00:22:09.145118 kernel: NET: Registered PF_XDP protocol family Mar 14 00:22:09.145133 kernel: PCI: CLS 0 bytes, default 64 Mar 14 00:22:09.145147 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 14 00:22:09.145162 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB) Mar 14 00:22:09.145176 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 14 00:22:09.145190 kernel: Initialise system trusted keyrings Mar 14 00:22:09.145204 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Mar 14 00:22:09.145220 kernel: Key type asymmetric registered Mar 14 00:22:09.145234 kernel: Asymmetric key parser 'x509' registered Mar 14 00:22:09.145248 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 14 00:22:09.145262 kernel: io scheduler mq-deadline registered Mar 14 00:22:09.145276 kernel: io scheduler kyber registered Mar 
14 00:22:09.145290 kernel: io scheduler bfq registered Mar 14 00:22:09.145304 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 14 00:22:09.145318 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 14 00:22:09.145333 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 14 00:22:09.145347 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Mar 14 00:22:09.145364 kernel: i8042: PNP: No PS/2 controller found. Mar 14 00:22:09.145548 kernel: rtc_cmos 00:02: registered as rtc0 Mar 14 00:22:09.145687 kernel: rtc_cmos 00:02: setting system clock to 2026-03-14T00:22:08 UTC (1773447728) Mar 14 00:22:09.145818 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Mar 14 00:22:09.145836 kernel: intel_pstate: CPU model not supported Mar 14 00:22:09.145850 kernel: efifb: probing for efifb Mar 14 00:22:09.145865 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 14 00:22:09.145884 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 14 00:22:09.145898 kernel: efifb: scrolling: redraw Mar 14 00:22:09.145912 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 14 00:22:09.145926 kernel: Console: switching to colour frame buffer device 128x48 Mar 14 00:22:09.145940 kernel: fb0: EFI VGA frame buffer device Mar 14 00:22:09.145954 kernel: pstore: Using crash dump compression: deflate Mar 14 00:22:09.145968 kernel: pstore: Registered efi_pstore as persistent store backend Mar 14 00:22:09.145986 kernel: NET: Registered PF_INET6 protocol family Mar 14 00:22:09.146000 kernel: Segment Routing with IPv6 Mar 14 00:22:09.146017 kernel: In-situ OAM (IOAM) with IPv6 Mar 14 00:22:09.146032 kernel: NET: Registered PF_PACKET protocol family Mar 14 00:22:09.146046 kernel: Key type dns_resolver registered Mar 14 00:22:09.146059 kernel: IPI shorthand broadcast: enabled Mar 14 00:22:09.146074 kernel: sched_clock: Marking stable (919002800, 52425000)->(1212301900, -240874100) 
Mar 14 00:22:09.146088 kernel: registered taskstats version 1 Mar 14 00:22:09.146102 kernel: Loading compiled-in X.509 certificates Mar 14 00:22:09.146116 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec' Mar 14 00:22:09.146130 kernel: Key type .fscrypt registered Mar 14 00:22:09.146147 kernel: Key type fscrypt-provisioning registered Mar 14 00:22:09.146161 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 14 00:22:09.146175 kernel: ima: Allocated hash algorithm: sha1 Mar 14 00:22:09.146189 kernel: ima: No architecture policies found Mar 14 00:22:09.146203 kernel: clk: Disabling unused clocks Mar 14 00:22:09.146217 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 14 00:22:09.146231 kernel: Write protecting the kernel read-only data: 36864k Mar 14 00:22:09.146245 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 14 00:22:09.146264 kernel: Run /init as init process Mar 14 00:22:09.146281 kernel: with arguments: Mar 14 00:22:09.146299 kernel: /init Mar 14 00:22:09.146313 kernel: with environment: Mar 14 00:22:09.146326 kernel: HOME=/ Mar 14 00:22:09.146340 kernel: TERM=linux Mar 14 00:22:09.146358 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:22:09.146376 systemd[1]: Detected virtualization microsoft. Mar 14 00:22:09.146390 systemd[1]: Detected architecture x86-64. Mar 14 00:22:09.146408 systemd[1]: Running in initrd. Mar 14 00:22:09.146422 systemd[1]: No hostname configured, using default hostname. Mar 14 00:22:09.146436 systemd[1]: Hostname set to . Mar 14 00:22:09.146452 systemd[1]: Initializing machine ID from random generator. 
Mar 14 00:22:09.146467 systemd[1]: Queued start job for default target initrd.target. Mar 14 00:22:09.146481 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:22:09.146496 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:22:09.146512 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 14 00:22:09.146530 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 00:22:09.146545 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 14 00:22:09.146560 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 14 00:22:09.146582 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 14 00:22:09.146598 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 14 00:22:09.146613 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:22:09.146628 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:22:09.146647 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:22:09.146662 systemd[1]: Reached target slices.target - Slice Units. Mar 14 00:22:09.146677 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:22:09.146812 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:22:09.146825 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:22:09.146834 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:22:09.146843 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Mar 14 00:22:09.146856 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 14 00:22:09.146864 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:22:09.146882 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:22:09.146890 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:22:09.146899 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:22:09.146911 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 14 00:22:09.146921 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 00:22:09.146934 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 14 00:22:09.146943 systemd[1]: Starting systemd-fsck-usr.service... Mar 14 00:22:09.146956 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 14 00:22:09.146967 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:22:09.146980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:22:09.147012 systemd-journald[177]: Collecting audit messages is disabled. Mar 14 00:22:09.147038 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 14 00:22:09.147052 systemd-journald[177]: Journal started Mar 14 00:22:09.147074 systemd-journald[177]: Runtime Journal (/run/log/journal/244145ea01d04135803171efd0a8e389) is 8.0M, max 158.7M, 150.7M free. Mar 14 00:22:09.156736 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 00:22:09.160034 systemd-modules-load[178]: Inserted module 'overlay' Mar 14 00:22:09.163359 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:22:09.171648 systemd[1]: Finished systemd-fsck-usr.service. Mar 14 00:22:09.174654 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 14 00:22:09.190980 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:22:09.208704 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 14 00:22:09.210276 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 14 00:22:09.215439 kernel: Bridge firewalling registered Mar 14 00:22:09.222989 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:22:09.227315 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:22:09.241873 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:22:09.246565 systemd-modules-load[178]: Inserted module 'br_netfilter' Mar 14 00:22:09.250618 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:22:09.257465 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:22:09.273960 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:22:09.282212 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:22:09.284193 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:22:09.292747 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:22:09.302866 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 14 00:22:09.310868 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 14 00:22:09.363987 dracut-cmdline[212]: dracut-dracut-053 Mar 14 00:22:09.363987 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:22:09.415726 kernel: SCSI subsystem initialized Mar 14 00:22:09.413630 systemd-resolved[216]: Positive Trust Anchors: Mar 14 00:22:09.413644 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:22:09.413729 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:22:09.457510 kernel: Loading iSCSI transport class v2.0-870. Mar 14 00:22:09.417141 systemd-resolved[216]: Defaulting to hostname 'linux'. Mar 14 00:22:09.418315 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:22:09.423085 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 14 00:22:09.477702 kernel: iscsi: registered transport (tcp) Mar 14 00:22:09.502735 kernel: iscsi: registered transport (qla4xxx) Mar 14 00:22:09.502827 kernel: QLogic iSCSI HBA Driver Mar 14 00:22:09.540160 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 14 00:22:09.553943 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 14 00:22:09.587093 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 14 00:22:09.587185 kernel: device-mapper: uevent: version 1.0.3 Mar 14 00:22:09.591941 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 14 00:22:09.633712 kernel: raid6: avx512x4 gen() 18244 MB/s Mar 14 00:22:09.652689 kernel: raid6: avx512x2 gen() 18253 MB/s Mar 14 00:22:09.672690 kernel: raid6: avx512x1 gen() 17772 MB/s Mar 14 00:22:09.693696 kernel: raid6: avx2x4 gen() 17472 MB/s Mar 14 00:22:09.713689 kernel: raid6: avx2x2 gen() 17661 MB/s Mar 14 00:22:09.736182 kernel: raid6: avx2x1 gen() 13546 MB/s Mar 14 00:22:09.736215 kernel: raid6: using algorithm avx512x2 gen() 18253 MB/s Mar 14 00:22:09.760823 kernel: raid6: .... xor() 30561 MB/s, rmw enabled Mar 14 00:22:09.760859 kernel: raid6: using avx512x2 recovery algorithm Mar 14 00:22:09.784709 kernel: xor: automatically using best checksumming function avx Mar 14 00:22:09.935709 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 14 00:22:09.946056 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:22:09.962989 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:22:09.975150 systemd-udevd[399]: Using default interface naming scheme 'v255'. Mar 14 00:22:09.979891 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:22:09.996986 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Mar 14 00:22:10.010480 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Mar 14 00:22:10.038770 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 00:22:10.048954 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 00:22:10.093714 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:22:10.120936 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 14 00:22:10.159463 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 14 00:22:10.168252 kernel: cryptd: max_cpu_qlen set to 1000 Mar 14 00:22:10.175176 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 14 00:22:10.192315 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:22:10.200225 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 00:22:10.228942 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 14 00:22:10.250770 kernel: AVX2 version of gcm_enc/dec engaged. Mar 14 00:22:10.250810 kernel: hv_vmbus: Vmbus version:5.2 Mar 14 00:22:10.250827 kernel: AES CTR mode by8 optimization enabled Mar 14 00:22:10.261292 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:22:10.266367 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:22:10.279478 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:22:10.290848 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:22:10.291137 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 14 00:22:10.307280 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 14 00:22:10.323050 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 14 00:22:10.323565 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:22:10.344561 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 14 00:22:10.344632 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 14 00:22:10.350013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:22:10.360283 kernel: PTP clock support registered Mar 14 00:22:10.367284 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 14 00:22:10.381032 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:22:10.381154 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:22:10.409740 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 14 00:22:10.409807 kernel: hv_vmbus: registering driver hv_netvsc Mar 14 00:22:10.414948 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:22:10.421375 kernel: hv_vmbus: registering driver hid_hyperv Mar 14 00:22:10.439455 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 14 00:22:10.439522 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 14 00:22:10.443470 kernel: hv_utils: Registering HyperV Utility Driver Mar 14 00:22:10.447621 kernel: hv_vmbus: registering driver hv_utils Mar 14 00:22:10.454509 kernel: hv_utils: Heartbeat IC version 3.0 Mar 14 00:22:10.454556 kernel: hv_utils: Shutdown IC version 3.2 Mar 14 00:22:10.456708 kernel: hv_utils: TimeSync IC version 4.0 Mar 14 00:22:10.732615 systemd-resolved[216]: Clock change detected. Flushing caches. 
Mar 14 00:22:10.749323 kernel: hv_vmbus: registering driver hv_storvsc Mar 14 00:22:10.755621 kernel: scsi host0: storvsc_host_t Mar 14 00:22:10.759261 kernel: scsi host1: storvsc_host_t Mar 14 00:22:10.761812 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:22:10.765247 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 14 00:22:10.771271 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 14 00:22:10.781446 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:22:10.807861 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 14 00:22:10.808152 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 14 00:22:10.810253 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 14 00:22:10.828280 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 14 00:22:10.828585 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 14 00:22:10.832117 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 14 00:22:10.832411 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 14 00:22:10.838260 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 14 00:22:10.839441 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 14 00:22:10.865627 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#93 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 14 00:22:10.865786 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 14 00:22:10.865799 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 14 00:22:10.892248 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#68 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 14 00:22:10.915259 kernel: hv_netvsc 7ced8d49-a7fd-7ced-8d49-a7fd7ced8d49 eth0: VF slot 1 added Mar 14 00:22:10.925254 kernel: hv_vmbus: registering driver hv_pci Mar 14 00:22:10.933243 kernel: hv_pci 49f91120-3e39-41cb-bfc0-c2f22c1cc636: PCI VMBus probing: Using version 0x10004 Mar 14 00:22:10.944341 kernel: hv_pci 49f91120-3e39-41cb-bfc0-c2f22c1cc636: PCI host bridge to bus 3e39:00 Mar 14 00:22:10.944668 kernel: pci_bus 3e39:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Mar 14 00:22:10.949757 kernel: pci_bus 3e39:00: No busn resource found for root bus, will use [bus 00-ff] Mar 14 00:22:10.957649 kernel: pci 3e39:00:02.0: [15b3:1016] type 00 class 0x020000 Mar 14 00:22:10.964277 kernel: pci 3e39:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 14 00:22:10.971335 kernel: pci 3e39:00:02.0: enabling Extended Tags Mar 14 00:22:10.983621 kernel: pci 3e39:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 3e39:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Mar 14 00:22:10.997518 kernel: pci_bus 3e39:00: busn_res: [bus 00-ff] end is updated to 00 Mar 14 00:22:10.997857 kernel: pci 3e39:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 14 00:22:11.037322 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (457) Mar 14 00:22:11.060281 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (449) Mar 14 00:22:11.070634 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - 
Virtual_Disk EFI-SYSTEM. Mar 14 00:22:11.112278 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 14 00:22:11.132428 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 14 00:22:11.151038 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 14 00:22:11.157410 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 14 00:22:11.251032 kernel: mlx5_core 3e39:00:02.0: enabling device (0000 -> 0002) Mar 14 00:22:11.251350 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 14 00:22:11.268271 kernel: mlx5_core 3e39:00:02.0: firmware version: 14.30.5026 Mar 14 00:22:11.276249 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 14 00:22:11.286332 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 14 00:22:11.296242 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 14 00:22:11.499217 kernel: hv_netvsc 7ced8d49-a7fd-7ced-8d49-a7fd7ced8d49 eth0: VF registering: eth1 Mar 14 00:22:11.505948 kernel: mlx5_core 3e39:00:02.0 eth1: joined to eth0 Mar 14 00:22:11.512251 kernel: mlx5_core 3e39:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Mar 14 00:22:11.541807 kernel: mlx5_core 3e39:00:02.0 enP15929s1: renamed from eth1 Mar 14 00:22:12.308245 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 14 00:22:12.309442 disk-uuid[600]: The operation has completed successfully. Mar 14 00:22:12.409741 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 14 00:22:12.409863 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 14 00:22:12.422494 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Mar 14 00:22:12.430337 sh[720]: Success Mar 14 00:22:12.447394 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 14 00:22:12.545640 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 14 00:22:12.552326 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 14 00:22:12.555763 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 14 00:22:12.576244 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def Mar 14 00:22:12.576299 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:22:12.583713 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 14 00:22:12.587059 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 14 00:22:12.589956 kernel: BTRFS info (device dm-0): using free space tree Mar 14 00:22:12.656345 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 14 00:22:12.662718 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 14 00:22:12.675403 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 14 00:22:12.679155 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 14 00:22:12.708253 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:22:12.708314 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:22:12.708336 kernel: BTRFS info (device sda6): using free space tree Mar 14 00:22:12.772512 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:22:12.786431 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 14 00:22:12.808098 systemd-networkd[894]: lo: Link UP Mar 14 00:22:12.808112 systemd-networkd[894]: lo: Gained carrier Mar 14 00:22:12.810369 systemd-networkd[894]: Enumeration completed Mar 14 00:22:12.810470 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:22:12.812372 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:22:12.812375 systemd-networkd[894]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:22:12.814551 systemd[1]: Reached target network.target - Network. Mar 14 00:22:12.876597 kernel: BTRFS info (device sda6): auto enabling async discard Mar 14 00:22:12.885295 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 14 00:22:12.888743 kernel: mlx5_core 3e39:00:02.0 enP15929s1: Link up Mar 14 00:22:12.888967 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:22:12.900143 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 14 00:22:12.914961 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 14 00:22:12.922871 kernel: hv_netvsc 7ced8d49-a7fd-7ced-8d49-a7fd7ced8d49 eth0: Data path switched to VF: enP15929s1 Mar 14 00:22:12.916619 systemd-networkd[894]: enP15929s1: Link UP Mar 14 00:22:12.916716 systemd-networkd[894]: eth0: Link UP Mar 14 00:22:12.916857 systemd-networkd[894]: eth0: Gained carrier Mar 14 00:22:12.916869 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 14 00:22:12.926471 systemd-networkd[894]: enP15929s1: Gained carrier Mar 14 00:22:12.952274 systemd-networkd[894]: eth0: DHCPv4 address 10.200.8.29/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 14 00:22:13.144276 ignition[905]: Ignition 2.19.0 Mar 14 00:22:13.144290 ignition[905]: Stage: fetch-offline Mar 14 00:22:13.144342 ignition[905]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:22:13.144353 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 14 00:22:13.144482 ignition[905]: parsed url from cmdline: "" Mar 14 00:22:13.144487 ignition[905]: no config URL provided Mar 14 00:22:13.144495 ignition[905]: reading system config file "/usr/lib/ignition/user.ign" Mar 14 00:22:13.144507 ignition[905]: no config at "/usr/lib/ignition/user.ign" Mar 14 00:22:13.144515 ignition[905]: failed to fetch config: resource requires networking Mar 14 00:22:13.144786 ignition[905]: Ignition finished successfully Mar 14 00:22:13.169174 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:22:13.179478 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 14 00:22:13.195553 ignition[912]: Ignition 2.19.0 Mar 14 00:22:13.195565 ignition[912]: Stage: fetch Mar 14 00:22:13.195807 ignition[912]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:22:13.195822 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 14 00:22:13.195921 ignition[912]: parsed url from cmdline: "" Mar 14 00:22:13.195924 ignition[912]: no config URL provided Mar 14 00:22:13.195929 ignition[912]: reading system config file "/usr/lib/ignition/user.ign" Mar 14 00:22:13.195936 ignition[912]: no config at "/usr/lib/ignition/user.ign" Mar 14 00:22:13.195959 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 14 00:22:13.316494 ignition[912]: GET result: OK Mar 14 00:22:13.316652 ignition[912]: config has been read from IMDS userdata Mar 14 00:22:13.316686 ignition[912]: parsing config with SHA512: 4d8d072e80e33f7a93c750484c5bd8e577a6ed72ab646f627bd7a0d5bb8610517d835d2a49430b327e2bd4df56d70f47daed0d9f4bd09d943523023ab98a2bb9 Mar 14 00:22:13.325096 unknown[912]: fetched base config from "system" Mar 14 00:22:13.326182 ignition[912]: fetch: fetch complete Mar 14 00:22:13.325112 unknown[912]: fetched base config from "system" Mar 14 00:22:13.326190 ignition[912]: fetch: fetch passed Mar 14 00:22:13.325119 unknown[912]: fetched user config from "azure" Mar 14 00:22:13.326265 ignition[912]: Ignition finished successfully Mar 14 00:22:13.335710 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 14 00:22:13.352487 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 14 00:22:13.370899 ignition[919]: Ignition 2.19.0 Mar 14 00:22:13.370912 ignition[919]: Stage: kargs Mar 14 00:22:13.371127 ignition[919]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:22:13.371140 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 14 00:22:13.372009 ignition[919]: kargs: kargs passed Mar 14 00:22:13.372054 ignition[919]: Ignition finished successfully Mar 14 00:22:13.381496 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 14 00:22:13.399420 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 14 00:22:13.417049 ignition[925]: Ignition 2.19.0 Mar 14 00:22:13.417062 ignition[925]: Stage: disks Mar 14 00:22:13.419751 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 14 00:22:13.417292 ignition[925]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:22:13.417305 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 14 00:22:13.428067 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 14 00:22:13.418125 ignition[925]: disks: disks passed Mar 14 00:22:13.434029 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 14 00:22:13.418169 ignition[925]: Ignition finished successfully Mar 14 00:22:13.437978 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:22:13.458535 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:22:13.461992 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:22:13.478407 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 14 00:22:13.506122 systemd-fsck[934]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 14 00:22:13.513891 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 14 00:22:13.530346 systemd[1]: Mounting sysroot.mount - /sysroot... 
Mar 14 00:22:13.625241 kernel: EXT4-fs (sda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none.
Mar 14 00:22:13.625831 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:22:13.632038 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:22:13.648317 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:22:13.656346 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:22:13.665677 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945)
Mar 14 00:22:13.666360 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 14 00:22:13.682935 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:22:13.682973 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:22:13.682994 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:22:13.679146 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:22:13.679183 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:22:13.699236 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:22:13.703646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:22:13.706910 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:22:13.721492 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:22:13.885326 coreos-metadata[947]: Mar 14 00:22:13.885 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 14 00:22:13.890542 coreos-metadata[947]: Mar 14 00:22:13.890 INFO Fetch successful
Mar 14 00:22:13.890542 coreos-metadata[947]: Mar 14 00:22:13.890 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 14 00:22:13.900340 coreos-metadata[947]: Mar 14 00:22:13.898 INFO Fetch successful
Mar 14 00:22:13.904286 coreos-metadata[947]: Mar 14 00:22:13.904 INFO wrote hostname ci-4081.3.6-n-2b39e14e44 to /sysroot/etc/hostname
Mar 14 00:22:13.910297 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 14 00:22:13.920363 initrd-setup-root[974]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:22:13.934536 initrd-setup-root[982]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:22:13.945615 initrd-setup-root[989]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:22:13.955291 initrd-setup-root[996]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:22:14.195653 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:22:14.215372 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:22:14.220457 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:22:14.234201 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:22:14.241712 kernel: BTRFS info (device sda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:22:14.259872 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:22:14.273478 ignition[1065]: INFO : Ignition 2.19.0
Mar 14 00:22:14.273478 ignition[1065]: INFO : Stage: mount
Mar 14 00:22:14.282683 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:14.282683 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 14 00:22:14.282683 ignition[1065]: INFO : mount: mount passed
Mar 14 00:22:14.282683 ignition[1065]: INFO : Ignition finished successfully
Mar 14 00:22:14.277562 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:22:14.302384 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:22:14.541502 systemd-networkd[894]: eth0: Gained IPv6LL
Mar 14 00:22:14.631414 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:22:14.658247 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1075)
Mar 14 00:22:14.666349 kernel: BTRFS info (device sda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:22:14.666413 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:22:14.669238 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:22:14.676258 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:22:14.678057 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:22:14.705442 ignition[1092]: INFO : Ignition 2.19.0
Mar 14 00:22:14.708109 ignition[1092]: INFO : Stage: files
Mar 14 00:22:14.708109 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:14.708109 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 14 00:22:14.708109 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:22:14.720487 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:22:14.720487 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:22:14.744552 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:22:14.749080 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:22:14.753166 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:22:14.749513 unknown[1092]: wrote ssh authorized keys file for user: core
Mar 14 00:22:14.760409 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:22:14.760409 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 14 00:22:14.787450 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:22:14.829753 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:22:14.829753 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 00:22:14.842361 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 14 00:22:15.259080 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 14 00:22:15.606065 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 00:22:15.606065 ignition[1092]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 14 00:22:15.617005 ignition[1092]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:22:15.623414 ignition[1092]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:22:15.623414 ignition[1092]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 14 00:22:15.623414 ignition[1092]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:22:15.637826 ignition[1092]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:22:15.637826 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:22:15.650864 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:22:15.650864 ignition[1092]: INFO : files: files passed
Mar 14 00:22:15.650864 ignition[1092]: INFO : Ignition finished successfully
Mar 14 00:22:15.643592 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:22:15.664031 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:22:15.670411 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:22:15.681437 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:22:15.681562 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:22:15.799870 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:22:15.799870 initrd-setup-root-after-ignition[1121]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:22:15.817932 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:22:15.805084 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:22:15.809790 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:22:15.831459 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:22:15.853359 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:22:15.853454 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:22:15.861074 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:22:15.867179 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:22:15.870453 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:22:15.877378 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:22:15.892731 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:22:15.909421 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:22:15.923094 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:22:15.926859 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:22:15.936967 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:22:15.942246 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:22:15.942407 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:22:15.953593 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:22:15.956784 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:22:15.962512 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:22:15.966140 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:22:15.972309 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:22:15.976206 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:22:15.976654 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:22:15.977152 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:22:15.977656 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:22:15.978139 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:22:15.978605 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:22:15.978781 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:22:15.979747 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:22:15.980323 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:22:15.980758 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:22:16.002907 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:22:16.009988 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:22:16.010132 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:22:16.014099 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:22:16.014269 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:22:16.020612 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:22:16.020742 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:22:16.024848 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 14 00:22:16.024960 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 14 00:22:16.092528 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:22:16.095485 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:22:16.095703 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:22:16.112348 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:22:16.115067 ignition[1145]: INFO : Ignition 2.19.0
Mar 14 00:22:16.115067 ignition[1145]: INFO : Stage: umount
Mar 14 00:22:16.115067 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:16.115067 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 14 00:22:16.155705 ignition[1145]: INFO : umount: umount passed
Mar 14 00:22:16.155705 ignition[1145]: INFO : Ignition finished successfully
Mar 14 00:22:16.120085 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:22:16.120273 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:22:16.126157 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:22:16.126306 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:22:16.128865 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:22:16.128963 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:22:16.130047 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:22:16.130424 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:22:16.130869 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:22:16.130968 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:22:16.131318 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 14 00:22:16.131417 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 14 00:22:16.131830 systemd[1]: Stopped target network.target - Network.
Mar 14 00:22:16.132309 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:22:16.132415 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:22:16.132888 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:22:16.133282 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:22:16.212370 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:22:16.220426 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:22:16.220531 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:22:16.221096 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:22:16.221151 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:22:16.221667 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:22:16.221701 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:22:16.222200 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:22:16.222265 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:22:16.222702 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:22:16.222737 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:22:16.223326 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:22:16.223720 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:22:16.225393 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:22:16.225999 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:22:16.226085 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:22:16.295218 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:22:16.295404 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:22:16.296266 systemd-networkd[894]: eth0: DHCPv6 lease lost
Mar 14 00:22:16.302770 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:22:16.302920 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:22:16.309376 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:22:16.309452 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:22:16.326980 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:22:16.333171 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:22:16.333268 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:22:16.336893 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:22:16.336957 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:22:16.341364 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:22:16.341423 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:22:16.346287 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:22:16.346338 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:22:16.357415 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:22:16.394311 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:22:16.394719 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:22:16.403794 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:22:16.403886 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:22:16.408676 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:22:16.408718 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:22:16.424712 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:22:16.424785 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:22:16.434944 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:22:16.434992 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:22:16.438062 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:22:16.438117 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:22:16.456452 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:22:16.477751 kernel: hv_netvsc 7ced8d49-a7fd-7ced-8d49-a7fd7ced8d49 eth0: Data path switched from VF: enP15929s1
Mar 14 00:22:16.459859 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:22:16.459922 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:22:16.463581 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:22:16.463641 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:22:16.474034 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:22:16.474138 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:22:16.505825 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:22:16.505942 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:22:21.434180 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:22:21.434340 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:22:21.443713 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:22:21.447583 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:22:21.450877 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:22:21.464385 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:22:22.035511 systemd[1]: Switching root.
Mar 14 00:22:22.096390 systemd-journald[177]: Journal stopped
Mar 14 00:22:24.424368 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:22:24.424395 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:22:24.424411 kernel: SELinux: policy capability open_perms=1
Mar 14 00:22:24.424422 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:22:24.424432 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:22:24.424440 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:22:24.424449 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:22:24.424462 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:22:24.424473 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:22:24.424487 kernel: audit: type=1403 audit(1773447742.733:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:22:24.424497 systemd[1]: Successfully loaded SELinux policy in 77.928ms.
Mar 14 00:22:24.424507 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.676ms.
Mar 14 00:22:24.424521 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:22:24.424531 systemd[1]: Detected virtualization microsoft.
Mar 14 00:22:24.424547 systemd[1]: Detected architecture x86-64.
Mar 14 00:22:24.424557 systemd[1]: Detected first boot.
Mar 14 00:22:24.424570 systemd[1]: Hostname set to .
Mar 14 00:22:24.424580 systemd[1]: Initializing machine ID from random generator.
Mar 14 00:22:24.424590 zram_generator::config[1190]: No configuration found.
Mar 14 00:22:24.424612 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:22:24.424622 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 00:22:24.424635 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 00:22:24.424645 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:22:24.424655 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:22:24.424669 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:22:24.424679 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:22:24.424695 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:22:24.424705 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:22:24.424717 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:22:24.424729 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:22:24.424740 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:22:24.424754 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:22:24.424766 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:22:24.424779 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:22:24.424791 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:22:24.424805 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:22:24.424816 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:22:24.424829 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 00:22:24.424839 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:22:24.424850 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 00:22:24.424863 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 00:22:24.424877 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:22:24.424888 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:22:24.424904 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:22:24.424914 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:22:24.424924 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:22:24.424934 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:22:24.424949 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:22:24.424959 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:22:24.424973 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:22:24.424986 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:22:24.425000 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:22:24.425012 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:22:24.425023 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:22:24.425037 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:22:24.425050 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:22:24.425061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:24.425075 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:22:24.425087 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:22:24.425102 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:22:24.425113 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:22:24.425124 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:22:24.425138 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:22:24.425152 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:22:24.425166 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:22:24.425176 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:22:24.425190 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:22:24.425201 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:22:24.425213 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:22:24.425232 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:22:24.425245 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:22:24.425260 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:22:24.425276 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 00:22:24.425286 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 00:22:24.425301 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 00:22:24.425311 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 00:22:24.425326 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:22:24.425336 kernel: ACPI: bus type drm_connector registered
Mar 14 00:22:24.425346 kernel: fuse: init (API version 7.39)
Mar 14 00:22:24.425358 kernel: loop: module loaded
Mar 14 00:22:24.425371 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:22:24.425400 systemd-journald[1293]: Collecting audit messages is disabled.
Mar 14 00:22:24.425426 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:22:24.425437 systemd-journald[1293]: Journal started
Mar 14 00:22:24.425465 systemd-journald[1293]: Runtime Journal (/run/log/journal/a3b5111f859742b4a3b5acfe1283100c) is 8.0M, max 158.7M, 150.7M free.
Mar 14 00:22:23.763391 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:22:23.794183 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 14 00:22:23.794603 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 14 00:22:24.443288 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:22:24.461537 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:22:24.467254 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 14 00:22:24.467329 systemd[1]: Stopped verity-setup.service.
Mar 14 00:22:24.480243 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:24.486243 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:22:24.492076 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:22:24.496026 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:22:24.500305 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:22:24.504201 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:22:24.508086 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:22:24.512169 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:22:24.515913 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:22:24.520568 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:22:24.525613 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:22:24.525915 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:22:24.530449 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:22:24.530729 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:22:24.535657 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:22:24.535977 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:22:24.540555 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:22:24.540872 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:22:24.545812 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:22:24.546088 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:22:24.550125 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:22:24.550512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:22:24.554583 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:22:24.558824 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:22:24.563399 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:22:24.567622 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:22:24.582060 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:22:24.591335 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:22:24.596331 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:22:24.600805 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:22:24.600953 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:22:24.605351 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:22:24.610177 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:22:24.615198 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:22:24.618784 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:22:24.650386 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:22:24.655481 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:22:24.662444 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:22:24.665416 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:22:24.672847 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:22:24.674894 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:22:24.688416 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:22:24.688567 systemd-journald[1293]: Time spent on flushing to /var/log/journal/a3b5111f859742b4a3b5acfe1283100c is 24.903ms for 952 entries.
Mar 14 00:22:24.688567 systemd-journald[1293]: System Journal (/var/log/journal/a3b5111f859742b4a3b5acfe1283100c) is 8.0M, max 2.6G, 2.6G free.
Mar 14 00:22:24.741302 systemd-journald[1293]: Received client request to flush runtime journal.
Mar 14 00:22:24.741357 kernel: loop0: detected capacity change from 0 to 140768
Mar 14 00:22:24.705401 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:22:24.713441 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:22:24.719377 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:22:24.726708 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:22:24.731562 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 00:22:24.735719 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 00:22:24.746957 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 00:22:24.754359 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 00:22:24.769780 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 00:22:24.776194 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:22:24.811733 udevadm[1330]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 14 00:22:24.890536 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 00:22:24.891399 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 00:22:24.936691 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 00:22:24.952275 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 00:22:24.954886 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:22:24.999459 kernel: loop1: detected capacity change from 0 to 142488
Mar 14 00:22:25.008436 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Mar 14 00:22:25.009637 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Mar 14 00:22:25.018704 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:22:25.146256 kernel: loop2: detected capacity change from 0 to 31056
Mar 14 00:22:25.248249 kernel: loop3: detected capacity change from 0 to 228704
Mar 14 00:22:25.282250 kernel: loop4: detected capacity change from 0 to 140768
Mar 14 00:22:25.343288 kernel: loop5: detected capacity change from 0 to 142488
Mar 14 00:22:25.439082 kernel: loop6: detected capacity change from 0 to 31056
Mar 14 00:22:25.463244 kernel: loop7: detected capacity change from 0 to 228704
Mar 14 00:22:25.482932 (sd-merge)[1351]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 14 00:22:25.485118 (sd-merge)[1351]: Merged extensions into '/usr'.
Mar 14 00:22:25.501869 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 00:22:25.515413 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:22:25.537032 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
Mar 14 00:22:25.541838 systemd[1]: Reloading requested from client PID 1327 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 00:22:25.541857 systemd[1]: Reloading...
Mar 14 00:22:25.623478 zram_generator::config[1378]: No configuration found.
Mar 14 00:22:25.851337 kernel: mousedev: PS/2 mouse device common for all mice
Mar 14 00:22:25.860399 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 14 00:22:25.902267 kernel: hv_vmbus: registering driver hv_balloon
Mar 14 00:22:25.908281 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 14 00:22:25.956868 kernel: hv_vmbus: registering driver hyperv_fb
Mar 14 00:22:25.996260 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 14 00:22:26.009863 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 14 00:22:26.020874 kernel: Console: switching to colour dummy device 80x25
Mar 14 00:22:26.034245 kernel: Console: switching to colour frame buffer device 128x48
Mar 14 00:22:26.051875 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:22:26.188248 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1400)
Mar 14 00:22:26.288472 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Mar 14 00:22:26.308820 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 14 00:22:26.311013 systemd[1]: Reloading finished in 768 ms.
Mar 14 00:22:26.343289 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:22:26.357925 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 00:22:26.422928 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 14 00:22:26.435545 systemd[1]: Starting ensure-sysext.service...
Mar 14 00:22:26.441446 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 14 00:22:26.451911 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:22:26.464432 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:22:26.475350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:22:26.483393 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 14 00:22:26.503683 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 14 00:22:26.510799 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 14 00:22:26.515068 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 14 00:22:26.519590 systemd[1]: Reloading requested from client PID 1512 ('systemctl') (unit ensure-sysext.service)...
Mar 14 00:22:26.520423 systemd[1]: Reloading...
Mar 14 00:22:26.533819 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 14 00:22:26.534352 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 14 00:22:26.539653 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 14 00:22:26.553988 lvm[1520]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:22:26.540081 systemd-tmpfiles[1515]: ACLs are not supported, ignoring.
Mar 14 00:22:26.540169 systemd-tmpfiles[1515]: ACLs are not supported, ignoring.
Mar 14 00:22:26.557596 systemd-tmpfiles[1515]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:22:26.558296 systemd-tmpfiles[1515]: Skipping /boot
Mar 14 00:22:26.598561 systemd-tmpfiles[1515]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:22:26.598710 systemd-tmpfiles[1515]: Skipping /boot
Mar 14 00:22:26.673756 zram_generator::config[1548]: No configuration found.
Mar 14 00:22:26.788351 systemd-networkd[1514]: lo: Link UP
Mar 14 00:22:26.788366 systemd-networkd[1514]: lo: Gained carrier
Mar 14 00:22:26.792666 systemd-networkd[1514]: Enumeration completed
Mar 14 00:22:26.793191 systemd-networkd[1514]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:22:26.793204 systemd-networkd[1514]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:22:26.847251 kernel: mlx5_core 3e39:00:02.0 enP15929s1: Link up
Mar 14 00:22:26.866323 kernel: hv_netvsc 7ced8d49-a7fd-7ced-8d49-a7fd7ced8d49 eth0: Data path switched to VF: enP15929s1
Mar 14 00:22:26.867204 systemd-networkd[1514]: enP15929s1: Link UP
Mar 14 00:22:26.867375 systemd-networkd[1514]: eth0: Link UP
Mar 14 00:22:26.867380 systemd-networkd[1514]: eth0: Gained carrier
Mar 14 00:22:26.867403 systemd-networkd[1514]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:22:26.869038 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:22:26.871573 systemd-networkd[1514]: enP15929s1: Gained carrier
Mar 14 00:22:26.907309 systemd-networkd[1514]: eth0: DHCPv4 address 10.200.8.29/24, gateway 10.200.8.1 acquired from 168.63.129.16
Mar 14 00:22:26.967183 systemd[1]: Reloading finished in 446 ms.
Mar 14 00:22:26.982767 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 14 00:22:26.986641 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:22:26.998736 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:22:27.003190 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:22:27.007394 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 14 00:22:27.020300 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:22:27.029638 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:22:27.036628 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 14 00:22:27.048103 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 14 00:22:27.055598 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 14 00:22:27.064858 lvm[1626]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:22:27.077348 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 14 00:22:27.092588 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:22:27.100497 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 14 00:22:27.108859 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:27.109091 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:22:27.117041 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:22:27.125379 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:22:27.134546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:22:27.141449 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:22:27.141626 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:27.143561 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 14 00:22:27.148604 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:22:27.148793 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:22:27.153449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:22:27.153650 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:22:27.158351 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:22:27.158525 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:22:27.168584 augenrules[1647]: No rules
Mar 14 00:22:27.170834 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:22:27.175191 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 14 00:22:27.192330 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:27.193687 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:22:27.201549 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:22:27.208657 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:22:27.213827 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:22:27.217473 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:22:27.217633 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:27.220914 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 14 00:22:27.226187 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:22:27.226409 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:22:27.246070 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:22:27.246483 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:22:27.251607 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:22:27.251781 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:22:27.257160 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:27.257487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:22:27.267081 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:22:27.271339 systemd-resolved[1639]: Positive Trust Anchors:
Mar 14 00:22:27.271356 systemd-resolved[1639]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:22:27.271425 systemd-resolved[1639]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:22:27.274105 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:22:27.279394 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:22:27.279567 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:22:27.279784 systemd[1]: Reached target time-set.target - System Time Set.
Mar 14 00:22:27.282697 systemd-resolved[1639]: Using system hostname 'ci-4081.3.6-n-2b39e14e44'.
Mar 14 00:22:27.284102 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:27.285662 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:22:27.290263 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:22:27.290453 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:22:27.294648 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:22:27.294815 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:22:27.301056 systemd[1]: Finished ensure-sysext.service.
Mar 14 00:22:27.305668 systemd[1]: Reached target network.target - Network.
Mar 14 00:22:27.308798 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:22:27.312748 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:22:27.366081 ldconfig[1322]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 14 00:22:27.377967 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 14 00:22:27.386471 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 14 00:22:27.400341 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 14 00:22:27.453160 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 14 00:22:27.457398 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 14 00:22:27.457458 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:22:27.460812 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 14 00:22:27.464613 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 14 00:22:27.468446 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 14 00:22:27.472122 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 14 00:22:27.475908 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 14 00:22:27.480248 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 14 00:22:27.480296 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:22:27.483053 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:22:27.486955 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 14 00:22:27.492114 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 14 00:22:27.507271 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 14 00:22:27.511346 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 14 00:22:27.514855 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:22:27.517952 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:22:27.520857 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:22:27.520879 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:22:27.528359 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 14 00:22:27.534384 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 14 00:22:27.543480 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 14 00:22:27.554445 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 14 00:22:27.568341 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 14 00:22:27.573504 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 14 00:22:27.577497 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 14 00:22:27.577547 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Mar 14 00:22:27.579122 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 14 00:22:27.583003 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 14 00:22:27.586405 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 14 00:22:27.597282 jq[1683]: false
Mar 14 00:22:27.611873 KVP[1685]: KVP starting; pid is:1685
Mar 14 00:22:27.597411 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 14 00:22:27.610840 (chronyd)[1677]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Mar 14 00:22:27.615443 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 14 00:22:27.625416 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 14 00:22:27.635413 KVP[1685]: KVP LIC Version: 3.1
Mar 14 00:22:27.636262 kernel: hv_utils: KVP IC version 4.0
Mar 14 00:22:27.640499 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 14 00:22:27.640777 chronyd[1695]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Mar 14 00:22:27.644644 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 14 00:22:27.645721 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 14 00:22:27.652434 systemd[1]: Starting update-engine.service - Update Engine...
Mar 14 00:22:27.658394 chronyd[1695]: Timezone right/UTC failed leap second check, ignoring
Mar 14 00:22:27.658885 chronyd[1695]: Loaded seccomp filter (level 2)
Mar 14 00:22:27.661336 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 14 00:22:27.671592 jq[1701]: true
Mar 14 00:22:27.672061 systemd[1]: Started chronyd.service - NTP client/server.
Mar 14 00:22:27.680259 extend-filesystems[1684]: Found loop4
Mar 14 00:22:27.680259 extend-filesystems[1684]: Found loop5
Mar 14 00:22:27.680259 extend-filesystems[1684]: Found loop6
Mar 14 00:22:27.680259 extend-filesystems[1684]: Found loop7
Mar 14 00:22:27.680259 extend-filesystems[1684]: Found sda
Mar 14 00:22:27.680259 extend-filesystems[1684]: Found sda1
Mar 14 00:22:27.680259 extend-filesystems[1684]: Found sda2
Mar 14 00:22:27.680259 extend-filesystems[1684]: Found sda3
Mar 14 00:22:27.680259 extend-filesystems[1684]: Found usr
Mar 14 00:22:27.680259 extend-filesystems[1684]: Found sda4
Mar 14 00:22:27.680259 extend-filesystems[1684]: Found sda6
Mar 14 00:22:27.680259 extend-filesystems[1684]: Found sda7
Mar 14 00:22:27.680259 extend-filesystems[1684]: Found sda9
Mar 14 00:22:27.680259 extend-filesystems[1684]: Checking size of /dev/sda9
Mar 14 00:22:27.689613 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 14 00:22:27.718944 update_engine[1700]: I20260314 00:22:27.704522 1700 main.cc:92] Flatcar Update Engine starting
Mar 14 00:22:27.689820 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 14 00:22:27.690156 systemd[1]: motdgen.service: Deactivated successfully.
Mar 14 00:22:27.690355 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 14 00:22:27.710662 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 14 00:22:27.710912 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 14 00:22:27.766366 extend-filesystems[1684]: Old size kept for /dev/sda9
Mar 14 00:22:27.769278 extend-filesystems[1684]: Found sr0
Mar 14 00:22:27.773080 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 14 00:22:27.774509 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 14 00:22:27.777773 jq[1709]: true
Mar 14 00:22:27.783694 (ntainerd)[1710]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 14 00:22:27.799799 dbus-daemon[1680]: [system] SELinux support is enabled
Mar 14 00:22:27.800030 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 14 00:22:27.806780 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 14 00:22:27.806825 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 14 00:22:27.807979 update_engine[1700]: I20260314 00:22:27.807933 1700 update_check_scheduler.cc:74] Next update check in 6m46s
Mar 14 00:22:27.813245 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 14 00:22:27.813287 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 14 00:22:27.815155 systemd-logind[1694]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 14 00:22:27.817463 systemd-logind[1694]: New seat seat0.
Mar 14 00:22:27.821700 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 14 00:22:27.831865 systemd[1]: Started update-engine.service - Update Engine.
Mar 14 00:22:27.839806 dbus-daemon[1680]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 14 00:22:27.843401 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 14 00:22:27.852438 tar[1708]: linux-amd64/LICENSE
Mar 14 00:22:27.852744 tar[1708]: linux-amd64/helm
Mar 14 00:22:27.949877 coreos-metadata[1679]: Mar 14 00:22:27.949 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 14 00:22:27.950684 bash[1746]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 00:22:27.953532 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 14 00:22:27.959630 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 14 00:22:27.959948 coreos-metadata[1679]: Mar 14 00:22:27.959 INFO Fetch successful
Mar 14 00:22:27.960015 coreos-metadata[1679]: Mar 14 00:22:27.959 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 14 00:22:27.967471 coreos-metadata[1679]: Mar 14 00:22:27.966 INFO Fetch successful
Mar 14 00:22:27.967471 coreos-metadata[1679]: Mar 14 00:22:27.966 INFO Fetching http://168.63.129.16/machine/ac0c51c6-c383-4d86-bbc4-f087ee2e2723/e1914805%2D120e%2D4185%2D8af1%2Dff7ddf4baec6.%5Fci%2D4081.3.6%2Dn%2D2b39e14e44?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 14 00:22:27.967960 coreos-metadata[1679]: Mar 14 00:22:27.967 INFO Fetch successful
Mar 14 00:22:27.969533 coreos-metadata[1679]: Mar 14 00:22:27.967 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 14 00:22:27.977249 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1413)
Mar 14 00:22:27.979459 coreos-metadata[1679]: Mar 14 00:22:27.979 INFO Fetch successful
Mar 14 00:22:28.035608 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 14 00:22:28.055063 systemd-networkd[1514]: eth0: Gained IPv6LL
Mar 14 00:22:28.061356 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 14 00:22:28.089720 systemd[1]: Reached target network-online.target - Network is Online.
Mar 14 00:22:28.106438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:22:28.113523 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:22:28.120762 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 14 00:22:28.134729 sshd_keygen[1707]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:22:28.206644 locksmithd[1729]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:22:28.209453 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 14 00:22:28.219553 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 14 00:22:28.233322 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 14 00:22:28.249515 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 14 00:22:28.266851 systemd[1]: issuegen.service: Deactivated successfully. Mar 14 00:22:28.267108 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:22:28.283367 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:22:28.320548 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:22:28.333567 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 14 00:22:28.351545 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 14 00:22:28.356841 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:22:28.382463 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 14 00:22:28.730654 tar[1708]: linux-amd64/README.md Mar 14 00:22:28.741837 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Mar 14 00:22:29.125303 containerd[1710]: time="2026-03-14T00:22:29.123829000Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:22:29.153062 containerd[1710]: time="2026-03-14T00:22:29.153005800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:22:29.154695 containerd[1710]: time="2026-03-14T00:22:29.154651100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:22:29.154811 containerd[1710]: time="2026-03-14T00:22:29.154696200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:22:29.154811 containerd[1710]: time="2026-03-14T00:22:29.154720600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 14 00:22:29.154932 containerd[1710]: time="2026-03-14T00:22:29.154905400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 14 00:22:29.154978 containerd[1710]: time="2026-03-14T00:22:29.154943400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 14 00:22:29.155055 containerd[1710]: time="2026-03-14T00:22:29.155032000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:22:29.155110 containerd[1710]: time="2026-03-14T00:22:29.155051900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Mar 14 00:22:29.155306 containerd[1710]: time="2026-03-14T00:22:29.155278300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:22:29.155306 containerd[1710]: time="2026-03-14T00:22:29.155303400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 14 00:22:29.155413 containerd[1710]: time="2026-03-14T00:22:29.155320700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:22:29.155413 containerd[1710]: time="2026-03-14T00:22:29.155333300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 14 00:22:29.155497 containerd[1710]: time="2026-03-14T00:22:29.155477400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:22:29.155725 containerd[1710]: time="2026-03-14T00:22:29.155697500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:22:29.155879 containerd[1710]: time="2026-03-14T00:22:29.155853200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:22:29.155927 containerd[1710]: time="2026-03-14T00:22:29.155876100Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Mar 14 00:22:29.155998 containerd[1710]: time="2026-03-14T00:22:29.155977800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 14 00:22:29.156060 containerd[1710]: time="2026-03-14T00:22:29.156040300Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:22:29.203009 containerd[1710]: time="2026-03-14T00:22:29.202930000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 14 00:22:29.204251 containerd[1710]: time="2026-03-14T00:22:29.203215200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 14 00:22:29.204251 containerd[1710]: time="2026-03-14T00:22:29.203276400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:22:29.204251 containerd[1710]: time="2026-03-14T00:22:29.203302300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:22:29.204251 containerd[1710]: time="2026-03-14T00:22:29.203322000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 14 00:22:29.204251 containerd[1710]: time="2026-03-14T00:22:29.203513400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:22:29.204251 containerd[1710]: time="2026-03-14T00:22:29.203829800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 14 00:22:29.204251 containerd[1710]: time="2026-03-14T00:22:29.203948100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 14 00:22:29.204251 containerd[1710]: time="2026-03-14T00:22:29.203969400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Mar 14 00:22:29.204251 containerd[1710]: time="2026-03-14T00:22:29.203989400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 14 00:22:29.204251 containerd[1710]: time="2026-03-14T00:22:29.204008900Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 14 00:22:29.204251 containerd[1710]: time="2026-03-14T00:22:29.204028200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 14 00:22:29.204251 containerd[1710]: time="2026-03-14T00:22:29.204045200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:22:29.204251 containerd[1710]: time="2026-03-14T00:22:29.204064700Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 14 00:22:29.204251 containerd[1710]: time="2026-03-14T00:22:29.204084000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 14 00:22:29.204767 containerd[1710]: time="2026-03-14T00:22:29.204101800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:22:29.204767 containerd[1710]: time="2026-03-14T00:22:29.204119800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 14 00:22:29.204767 containerd[1710]: time="2026-03-14T00:22:29.204137600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 14 00:22:29.204767 containerd[1710]: time="2026-03-14T00:22:29.204165000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Mar 14 00:22:29.204767 containerd[1710]: time="2026-03-14T00:22:29.204183500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 14 00:22:29.204975 containerd[1710]: time="2026-03-14T00:22:29.204215500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 14 00:22:29.205067 containerd[1710]: time="2026-03-14T00:22:29.205047500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 14 00:22:29.205137 containerd[1710]: time="2026-03-14T00:22:29.205123900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:22:29.205209 containerd[1710]: time="2026-03-14T00:22:29.205196400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:22:29.205326 containerd[1710]: time="2026-03-14T00:22:29.205308500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:22:29.206184 containerd[1710]: time="2026-03-14T00:22:29.205391900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 14 00:22:29.206184 containerd[1710]: time="2026-03-14T00:22:29.205417100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:22:29.206184 containerd[1710]: time="2026-03-14T00:22:29.205440400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 14 00:22:29.206184 containerd[1710]: time="2026-03-14T00:22:29.205456900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 14 00:22:29.206184 containerd[1710]: time="2026-03-14T00:22:29.205473100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Mar 14 00:22:29.206184 containerd[1710]: time="2026-03-14T00:22:29.205490800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 14 00:22:29.206184 containerd[1710]: time="2026-03-14T00:22:29.205520400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 14 00:22:29.206184 containerd[1710]: time="2026-03-14T00:22:29.205551100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 14 00:22:29.206184 containerd[1710]: time="2026-03-14T00:22:29.205567600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 14 00:22:29.206184 containerd[1710]: time="2026-03-14T00:22:29.205593500Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:22:29.206184 containerd[1710]: time="2026-03-14T00:22:29.205675800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 14 00:22:29.206184 containerd[1710]: time="2026-03-14T00:22:29.205704200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 14 00:22:29.206184 containerd[1710]: time="2026-03-14T00:22:29.205720000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 14 00:22:29.206687 containerd[1710]: time="2026-03-14T00:22:29.205754600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 14 00:22:29.206687 containerd[1710]: time="2026-03-14T00:22:29.205770600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Mar 14 00:22:29.206687 containerd[1710]: time="2026-03-14T00:22:29.205788400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 14 00:22:29.206687 containerd[1710]: time="2026-03-14T00:22:29.205806800Z" level=info msg="NRI interface is disabled by configuration." Mar 14 00:22:29.206687 containerd[1710]: time="2026-03-14T00:22:29.205820600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 14 00:22:29.206911 containerd[1710]: time="2026-03-14T00:22:29.206502000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 14 00:22:29.206911 containerd[1710]: time="2026-03-14T00:22:29.206654200Z" level=info msg="Connect containerd service" Mar 14 00:22:29.206911 containerd[1710]: time="2026-03-14T00:22:29.206719800Z" level=info msg="using legacy CRI server" Mar 14 00:22:29.206911 containerd[1710]: time="2026-03-14T00:22:29.206732700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 14 00:22:29.207204 containerd[1710]: time="2026-03-14T00:22:29.206951100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 14 00:22:29.207997 containerd[1710]: time="2026-03-14T00:22:29.207963800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Mar 14 00:22:29.209788 containerd[1710]: time="2026-03-14T00:22:29.209192800Z" level=info msg="Start subscribing containerd event" Mar 14 00:22:29.209788 containerd[1710]: time="2026-03-14T00:22:29.209299900Z" level=info msg="Start recovering state" Mar 14 00:22:29.209788 containerd[1710]: time="2026-03-14T00:22:29.209395100Z" level=info msg="Start event monitor" Mar 14 00:22:29.209788 containerd[1710]: time="2026-03-14T00:22:29.209411200Z" level=info msg="Start snapshots syncer" Mar 14 00:22:29.209788 containerd[1710]: time="2026-03-14T00:22:29.209425400Z" level=info msg="Start cni network conf syncer for default" Mar 14 00:22:29.209788 containerd[1710]: time="2026-03-14T00:22:29.209442700Z" level=info msg="Start streaming server" Mar 14 00:22:29.210035 containerd[1710]: time="2026-03-14T00:22:29.209925900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 14 00:22:29.210824 containerd[1710]: time="2026-03-14T00:22:29.210269200Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 14 00:22:29.210933 systemd[1]: Started containerd.service - containerd container runtime. Mar 14 00:22:29.214316 containerd[1710]: time="2026-03-14T00:22:29.214292500Z" level=info msg="containerd successfully booted in 0.091698s" Mar 14 00:22:29.502211 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:22:29.506853 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 14 00:22:29.511878 systemd[1]: Startup finished in 1.067s (kernel) + 13.666s (initrd) + 6.854s (userspace) = 21.587s. 
Mar 14 00:22:29.524472 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:22:29.716761 login[1811]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 14 00:22:29.718685 login[1812]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 14 00:22:29.730915 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:22:29.737506 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 14 00:22:29.742438 systemd-logind[1694]: New session 2 of user core. Mar 14 00:22:29.749490 systemd-logind[1694]: New session 1 of user core. Mar 14 00:22:29.757880 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 14 00:22:29.767563 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 14 00:22:29.775256 (systemd)[1843]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:22:29.925423 systemd[1843]: Queued start job for default target default.target. Mar 14 00:22:29.931840 systemd[1843]: Created slice app.slice - User Application Slice. Mar 14 00:22:29.931882 systemd[1843]: Reached target paths.target - Paths. Mar 14 00:22:29.931900 systemd[1843]: Reached target timers.target - Timers. Mar 14 00:22:29.933496 systemd[1843]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 14 00:22:29.948025 systemd[1843]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 14 00:22:29.948090 systemd[1843]: Reached target sockets.target - Sockets. Mar 14 00:22:29.948107 systemd[1843]: Reached target basic.target - Basic System. Mar 14 00:22:29.948151 systemd[1843]: Reached target default.target - Main User Target. Mar 14 00:22:29.948185 systemd[1843]: Startup finished in 166ms. Mar 14 00:22:29.948510 systemd[1]: Started user@500.service - User Manager for UID 500. 
Mar 14 00:22:29.953403 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 14 00:22:29.954969 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 14 00:22:30.256089 kubelet[1832]: E0314 00:22:30.256029 1832 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:22:30.258841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:22:30.259046 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:22:31.142652 waagent[1815]: 2026-03-14T00:22:31.142549Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 14 00:22:31.164254 waagent[1815]: 2026-03-14T00:22:31.143089Z INFO Daemon Daemon OS: flatcar 4081.3.6 Mar 14 00:22:31.164254 waagent[1815]: 2026-03-14T00:22:31.144883Z INFO Daemon Daemon Python: 3.11.9 Mar 14 00:22:31.164254 waagent[1815]: 2026-03-14T00:22:31.146100Z INFO Daemon Daemon Run daemon Mar 14 00:22:31.164254 waagent[1815]: 2026-03-14T00:22:31.147243Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Mar 14 00:22:31.164254 waagent[1815]: 2026-03-14T00:22:31.147689Z INFO Daemon Daemon Using waagent for provisioning Mar 14 00:22:31.164254 waagent[1815]: 2026-03-14T00:22:31.148337Z INFO Daemon Daemon Activate resource disk Mar 14 00:22:31.164254 waagent[1815]: 2026-03-14T00:22:31.148770Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 14 00:22:31.164254 waagent[1815]: 2026-03-14T00:22:31.153655Z INFO Daemon Daemon Found device: None Mar 14 00:22:31.164254 waagent[1815]: 2026-03-14T00:22:31.153808Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk 
topology Mar 14 00:22:31.164254 waagent[1815]: 2026-03-14T00:22:31.154926Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 14 00:22:31.164254 waagent[1815]: 2026-03-14T00:22:31.157587Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 14 00:22:31.164254 waagent[1815]: 2026-03-14T00:22:31.158347Z INFO Daemon Daemon Running default provisioning handler Mar 14 00:22:31.192250 waagent[1815]: 2026-03-14T00:22:31.192166Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 14 00:22:31.201052 waagent[1815]: 2026-03-14T00:22:31.200991Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 14 00:22:31.212021 waagent[1815]: 2026-03-14T00:22:31.201206Z INFO Daemon Daemon cloud-init is enabled: False Mar 14 00:22:31.212021 waagent[1815]: 2026-03-14T00:22:31.202042Z INFO Daemon Daemon Copying ovf-env.xml Mar 14 00:22:31.313763 waagent[1815]: 2026-03-14T00:22:31.313442Z INFO Daemon Daemon Successfully mounted dvd Mar 14 00:22:31.358452 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 14 00:22:31.360002 waagent[1815]: 2026-03-14T00:22:31.359928Z INFO Daemon Daemon Detect protocol endpoint Mar 14 00:22:31.367034 waagent[1815]: 2026-03-14T00:22:31.360296Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 14 00:22:31.367034 waagent[1815]: 2026-03-14T00:22:31.360914Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 14 00:22:31.367034 waagent[1815]: 2026-03-14T00:22:31.361858Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 14 00:22:31.367034 waagent[1815]: 2026-03-14T00:22:31.363161Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 14 00:22:31.367034 waagent[1815]: 2026-03-14T00:22:31.364021Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 14 00:22:31.406505 waagent[1815]: 2026-03-14T00:22:31.406381Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 14 00:22:31.410255 waagent[1815]: 2026-03-14T00:22:31.406848Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 14 00:22:31.410255 waagent[1815]: 2026-03-14T00:22:31.407912Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 14 00:22:31.580681 waagent[1815]: 2026-03-14T00:22:31.580571Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 14 00:22:31.584735 waagent[1815]: 2026-03-14T00:22:31.584665Z INFO Daemon Daemon Forcing an update of the goal state. Mar 14 00:22:31.590970 waagent[1815]: 2026-03-14T00:22:31.590917Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 14 00:22:31.609399 waagent[1815]: 2026-03-14T00:22:31.609347Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 14 00:22:31.632016 waagent[1815]: 2026-03-14T00:22:31.609982Z INFO Daemon Mar 14 00:22:31.632016 waagent[1815]: 2026-03-14T00:22:31.610875Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: b75f23c7-39f2-4f5b-9f5b-fe9a77a31d63 eTag: 2521262468998301887 source: Fabric] Mar 14 00:22:31.632016 waagent[1815]: 2026-03-14T00:22:31.612216Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Mar 14 00:22:31.632016 waagent[1815]: 2026-03-14T00:22:31.613005Z INFO Daemon Mar 14 00:22:31.632016 waagent[1815]: 2026-03-14T00:22:31.614379Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 14 00:22:31.632016 waagent[1815]: 2026-03-14T00:22:31.618520Z INFO Daemon Daemon Downloading artifacts profile blob Mar 14 00:22:31.761500 waagent[1815]: 2026-03-14T00:22:31.761414Z INFO Daemon Downloaded certificate {'thumbprint': '952AEFB38DED182B8E7CB0419251FF927AC1A31E', 'hasPrivateKey': True} Mar 14 00:22:31.767739 waagent[1815]: 2026-03-14T00:22:31.767672Z INFO Daemon Fetch goal state completed Mar 14 00:22:31.775767 waagent[1815]: 2026-03-14T00:22:31.775720Z INFO Daemon Daemon Starting provisioning Mar 14 00:22:31.793312 waagent[1815]: 2026-03-14T00:22:31.778493Z INFO Daemon Daemon Handle ovf-env.xml. Mar 14 00:22:31.793312 waagent[1815]: 2026-03-14T00:22:31.778822Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-2b39e14e44] Mar 14 00:22:31.793312 waagent[1815]: 2026-03-14T00:22:31.781539Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-2b39e14e44] Mar 14 00:22:31.793312 waagent[1815]: 2026-03-14T00:22:31.782772Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 14 00:22:31.793312 waagent[1815]: 2026-03-14T00:22:31.783338Z INFO Daemon Daemon Primary interface is [eth0] Mar 14 00:22:31.896487 systemd-networkd[1514]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:22:31.896496 systemd-networkd[1514]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 14 00:22:31.896552 systemd-networkd[1514]: eth0: DHCP lease lost Mar 14 00:22:31.898023 waagent[1815]: 2026-03-14T00:22:31.897939Z INFO Daemon Daemon Create user account if not exists Mar 14 00:22:31.917621 waagent[1815]: 2026-03-14T00:22:31.898333Z INFO Daemon Daemon User core already exists, skip useradd Mar 14 00:22:31.917621 waagent[1815]: 2026-03-14T00:22:31.899420Z INFO Daemon Daemon Configure sudoer Mar 14 00:22:31.917621 waagent[1815]: 2026-03-14T00:22:31.900206Z INFO Daemon Daemon Configure sshd Mar 14 00:22:31.917621 waagent[1815]: 2026-03-14T00:22:31.900673Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 14 00:22:31.917621 waagent[1815]: 2026-03-14T00:22:31.901447Z INFO Daemon Daemon Deploy ssh public key. Mar 14 00:22:31.917914 systemd-networkd[1514]: eth0: DHCPv6 lease lost Mar 14 00:22:31.960274 systemd-networkd[1514]: eth0: DHCPv4 address 10.200.8.29/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 14 00:22:40.433390 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 14 00:22:40.438781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:22:41.198338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:22:41.203040 (kubelet)[1903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:22:41.492812 kubelet[1903]: E0314 00:22:41.492690 1903 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:22:41.496654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:22:41.496861 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:22:51.468076 chronyd[1695]: Selected source PHC0 Mar 14 00:22:51.683687 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 14 00:22:51.691439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:22:51.801069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:22:51.816605 (kubelet)[1918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:22:52.453259 kubelet[1918]: E0314 00:22:52.453148 1918 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:22:52.455733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:22:52.455942 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 14 00:23:01.974132 waagent[1815]: 2026-03-14T00:23:01.974060Z INFO Daemon Daemon Provisioning complete
Mar 14 00:23:01.984488 waagent[1815]: 2026-03-14T00:23:01.984434Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Mar 14 00:23:01.993748 waagent[1815]: 2026-03-14T00:23:01.984738Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Mar 14 00:23:01.993748 waagent[1815]: 2026-03-14T00:23:01.985901Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Mar 14 00:23:02.112941 waagent[1925]: 2026-03-14T00:23:02.112825Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Mar 14 00:23:02.113396 waagent[1925]: 2026-03-14T00:23:02.113009Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6
Mar 14 00:23:02.113396 waagent[1925]: 2026-03-14T00:23:02.113096Z INFO ExtHandler ExtHandler Python: 3.11.9
Mar 14 00:23:02.251788 waagent[1925]: 2026-03-14T00:23:02.251613Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Mar 14 00:23:02.251983 waagent[1925]: 2026-03-14T00:23:02.251927Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 14 00:23:02.252092 waagent[1925]: 2026-03-14T00:23:02.252048Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 14 00:23:02.259015 waagent[1925]: 2026-03-14T00:23:02.258946Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Mar 14 00:23:02.268914 waagent[1925]: 2026-03-14T00:23:02.268849Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179
Mar 14 00:23:02.269471 waagent[1925]: 2026-03-14T00:23:02.269411Z INFO ExtHandler
Mar 14 00:23:02.269570 waagent[1925]: 2026-03-14T00:23:02.269517Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9c26b09b-eb31-43c1-a6ab-ecc4cf16b97e eTag: 2521262468998301887 source: Fabric]
Mar 14 00:23:02.269893 waagent[1925]: 2026-03-14T00:23:02.269841Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Mar 14 00:23:02.270596 waagent[1925]: 2026-03-14T00:23:02.270536Z INFO ExtHandler
Mar 14 00:23:02.270669 waagent[1925]: 2026-03-14T00:23:02.270638Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Mar 14 00:23:02.273948 waagent[1925]: 2026-03-14T00:23:02.273907Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Mar 14 00:23:02.334995 waagent[1925]: 2026-03-14T00:23:02.334900Z INFO ExtHandler Downloaded certificate {'thumbprint': '952AEFB38DED182B8E7CB0419251FF927AC1A31E', 'hasPrivateKey': True}
Mar 14 00:23:02.335565 waagent[1925]: 2026-03-14T00:23:02.335503Z INFO ExtHandler Fetch goal state completed
Mar 14 00:23:02.348167 waagent[1925]: 2026-03-14T00:23:02.348100Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1925
Mar 14 00:23:02.348365 waagent[1925]: 2026-03-14T00:23:02.348315Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Mar 14 00:23:02.350011 waagent[1925]: 2026-03-14T00:23:02.349948Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk']
Mar 14 00:23:02.350412 waagent[1925]: 2026-03-14T00:23:02.350364Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Mar 14 00:23:02.364111 waagent[1925]: 2026-03-14T00:23:02.364062Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Mar 14 00:23:02.364377 waagent[1925]: 2026-03-14T00:23:02.364325Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Mar 14 00:23:02.371217 waagent[1925]: 2026-03-14T00:23:02.371173Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Mar 14 00:23:02.378670 systemd[1]: Reloading requested from client PID 1938 ('systemctl') (unit waagent.service)...
Mar 14 00:23:02.378688 systemd[1]: Reloading...
Mar 14 00:23:02.478323 zram_generator::config[1973]: No configuration found.
Mar 14 00:23:02.595794 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:23:02.677729 systemd[1]: Reloading finished in 298 ms.
Mar 14 00:23:02.702399 waagent[1925]: 2026-03-14T00:23:02.701211Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Mar 14 00:23:02.706914 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 14 00:23:02.714434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:02.719105 systemd[1]: Reloading requested from client PID 2029 ('systemctl') (unit waagent.service)...
Mar 14 00:23:02.719142 systemd[1]: Reloading...
Mar 14 00:23:02.840994 zram_generator::config[2062]: No configuration found.
Mar 14 00:23:02.985156 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:23:03.068170 systemd[1]: Reloading finished in 348 ms.
Mar 14 00:23:03.097257 waagent[1925]: 2026-03-14T00:23:03.095451Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Mar 14 00:23:03.097257 waagent[1925]: 2026-03-14T00:23:03.095689Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Mar 14 00:23:06.991596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:06.996733 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:23:07.112663 kubelet[2133]: E0314 00:23:07.112615 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:23:07.115264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:23:07.115789 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:23:07.184737 waagent[1925]: 2026-03-14T00:23:07.184645Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Mar 14 00:23:07.185451 waagent[1925]: 2026-03-14T00:23:07.185385Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Mar 14 00:23:07.186258 waagent[1925]: 2026-03-14T00:23:07.186192Z INFO ExtHandler ExtHandler Starting env monitor service.
Mar 14 00:23:07.186690 waagent[1925]: 2026-03-14T00:23:07.186628Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Mar 14 00:23:07.187147 waagent[1925]: 2026-03-14T00:23:07.187090Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Mar 14 00:23:07.187318 waagent[1925]: 2026-03-14T00:23:07.187248Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 14 00:23:07.187386 waagent[1925]: 2026-03-14T00:23:07.187349Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Mar 14 00:23:07.187704 waagent[1925]: 2026-03-14T00:23:07.187651Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Mar 14 00:23:07.187824 waagent[1925]: 2026-03-14T00:23:07.187785Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 14 00:23:07.187916 waagent[1925]: 2026-03-14T00:23:07.187878Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 14 00:23:07.188149 waagent[1925]: 2026-03-14T00:23:07.188104Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Mar 14 00:23:07.188435 waagent[1925]: 2026-03-14T00:23:07.188350Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Mar 14 00:23:07.188612 waagent[1925]: 2026-03-14T00:23:07.188568Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Mar 14 00:23:07.188612 waagent[1925]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Mar 14 00:23:07.188612 waagent[1925]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Mar 14 00:23:07.188612 waagent[1925]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Mar 14 00:23:07.188612 waagent[1925]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Mar 14 00:23:07.188612 waagent[1925]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 14 00:23:07.188612 waagent[1925]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 14 00:23:07.189473 waagent[1925]: 2026-03-14T00:23:07.189424Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Mar 14 00:23:07.190181 waagent[1925]: 2026-03-14T00:23:07.190136Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 14 00:23:07.191571 waagent[1925]: 2026-03-14T00:23:07.191517Z INFO EnvHandler ExtHandler Configure routes
Mar 14 00:23:07.192489 waagent[1925]: 2026-03-14T00:23:07.192442Z INFO EnvHandler ExtHandler Gateway:None
Mar 14 00:23:07.192565 waagent[1925]: 2026-03-14T00:23:07.192523Z INFO EnvHandler ExtHandler Routes:None
Mar 14 00:23:07.197848 waagent[1925]: 2026-03-14T00:23:07.196725Z INFO ExtHandler ExtHandler
Mar 14 00:23:07.200213 waagent[1925]: 2026-03-14T00:23:07.198441Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 97acb983-4ed1-4adf-9a1c-f0ea6e0bf22d correlation fe3c7b5a-801a-4b29-b649-1d891473b6db created: 2026-03-14T00:21:51.616013Z]
Mar 14 00:23:07.200213 waagent[1925]: 2026-03-14T00:23:07.198934Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Mar 14 00:23:07.200213 waagent[1925]: 2026-03-14T00:23:07.199676Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms]
Mar 14 00:23:07.211835 waagent[1925]: 2026-03-14T00:23:07.211772Z INFO MonitorHandler ExtHandler Network interfaces:
Mar 14 00:23:07.211835 waagent[1925]: Executing ['ip', '-a', '-o', 'link']:
Mar 14 00:23:07.211835 waagent[1925]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Mar 14 00:23:07.211835 waagent[1925]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:49:a7:fd brd ff:ff:ff:ff:ff:ff
Mar 14 00:23:07.211835 waagent[1925]: 3: enP15929s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:49:a7:fd brd ff:ff:ff:ff:ff:ff\ altname enP15929p0s2
Mar 14 00:23:07.211835 waagent[1925]: Executing ['ip', '-4', '-a', '-o', 'address']:
Mar 14 00:23:07.211835 waagent[1925]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Mar 14 00:23:07.211835 waagent[1925]: 2: eth0 inet 10.200.8.29/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Mar 14 00:23:07.211835 waagent[1925]: Executing ['ip', '-6', '-a', '-o', 'address']:
Mar 14 00:23:07.211835 waagent[1925]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Mar 14 00:23:07.211835 waagent[1925]: 2: eth0 inet6 fe80::7eed:8dff:fe49:a7fd/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Mar 14 00:23:07.241417 waagent[1925]: 2026-03-14T00:23:07.240336Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 1FB52895-A76A-4E79-99D4-0C18AD7533C5;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Mar 14 00:23:07.455291 waagent[1925]: 2026-03-14T00:23:07.455178Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Mar 14 00:23:07.455291 waagent[1925]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 14 00:23:07.455291 waagent[1925]: pkts bytes target prot opt in out source destination
Mar 14 00:23:07.455291 waagent[1925]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Mar 14 00:23:07.455291 waagent[1925]: pkts bytes target prot opt in out source destination
Mar 14 00:23:07.455291 waagent[1925]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 14 00:23:07.455291 waagent[1925]: pkts bytes target prot opt in out source destination
Mar 14 00:23:07.455291 waagent[1925]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Mar 14 00:23:07.455291 waagent[1925]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Mar 14 00:23:07.455291 waagent[1925]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Mar 14 00:23:07.458675 waagent[1925]: 2026-03-14T00:23:07.458613Z INFO EnvHandler ExtHandler Current Firewall rules:
Mar 14 00:23:07.458675 waagent[1925]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 14 00:23:07.458675 waagent[1925]: pkts bytes target prot opt in out source destination
Mar 14 00:23:07.458675 waagent[1925]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Mar 14 00:23:07.458675 waagent[1925]: pkts bytes target prot opt in out source destination
Mar 14 00:23:07.458675 waagent[1925]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 14 00:23:07.458675 waagent[1925]: pkts bytes target prot opt in out source destination
Mar 14 00:23:07.458675 waagent[1925]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Mar 14 00:23:07.458675 waagent[1925]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Mar 14 00:23:07.458675 waagent[1925]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Mar 14 00:23:07.459059 waagent[1925]: 2026-03-14T00:23:07.458944Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Mar 14 00:23:07.667699 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 14 00:23:07.672530 systemd[1]: Started sshd@0-10.200.8.29:22-10.200.16.10:45526.service - OpenSSH per-connection server daemon (10.200.16.10:45526).
Mar 14 00:23:08.366261 sshd[2171]: Accepted publickey for core from 10.200.16.10 port 45526 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:23:08.367002 sshd[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:08.372092 systemd-logind[1694]: New session 3 of user core.
Mar 14 00:23:08.379436 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 14 00:23:08.912124 systemd[1]: Started sshd@1-10.200.8.29:22-10.200.16.10:45538.service - OpenSSH per-connection server daemon (10.200.16.10:45538).
Mar 14 00:23:09.543341 sshd[2176]: Accepted publickey for core from 10.200.16.10 port 45538 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:23:09.544877 sshd[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:09.549975 systemd-logind[1694]: New session 4 of user core.
Mar 14 00:23:09.557460 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 14 00:23:09.988195 sshd[2176]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:09.992295 systemd-logind[1694]: Session 4 logged out. Waiting for processes to exit.
Mar 14 00:23:09.992675 systemd[1]: sshd@1-10.200.8.29:22-10.200.16.10:45538.service: Deactivated successfully.
Mar 14 00:23:09.994489 systemd[1]: session-4.scope: Deactivated successfully.
Mar 14 00:23:09.995391 systemd-logind[1694]: Removed session 4.
Mar 14 00:23:10.097128 systemd[1]: Started sshd@2-10.200.8.29:22-10.200.16.10:33478.service - OpenSSH per-connection server daemon (10.200.16.10:33478).
Mar 14 00:23:10.717555 sshd[2183]: Accepted publickey for core from 10.200.16.10 port 33478 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:23:10.719101 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:10.723735 systemd-logind[1694]: New session 5 of user core.
Mar 14 00:23:10.733402 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 14 00:23:11.154998 sshd[2183]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:11.159723 systemd-logind[1694]: Session 5 logged out. Waiting for processes to exit.
Mar 14 00:23:11.160468 systemd[1]: sshd@2-10.200.8.29:22-10.200.16.10:33478.service: Deactivated successfully.
Mar 14 00:23:11.162505 systemd[1]: session-5.scope: Deactivated successfully.
Mar 14 00:23:11.163389 systemd-logind[1694]: Removed session 5.
Mar 14 00:23:11.265270 systemd[1]: Started sshd@3-10.200.8.29:22-10.200.16.10:33484.service - OpenSSH per-connection server daemon (10.200.16.10:33484).
Mar 14 00:23:11.885764 sshd[2190]: Accepted publickey for core from 10.200.16.10 port 33484 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:23:11.887305 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:11.891305 systemd-logind[1694]: New session 6 of user core.
Mar 14 00:23:11.901399 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 14 00:23:12.328070 sshd[2190]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:12.331394 systemd[1]: sshd@3-10.200.8.29:22-10.200.16.10:33484.service: Deactivated successfully.
Mar 14 00:23:12.333605 systemd[1]: session-6.scope: Deactivated successfully.
Mar 14 00:23:12.335008 systemd-logind[1694]: Session 6 logged out. Waiting for processes to exit.
Mar 14 00:23:12.336051 systemd-logind[1694]: Removed session 6.
Mar 14 00:23:12.438139 systemd[1]: Started sshd@4-10.200.8.29:22-10.200.16.10:33488.service - OpenSSH per-connection server daemon (10.200.16.10:33488).
Mar 14 00:23:12.925202 update_engine[1700]: I20260314 00:23:12.925086 1700 update_attempter.cc:509] Updating boot flags...
Mar 14 00:23:12.997315 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2211)
Mar 14 00:23:13.064865 sshd[2197]: Accepted publickey for core from 10.200.16.10 port 33488 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:23:13.070794 sshd[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:13.098258 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2215)
Mar 14 00:23:13.110340 systemd-logind[1694]: New session 7 of user core.
Mar 14 00:23:13.114242 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 14 00:23:13.207261 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2215)
Mar 14 00:23:13.464658 sudo[2293]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 14 00:23:13.465030 sudo[2293]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:23:13.483609 sudo[2293]: pam_unix(sudo:session): session closed for user root
Mar 14 00:23:13.584395 sshd[2197]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:13.587846 systemd[1]: sshd@4-10.200.8.29:22-10.200.16.10:33488.service: Deactivated successfully.
Mar 14 00:23:13.589815 systemd[1]: session-7.scope: Deactivated successfully.
Mar 14 00:23:13.591433 systemd-logind[1694]: Session 7 logged out. Waiting for processes to exit.
Mar 14 00:23:13.592436 systemd-logind[1694]: Removed session 7.
Mar 14 00:23:13.694324 systemd[1]: Started sshd@5-10.200.8.29:22-10.200.16.10:33494.service - OpenSSH per-connection server daemon (10.200.16.10:33494).
Mar 14 00:23:14.051300 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Mar 14 00:23:14.317666 sshd[2298]: Accepted publickey for core from 10.200.16.10 port 33494 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:23:14.319198 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:14.323938 systemd-logind[1694]: New session 8 of user core.
Mar 14 00:23:14.335369 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 14 00:23:14.661057 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 14 00:23:14.661447 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:23:14.664948 sudo[2302]: pam_unix(sudo:session): session closed for user root
Mar 14 00:23:14.670036 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 14 00:23:14.670401 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:23:14.689576 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 14 00:23:14.691312 auditctl[2305]: No rules
Mar 14 00:23:14.692595 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 14 00:23:14.692836 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 14 00:23:14.694870 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:23:14.722026 augenrules[2323]: No rules
Mar 14 00:23:14.723508 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:23:14.725589 sudo[2301]: pam_unix(sudo:session): session closed for user root
Mar 14 00:23:14.825566 sshd[2298]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:14.828449 systemd[1]: sshd@5-10.200.8.29:22-10.200.16.10:33494.service: Deactivated successfully.
Mar 14 00:23:14.830462 systemd[1]: session-8.scope: Deactivated successfully.
Mar 14 00:23:14.831876 systemd-logind[1694]: Session 8 logged out. Waiting for processes to exit.
Mar 14 00:23:14.833013 systemd-logind[1694]: Removed session 8.
Mar 14 00:23:14.936364 systemd[1]: Started sshd@6-10.200.8.29:22-10.200.16.10:33504.service - OpenSSH per-connection server daemon (10.200.16.10:33504).
Mar 14 00:23:15.558804 sshd[2331]: Accepted publickey for core from 10.200.16.10 port 33504 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:23:15.560326 sshd[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:15.565294 systemd-logind[1694]: New session 9 of user core.
Mar 14 00:23:15.571402 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 14 00:23:15.902828 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 14 00:23:15.903197 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:23:17.183402 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 14 00:23:17.189474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:17.380880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:17.391549 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:23:17.425615 kubelet[2353]: E0314 00:23:17.425560 2353 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:23:17.428008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:23:17.428244 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:23:23.437538 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 14 00:23:23.439174 (dockerd)[2365]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 14 00:23:25.884154 dockerd[2365]: time="2026-03-14T00:23:25.884084616Z" level=info msg="Starting up"
Mar 14 00:23:27.433315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 14 00:23:27.439473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:27.648786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:27.662553 (kubelet)[2392]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:23:28.304488 kubelet[2392]: E0314 00:23:28.304387 2392 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:23:28.307023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:23:28.307258 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:23:32.339710 dockerd[2365]: time="2026-03-14T00:23:32.339141475Z" level=info msg="Loading containers: start."
Mar 14 00:23:32.806258 kernel: Initializing XFRM netlink socket
Mar 14 00:23:32.933165 systemd-networkd[1514]: docker0: Link UP
Mar 14 00:23:33.256612 dockerd[2365]: time="2026-03-14T00:23:33.256561724Z" level=info msg="Loading containers: done."
Mar 14 00:23:34.032422 dockerd[2365]: time="2026-03-14T00:23:34.032357157Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 14 00:23:34.032950 dockerd[2365]: time="2026-03-14T00:23:34.032510659Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 14 00:23:34.032950 dockerd[2365]: time="2026-03-14T00:23:34.032671662Z" level=info msg="Daemon has completed initialization"
Mar 14 00:23:34.301878 dockerd[2365]: time="2026-03-14T00:23:34.300357648Z" level=info msg="API listen on /run/docker.sock"
Mar 14 00:23:34.300603 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 14 00:23:34.865686 containerd[1710]: time="2026-03-14T00:23:34.865636289Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 14 00:23:35.635796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1862175168.mount: Deactivated successfully.
Mar 14 00:23:38.433742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 14 00:23:38.440431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:38.599617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:38.606271 (kubelet)[2581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:23:39.254105 kubelet[2581]: E0314 00:23:39.254044 2581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:23:39.256532 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:23:39.256755 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:23:42.670074 containerd[1710]: time="2026-03-14T00:23:42.670015040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:42.675582 containerd[1710]: time="2026-03-14T00:23:42.675512959Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116194"
Mar 14 00:23:42.678100 containerd[1710]: time="2026-03-14T00:23:42.678041813Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:42.682710 containerd[1710]: time="2026-03-14T00:23:42.682645512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:42.683941 containerd[1710]: time="2026-03-14T00:23:42.683716535Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 7.818035745s"
Mar 14 00:23:42.683941 containerd[1710]: time="2026-03-14T00:23:42.683760236Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 14 00:23:42.684738 containerd[1710]: time="2026-03-14T00:23:42.684714257Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 14 00:23:47.461545 containerd[1710]: time="2026-03-14T00:23:47.461488401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:47.464136 containerd[1710]: time="2026-03-14T00:23:47.464078739Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021818"
Mar 14 00:23:47.467371 containerd[1710]: time="2026-03-14T00:23:47.467319686Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:47.483101 containerd[1710]: time="2026-03-14T00:23:47.482855313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:47.484130 containerd[1710]: time="2026-03-14T00:23:47.483985430Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 4.799215172s"
Mar 14 00:23:47.484130 containerd[1710]: time="2026-03-14T00:23:47.484027130Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 14 00:23:47.484959 containerd[1710]: time="2026-03-14T00:23:47.484675140Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 14 00:23:49.433472 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 14 00:23:49.439470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:49.555189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:49.567570 (kubelet)[2600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:23:49.603116 kubelet[2600]: E0314 00:23:49.603046 2600 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:23:49.605374 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:23:49.605589 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:23:56.868808 containerd[1710]: time="2026-03-14T00:23:56.868684177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:56.874258 containerd[1710]: time="2026-03-14T00:23:56.874193761Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162754"
Mar 14 00:23:56.877781 containerd[1710]: time="2026-03-14T00:23:56.877720915Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:56.883036 containerd[1710]: time="2026-03-14T00:23:56.882892094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:56.884073 containerd[1710]: time="2026-03-14T00:23:56.883905709Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 9.399196569s"
Mar 14 00:23:56.884073 containerd[1710]: time="2026-03-14T00:23:56.883944310Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 14 00:23:56.884906 containerd[1710]: time="2026-03-14T00:23:56.884877324Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 14 00:23:57.985843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204316934.mount: Deactivated successfully.
Mar 14 00:23:58.548358 containerd[1710]: time="2026-03-14T00:23:58.548302732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:58.554160 containerd[1710]: time="2026-03-14T00:23:58.553978418Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828655" Mar 14 00:23:58.557348 containerd[1710]: time="2026-03-14T00:23:58.557307369Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:58.563331 containerd[1710]: time="2026-03-14T00:23:58.563274459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:58.564294 containerd[1710]: time="2026-03-14T00:23:58.563879469Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.678960544s" Mar 14 00:23:58.564294 containerd[1710]: time="2026-03-14T00:23:58.563920469Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 14 00:23:58.564745 containerd[1710]: time="2026-03-14T00:23:58.564719481Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 14 00:23:59.199649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3333883476.mount: Deactivated successfully. 
Mar 14 00:23:59.683908 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Mar 14 00:23:59.693569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:23:59.844389 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:23:59.856862 (kubelet)[2654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:23:59.900947 kubelet[2654]: E0314 00:23:59.900889 2654 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:23:59.903652 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:23:59.903887 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 14 00:24:01.275854 containerd[1710]: time="2026-03-14T00:24:01.275797228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:01.279445 containerd[1710]: time="2026-03-14T00:24:01.279208680Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Mar 14 00:24:01.284551 containerd[1710]: time="2026-03-14T00:24:01.284251357Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:01.289453 containerd[1710]: time="2026-03-14T00:24:01.289382835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:01.290571 containerd[1710]: time="2026-03-14T00:24:01.290527852Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.72577547s" Mar 14 00:24:01.290664 containerd[1710]: time="2026-03-14T00:24:01.290575853Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 14 00:24:01.291416 containerd[1710]: time="2026-03-14T00:24:01.291073261Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 14 00:24:01.899556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4155128864.mount: Deactivated successfully. 
Mar 14 00:24:01.918173 containerd[1710]: time="2026-03-14T00:24:01.918118401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:01.920870 containerd[1710]: time="2026-03-14T00:24:01.920808042Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Mar 14 00:24:01.923983 containerd[1710]: time="2026-03-14T00:24:01.923930489Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:01.928338 containerd[1710]: time="2026-03-14T00:24:01.928282955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:01.929585 containerd[1710]: time="2026-03-14T00:24:01.929018467Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 637.909105ms" Mar 14 00:24:01.929585 containerd[1710]: time="2026-03-14T00:24:01.929058567Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 14 00:24:01.929863 containerd[1710]: time="2026-03-14T00:24:01.929835379Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 14 00:24:02.580829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2938168287.mount: Deactivated successfully. 
Mar 14 00:24:04.133524 containerd[1710]: time="2026-03-14T00:24:04.133457376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:04.136860 containerd[1710]: time="2026-03-14T00:24:04.136644024Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718848" Mar 14 00:24:04.140434 containerd[1710]: time="2026-03-14T00:24:04.140045976Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:04.145349 containerd[1710]: time="2026-03-14T00:24:04.145309456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:04.146495 containerd[1710]: time="2026-03-14T00:24:04.146455473Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.216586994s" Mar 14 00:24:04.146636 containerd[1710]: time="2026-03-14T00:24:04.146613776Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 14 00:24:07.526111 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:24:07.532538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:24:07.564296 systemd[1]: Reloading requested from client PID 2781 ('systemctl') (unit session-9.scope)... Mar 14 00:24:07.564315 systemd[1]: Reloading... 
Mar 14 00:24:07.684364 zram_generator::config[2824]: No configuration found. Mar 14 00:24:07.817355 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:24:07.912392 systemd[1]: Reloading finished in 347 ms. Mar 14 00:24:07.964956 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 14 00:24:07.965058 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 14 00:24:07.965451 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:24:07.970594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:24:08.318430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:24:08.325044 (kubelet)[2891]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:24:08.359836 kubelet[2891]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:24:08.359836 kubelet[2891]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:24:08.359836 kubelet[2891]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 14 00:24:08.360346 kubelet[2891]: I0314 00:24:08.359893 2891 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:24:08.646501 kubelet[2891]: I0314 00:24:08.646385 2891 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 14 00:24:08.646501 kubelet[2891]: I0314 00:24:08.646416 2891 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:24:08.647046 kubelet[2891]: I0314 00:24:08.646702 2891 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:24:09.084715 kubelet[2891]: I0314 00:24:09.084155 2891 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:24:09.085553 kubelet[2891]: E0314 00:24:09.085289 2891 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.29:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:24:09.092794 kubelet[2891]: E0314 00:24:09.092753 2891 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:24:09.092794 kubelet[2891]: I0314 00:24:09.092785 2891 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 14 00:24:09.096392 kubelet[2891]: I0314 00:24:09.096371 2891 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 14 00:24:09.097191 kubelet[2891]: I0314 00:24:09.097148 2891 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:24:09.097385 kubelet[2891]: I0314 00:24:09.097190 2891 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-2b39e14e44","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:24:09.097536 kubelet[2891]: I0314 00:24:09.097391 2891 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 
00:24:09.097536 kubelet[2891]: I0314 00:24:09.097406 2891 container_manager_linux.go:303] "Creating device plugin manager" Mar 14 00:24:09.097619 kubelet[2891]: I0314 00:24:09.097555 2891 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:24:09.101631 kubelet[2891]: I0314 00:24:09.101608 2891 kubelet.go:480] "Attempting to sync node with API server" Mar 14 00:24:09.101717 kubelet[2891]: I0314 00:24:09.101633 2891 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:24:09.101717 kubelet[2891]: I0314 00:24:09.101663 2891 kubelet.go:386] "Adding apiserver pod source" Mar 14 00:24:09.101717 kubelet[2891]: I0314 00:24:09.101688 2891 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:24:09.108145 kubelet[2891]: E0314 00:24:09.107883 2891 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:24:09.108145 kubelet[2891]: E0314 00:24:09.108005 2891 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-2b39e14e44&limit=500&resourceVersion=0\": dial tcp 10.200.8.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:24:09.109018 kubelet[2891]: I0314 00:24:09.108189 2891 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:24:09.109018 kubelet[2891]: I0314 00:24:09.108806 2891 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 
00:24:09.110602 kubelet[2891]: W0314 00:24:09.109638 2891 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 14 00:24:09.114303 kubelet[2891]: I0314 00:24:09.114241 2891 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 14 00:24:09.114303 kubelet[2891]: I0314 00:24:09.114295 2891 server.go:1289] "Started kubelet" Mar 14 00:24:09.118699 kubelet[2891]: I0314 00:24:09.118284 2891 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:24:09.119985 kubelet[2891]: E0314 00:24:09.118105 2891 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.29:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.29:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-2b39e14e44.189c8d6c3222f08e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-2b39e14e44,UID:ci-4081.3.6-n-2b39e14e44,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-2b39e14e44,},FirstTimestamp:2026-03-14 00:24:09.11426779 +0000 UTC m=+0.785621712,LastTimestamp:2026-03-14 00:24:09.11426779 +0000 UTC m=+0.785621712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-2b39e14e44,}" Mar 14 00:24:09.122097 kubelet[2891]: E0314 00:24:09.122076 2891 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:24:09.124075 kubelet[2891]: I0314 00:24:09.123991 2891 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:24:09.125242 kubelet[2891]: I0314 00:24:09.125201 2891 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:24:09.127863 kubelet[2891]: I0314 00:24:09.127846 2891 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 14 00:24:09.128205 kubelet[2891]: E0314 00:24:09.128182 2891 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-2b39e14e44\" not found" Mar 14 00:24:09.129602 kubelet[2891]: I0314 00:24:09.129547 2891 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:24:09.130284 kubelet[2891]: I0314 00:24:09.129822 2891 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:24:09.130284 kubelet[2891]: I0314 00:24:09.130071 2891 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:24:09.132981 kubelet[2891]: I0314 00:24:09.132324 2891 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 14 00:24:09.132981 kubelet[2891]: I0314 00:24:09.132376 2891 reconciler.go:26] "Reconciler: start to sync state" Mar 14 00:24:09.134361 kubelet[2891]: E0314 00:24:09.133300 2891 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2b39e14e44?timeout=10s\": dial tcp 10.200.8.29:6443: connect: connection refused" interval="200ms" Mar 14 00:24:09.134361 kubelet[2891]: E0314 00:24:09.134115 2891 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.200.8.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:24:09.135279 kubelet[2891]: I0314 00:24:09.135192 2891 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:24:09.135279 kubelet[2891]: I0314 00:24:09.135211 2891 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:24:09.135397 kubelet[2891]: I0314 00:24:09.135344 2891 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:24:09.147471 kubelet[2891]: I0314 00:24:09.147421 2891 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 14 00:24:09.148565 kubelet[2891]: I0314 00:24:09.148535 2891 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 14 00:24:09.148565 kubelet[2891]: I0314 00:24:09.148565 2891 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 14 00:24:09.148706 kubelet[2891]: I0314 00:24:09.148588 2891 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 14 00:24:09.148706 kubelet[2891]: I0314 00:24:09.148597 2891 kubelet.go:2436] "Starting kubelet main sync loop" Mar 14 00:24:09.148706 kubelet[2891]: E0314 00:24:09.148642 2891 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:24:09.157601 kubelet[2891]: E0314 00:24:09.157570 2891 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:24:09.175953 kubelet[2891]: I0314 00:24:09.175721 2891 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:24:09.175953 kubelet[2891]: I0314 00:24:09.175738 2891 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:24:09.175953 kubelet[2891]: I0314 00:24:09.175758 2891 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:24:09.180865 kubelet[2891]: I0314 00:24:09.180674 2891 policy_none.go:49] "None policy: Start" Mar 14 00:24:09.180865 kubelet[2891]: I0314 00:24:09.180691 2891 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 14 00:24:09.180865 kubelet[2891]: I0314 00:24:09.180702 2891 state_mem.go:35] "Initializing new in-memory state store" Mar 14 00:24:09.190139 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 14 00:24:09.204480 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 14 00:24:09.208589 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 14 00:24:09.217044 kubelet[2891]: E0314 00:24:09.217015 2891 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:24:09.217395 kubelet[2891]: I0314 00:24:09.217286 2891 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:24:09.217395 kubelet[2891]: I0314 00:24:09.217301 2891 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:24:09.217718 kubelet[2891]: I0314 00:24:09.217688 2891 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:24:09.220136 kubelet[2891]: E0314 00:24:09.219253 2891 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:24:09.220354 kubelet[2891]: E0314 00:24:09.220311 2891 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-2b39e14e44\" not found" Mar 14 00:24:09.261800 systemd[1]: Created slice kubepods-burstable-podf0d9056eb4ec0c71544f08717ea2fbd0.slice - libcontainer container kubepods-burstable-podf0d9056eb4ec0c71544f08717ea2fbd0.slice. Mar 14 00:24:09.272771 kubelet[2891]: E0314 00:24:09.272215 2891 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2b39e14e44\" not found" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.276591 systemd[1]: Created slice kubepods-burstable-pod348456414f40a7243a021da3ec812b61.slice - libcontainer container kubepods-burstable-pod348456414f40a7243a021da3ec812b61.slice. 
Mar 14 00:24:09.282352 kubelet[2891]: E0314 00:24:09.282325 2891 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2b39e14e44\" not found" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.284983 systemd[1]: Created slice kubepods-burstable-pod54e223ef917e6f2fd02bc511fc4d932d.slice - libcontainer container kubepods-burstable-pod54e223ef917e6f2fd02bc511fc4d932d.slice. Mar 14 00:24:09.286814 kubelet[2891]: E0314 00:24:09.286790 2891 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2b39e14e44\" not found" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.319372 kubelet[2891]: I0314 00:24:09.319326 2891 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.319742 kubelet[2891]: E0314 00:24:09.319713 2891 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.29:6443/api/v1/nodes\": dial tcp 10.200.8.29:6443: connect: connection refused" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.333936 kubelet[2891]: E0314 00:24:09.333902 2891 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2b39e14e44?timeout=10s\": dial tcp 10.200.8.29:6443: connect: connection refused" interval="400ms" Mar 14 00:24:09.433855 kubelet[2891]: I0314 00:24:09.433466 2891 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/348456414f40a7243a021da3ec812b61-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-2b39e14e44\" (UID: \"348456414f40a7243a021da3ec812b61\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.433855 kubelet[2891]: I0314 00:24:09.433518 2891 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/348456414f40a7243a021da3ec812b61-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-2b39e14e44\" (UID: \"348456414f40a7243a021da3ec812b61\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.433855 kubelet[2891]: I0314 00:24:09.433552 2891 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f0d9056eb4ec0c71544f08717ea2fbd0-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2b39e14e44\" (UID: \"f0d9056eb4ec0c71544f08717ea2fbd0\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.433855 kubelet[2891]: I0314 00:24:09.433576 2891 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f0d9056eb4ec0c71544f08717ea2fbd0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-2b39e14e44\" (UID: \"f0d9056eb4ec0c71544f08717ea2fbd0\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.433855 kubelet[2891]: I0314 00:24:09.433605 2891 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/348456414f40a7243a021da3ec812b61-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2b39e14e44\" (UID: \"348456414f40a7243a021da3ec812b61\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.434859 kubelet[2891]: I0314 00:24:09.433628 2891 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/348456414f40a7243a021da3ec812b61-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-2b39e14e44\" (UID: 
\"348456414f40a7243a021da3ec812b61\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.434859 kubelet[2891]: I0314 00:24:09.433651 2891 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54e223ef917e6f2fd02bc511fc4d932d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-2b39e14e44\" (UID: \"54e223ef917e6f2fd02bc511fc4d932d\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.434859 kubelet[2891]: I0314 00:24:09.433675 2891 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f0d9056eb4ec0c71544f08717ea2fbd0-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2b39e14e44\" (UID: \"f0d9056eb4ec0c71544f08717ea2fbd0\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.434859 kubelet[2891]: I0314 00:24:09.433700 2891 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/348456414f40a7243a021da3ec812b61-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2b39e14e44\" (UID: \"348456414f40a7243a021da3ec812b61\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.522525 kubelet[2891]: I0314 00:24:09.522492 2891 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.522886 kubelet[2891]: E0314 00:24:09.522846 2891 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.29:6443/api/v1/nodes\": dial tcp 10.200.8.29:6443: connect: connection refused" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.573394 containerd[1710]: time="2026-03-14T00:24:09.573347850Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-2b39e14e44,Uid:f0d9056eb4ec0c71544f08717ea2fbd0,Namespace:kube-system,Attempt:0,}" Mar 14 00:24:09.585150 containerd[1710]: time="2026-03-14T00:24:09.585109429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-2b39e14e44,Uid:348456414f40a7243a021da3ec812b61,Namespace:kube-system,Attempt:0,}" Mar 14 00:24:09.587883 containerd[1710]: time="2026-03-14T00:24:09.587722968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-2b39e14e44,Uid:54e223ef917e6f2fd02bc511fc4d932d,Namespace:kube-system,Attempt:0,}" Mar 14 00:24:09.735011 kubelet[2891]: E0314 00:24:09.734884 2891 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2b39e14e44?timeout=10s\": dial tcp 10.200.8.29:6443: connect: connection refused" interval="800ms" Mar 14 00:24:09.852439 kubelet[2891]: E0314 00:24:09.852328 2891 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.29:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.29:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-2b39e14e44.189c8d6c3222f08e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-2b39e14e44,UID:ci-4081.3.6-n-2b39e14e44,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-2b39e14e44,},FirstTimestamp:2026-03-14 00:24:09.11426779 +0000 UTC m=+0.785621712,LastTimestamp:2026-03-14 00:24:09.11426779 +0000 UTC m=+0.785621712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-2b39e14e44,}" Mar 14 00:24:09.925181 kubelet[2891]: I0314 00:24:09.925147 
2891 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.925538 kubelet[2891]: E0314 00:24:09.925506 2891 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.29:6443/api/v1/nodes\": dial tcp 10.200.8.29:6443: connect: connection refused" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:09.949735 kubelet[2891]: E0314 00:24:09.949693 2891 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:24:10.210702 kubelet[2891]: E0314 00:24:10.210660 2891 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-2b39e14e44&limit=500&resourceVersion=0\": dial tcp 10.200.8.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:24:10.258560 kubelet[2891]: E0314 00:24:10.258508 2891 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:24:10.516076 kubelet[2891]: E0314 00:24:10.515944 2891 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:24:10.536273 
kubelet[2891]: E0314 00:24:10.536206 2891 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2b39e14e44?timeout=10s\": dial tcp 10.200.8.29:6443: connect: connection refused" interval="1.6s" Mar 14 00:24:10.728017 kubelet[2891]: I0314 00:24:10.727982 2891 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:10.728403 kubelet[2891]: E0314 00:24:10.728366 2891 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.29:6443/api/v1/nodes\": dial tcp 10.200.8.29:6443: connect: connection refused" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:11.053677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount361165099.mount: Deactivated successfully. Mar 14 00:24:11.078166 containerd[1710]: time="2026-03-14T00:24:11.078116264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:24:11.081543 containerd[1710]: time="2026-03-14T00:24:11.081503015Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:24:11.084710 containerd[1710]: time="2026-03-14T00:24:11.084510361Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 14 00:24:11.087931 containerd[1710]: time="2026-03-14T00:24:11.087731110Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:24:11.091198 containerd[1710]: time="2026-03-14T00:24:11.091162062Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:24:11.094639 containerd[1710]: time="2026-03-14T00:24:11.094524613Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:24:11.098389 containerd[1710]: time="2026-03-14T00:24:11.098086667Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:24:11.102452 containerd[1710]: time="2026-03-14T00:24:11.102418232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:24:11.103345 containerd[1710]: time="2026-03-14T00:24:11.103314746Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.515524577s" Mar 14 00:24:11.107987 containerd[1710]: time="2026-03-14T00:24:11.107902016Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.522702985s" Mar 14 00:24:11.109086 containerd[1710]: time="2026-03-14T00:24:11.109058033Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.535624282s" Mar 14 00:24:11.225856 kubelet[2891]: E0314 00:24:11.225811 2891 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.29:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:24:11.375427 containerd[1710]: time="2026-03-14T00:24:11.374589059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:11.375427 containerd[1710]: time="2026-03-14T00:24:11.374654060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:11.375427 containerd[1710]: time="2026-03-14T00:24:11.374689860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:11.375427 containerd[1710]: time="2026-03-14T00:24:11.374786262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:11.381601 containerd[1710]: time="2026-03-14T00:24:11.381396062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:11.381601 containerd[1710]: time="2026-03-14T00:24:11.381449863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:11.381601 containerd[1710]: time="2026-03-14T00:24:11.381461963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:11.381601 containerd[1710]: time="2026-03-14T00:24:11.381531264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:11.387866 containerd[1710]: time="2026-03-14T00:24:11.387626956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:11.387866 containerd[1710]: time="2026-03-14T00:24:11.387685457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:11.387866 containerd[1710]: time="2026-03-14T00:24:11.387702858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:11.387866 containerd[1710]: time="2026-03-14T00:24:11.387790659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:11.416613 systemd[1]: Started cri-containerd-5cfb662b8998dd52cdfef1b59e39adf0ce02fb76e0cbdd29af6ea0a988501b35.scope - libcontainer container 5cfb662b8998dd52cdfef1b59e39adf0ce02fb76e0cbdd29af6ea0a988501b35. Mar 14 00:24:11.423297 systemd[1]: Started cri-containerd-9795b9b81dafdd23aa37668cd59c002097b44b0e403f8a06838d110b3164347a.scope - libcontainer container 9795b9b81dafdd23aa37668cd59c002097b44b0e403f8a06838d110b3164347a. Mar 14 00:24:11.425718 systemd[1]: Started cri-containerd-d3e7a5b3faf75bc60b2c28ee6579677af025c33148cb3a9d24d2d6b676a77321.scope - libcontainer container d3e7a5b3faf75bc60b2c28ee6579677af025c33148cb3a9d24d2d6b676a77321. 
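The three "Pulled image" records for registry.k8s.io/pause:3.8 a little further up finish with near-identical durations (1.515 s, 1.522 s, 1.535 s): three sandbox creations requested the same image concurrently and shared one underlying pull, so each caller reports its own wait on the single in-flight operation. A minimal sketch of that deduplication ("singleflight") pattern, in Python for illustration — the class and names here are hypothetical, not containerd's implementation:

```python
import threading

class PullDeduplicator:
    """Hypothetical singleflight-style cache: concurrent requests for the
    same image ref share one in-flight pull instead of pulling repeatedly."""

    def __init__(self, pull_fn):
        self._pull_fn = pull_fn      # performs the actual (slow) pull
        self._lock = threading.Lock()
        self._inflight = {}          # ref -> Event set when the pull finishes
        self._results = {}           # ref -> pulled image id

    def pull(self, ref):
        with self._lock:
            if ref in self._results:         # already pulled: return cached id
                return self._results[ref]
            event = self._inflight.get(ref)
            leader = event is None
            if leader:                       # first caller starts the pull
                event = threading.Event()
                self._inflight[ref] = event
        if leader:
            self._results[ref] = self._pull_fn(ref)
            event.set()                      # wake every waiting caller
        else:
            event.wait()                     # later callers just wait
        return self._results[ref]
```

Three concurrent `pull("registry.k8s.io/pause:3.8")` calls invoke `pull_fn` once; each caller observes a slightly different wait, much like the three log durations above.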
Mar 14 00:24:11.489985 containerd[1710]: time="2026-03-14T00:24:11.489806906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-2b39e14e44,Uid:f0d9056eb4ec0c71544f08717ea2fbd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cfb662b8998dd52cdfef1b59e39adf0ce02fb76e0cbdd29af6ea0a988501b35\"" Mar 14 00:24:11.507629 containerd[1710]: time="2026-03-14T00:24:11.507582775Z" level=info msg="CreateContainer within sandbox \"5cfb662b8998dd52cdfef1b59e39adf0ce02fb76e0cbdd29af6ea0a988501b35\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:24:11.513922 containerd[1710]: time="2026-03-14T00:24:11.513885371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-2b39e14e44,Uid:348456414f40a7243a021da3ec812b61,Namespace:kube-system,Attempt:0,} returns sandbox id \"9795b9b81dafdd23aa37668cd59c002097b44b0e403f8a06838d110b3164347a\"" Mar 14 00:24:11.520390 containerd[1710]: time="2026-03-14T00:24:11.520338369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-2b39e14e44,Uid:54e223ef917e6f2fd02bc511fc4d932d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3e7a5b3faf75bc60b2c28ee6579677af025c33148cb3a9d24d2d6b676a77321\"" Mar 14 00:24:11.521818 containerd[1710]: time="2026-03-14T00:24:11.521787591Z" level=info msg="CreateContainer within sandbox \"9795b9b81dafdd23aa37668cd59c002097b44b0e403f8a06838d110b3164347a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:24:11.530728 containerd[1710]: time="2026-03-14T00:24:11.530629225Z" level=info msg="CreateContainer within sandbox \"d3e7a5b3faf75bc60b2c28ee6579677af025c33148cb3a9d24d2d6b676a77321\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:24:11.569886 containerd[1710]: time="2026-03-14T00:24:11.569721717Z" level=info msg="CreateContainer within sandbox 
\"5cfb662b8998dd52cdfef1b59e39adf0ce02fb76e0cbdd29af6ea0a988501b35\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5c6bb12b1283ba2ef640052e5e93adf6613f7a21b49df0d2a1453f67e3bdfc44\"" Mar 14 00:24:11.570646 containerd[1710]: time="2026-03-14T00:24:11.570611531Z" level=info msg="StartContainer for \"5c6bb12b1283ba2ef640052e5e93adf6613f7a21b49df0d2a1453f67e3bdfc44\"" Mar 14 00:24:11.590110 containerd[1710]: time="2026-03-14T00:24:11.590059426Z" level=info msg="CreateContainer within sandbox \"d3e7a5b3faf75bc60b2c28ee6579677af025c33148cb3a9d24d2d6b676a77321\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"adadf6cb377eb1753c51c5a40365d042add0747a70807ce66894bd524c7f9e4e\"" Mar 14 00:24:11.592009 containerd[1710]: time="2026-03-14T00:24:11.590708535Z" level=info msg="StartContainer for \"adadf6cb377eb1753c51c5a40365d042add0747a70807ce66894bd524c7f9e4e\"" Mar 14 00:24:11.598346 containerd[1710]: time="2026-03-14T00:24:11.598197556Z" level=info msg="CreateContainer within sandbox \"9795b9b81dafdd23aa37668cd59c002097b44b0e403f8a06838d110b3164347a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"25a321ba6f1152a7e094f6c8dec19e731e80981e7f9b20c1ed730445174b0a11\"" Mar 14 00:24:11.599536 containerd[1710]: time="2026-03-14T00:24:11.598825866Z" level=info msg="StartContainer for \"25a321ba6f1152a7e094f6c8dec19e731e80981e7f9b20c1ed730445174b0a11\"" Mar 14 00:24:11.603451 systemd[1]: Started cri-containerd-5c6bb12b1283ba2ef640052e5e93adf6613f7a21b49df0d2a1453f67e3bdfc44.scope - libcontainer container 5c6bb12b1283ba2ef640052e5e93adf6613f7a21b49df0d2a1453f67e3bdfc44. Mar 14 00:24:11.638555 systemd[1]: Started cri-containerd-adadf6cb377eb1753c51c5a40365d042add0747a70807ce66894bd524c7f9e4e.scope - libcontainer container adadf6cb377eb1753c51c5a40365d042add0747a70807ce66894bd524c7f9e4e. 
Mar 14 00:24:11.655849 systemd[1]: Started cri-containerd-25a321ba6f1152a7e094f6c8dec19e731e80981e7f9b20c1ed730445174b0a11.scope - libcontainer container 25a321ba6f1152a7e094f6c8dec19e731e80981e7f9b20c1ed730445174b0a11. Mar 14 00:24:11.686577 containerd[1710]: time="2026-03-14T00:24:11.686311577Z" level=info msg="StartContainer for \"5c6bb12b1283ba2ef640052e5e93adf6613f7a21b49df0d2a1453f67e3bdfc44\" returns successfully" Mar 14 00:24:11.731691 containerd[1710]: time="2026-03-14T00:24:11.731324803Z" level=info msg="StartContainer for \"adadf6cb377eb1753c51c5a40365d042add0747a70807ce66894bd524c7f9e4e\" returns successfully" Mar 14 00:24:11.766912 containerd[1710]: time="2026-03-14T00:24:11.766863776Z" level=info msg="StartContainer for \"25a321ba6f1152a7e094f6c8dec19e731e80981e7f9b20c1ed730445174b0a11\" returns successfully" Mar 14 00:24:12.171882 kubelet[2891]: E0314 00:24:12.171840 2891 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2b39e14e44\" not found" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:12.178496 kubelet[2891]: E0314 00:24:12.178082 2891 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2b39e14e44\" not found" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:12.179306 kubelet[2891]: E0314 00:24:12.179284 2891 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2b39e14e44\" not found" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:12.332554 kubelet[2891]: I0314 00:24:12.331165 2891 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:13.182821 kubelet[2891]: E0314 00:24:13.182765 2891 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2b39e14e44\" not found" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:13.184864 
kubelet[2891]: E0314 00:24:13.184512 2891 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2b39e14e44\" not found" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:14.389333 kubelet[2891]: E0314 00:24:14.388888 2891 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-2b39e14e44\" not found" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:14.579343 kubelet[2891]: I0314 00:24:14.579289 2891 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:14.628920 kubelet[2891]: I0314 00:24:14.628874 2891 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:14.646290 kubelet[2891]: E0314 00:24:14.644522 2891 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-2b39e14e44\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:14.646290 kubelet[2891]: I0314 00:24:14.644564 2891 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:14.652154 kubelet[2891]: E0314 00:24:14.651871 2891 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-2b39e14e44\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:14.652154 kubelet[2891]: I0314 00:24:14.651909 2891 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:14.655121 kubelet[2891]: E0314 00:24:14.655074 2891 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-2b39e14e44\" is forbidden: no PriorityClass with name system-node-critical 
was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:14.661777 kubelet[2891]: I0314 00:24:14.661503 2891 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:14.684923 kubelet[2891]: E0314 00:24:14.684597 2891 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-2b39e14e44\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:15.112182 kubelet[2891]: I0314 00:24:15.112137 2891 apiserver.go:52] "Watching apiserver" Mar 14 00:24:15.133336 kubelet[2891]: I0314 00:24:15.133293 2891 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:24:15.370170 kubelet[2891]: I0314 00:24:15.369763 2891 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:15.386052 kubelet[2891]: I0314 00:24:15.386006 2891 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 14 00:24:17.608504 systemd[1]: Reloading requested from client PID 3169 ('systemctl') (unit session-9.scope)... Mar 14 00:24:17.608523 systemd[1]: Reloading... Mar 14 00:24:17.707259 zram_generator::config[3205]: No configuration found. Mar 14 00:24:17.855032 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:24:17.952606 systemd[1]: Reloading finished in 343 ms. Mar 14 00:24:17.994763 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:24:18.004790 systemd[1]: kubelet.service: Deactivated successfully. 
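The repeated "no PriorityClass with name system-node-critical was found" failures above occur because the mirror pods for the static control-plane pods request that priority class before the API server's built-in default classes have been created; the kubelet simply retries until they exist. For reference, the built-in class the retries eventually find is equivalent to this manifest (values match the upstream Kubernetes defaults; verify against your cluster):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: system-node-critical
value: 2000001000
description: Used for system critical pods that must not be moved from their current node.
```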
Mar 14 00:24:18.005080 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:24:18.011528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:24:18.217323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:24:18.224560 (kubelet)[3276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:24:18.264960 kubelet[3276]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:24:18.264960 kubelet[3276]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:24:18.264960 kubelet[3276]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
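The deprecation warnings above all point the same direction: move these flags into the kubelet's config file. Two of them have direct KubeletConfiguration equivalents (`containerRuntimeEndpoint`, `volumePluginDir`); the pause image has no kubelet-config field and instead belongs in containerd's `sandbox_image` setting on the CRI side, as the `--pod-infra-container-image` warning implies. A sketch of the config-file form — field names are from `kubelet.config.k8s.io/v1beta1`, but the socket and plugin-dir paths here are assumptions for this host, not values taken from the log:

```yaml
# KubeletConfiguration fragment (illustrative; paths assumed)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /var/lib/kubelet/volumeplugins
```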
Mar 14 00:24:18.265464 kubelet[3276]: I0314 00:24:18.265036 3276 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:24:18.272356 kubelet[3276]: I0314 00:24:18.272316 3276 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 14 00:24:18.272356 kubelet[3276]: I0314 00:24:18.272347 3276 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:24:18.272593 kubelet[3276]: I0314 00:24:18.272573 3276 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:24:18.273735 kubelet[3276]: I0314 00:24:18.273707 3276 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 14 00:24:18.276247 kubelet[3276]: I0314 00:24:18.275678 3276 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:24:18.278865 kubelet[3276]: E0314 00:24:18.278831 3276 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:24:18.278988 kubelet[3276]: I0314 00:24:18.278957 3276 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 14 00:24:18.282387 kubelet[3276]: I0314 00:24:18.282344 3276 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 14 00:24:18.282592 kubelet[3276]: I0314 00:24:18.282551 3276 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:24:18.282747 kubelet[3276]: I0314 00:24:18.282585 3276 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-2b39e14e44","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:24:18.282873 kubelet[3276]: I0314 00:24:18.282755 3276 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 
00:24:18.282873 kubelet[3276]: I0314 00:24:18.282768 3276 container_manager_linux.go:303] "Creating device plugin manager" Mar 14 00:24:18.282873 kubelet[3276]: I0314 00:24:18.282824 3276 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:24:18.283191 kubelet[3276]: I0314 00:24:18.283040 3276 kubelet.go:480] "Attempting to sync node with API server" Mar 14 00:24:18.283191 kubelet[3276]: I0314 00:24:18.283061 3276 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:24:18.283191 kubelet[3276]: I0314 00:24:18.283090 3276 kubelet.go:386] "Adding apiserver pod source" Mar 14 00:24:18.283191 kubelet[3276]: I0314 00:24:18.283110 3276 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:24:18.293373 kubelet[3276]: I0314 00:24:18.293352 3276 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:24:18.294243 kubelet[3276]: I0314 00:24:18.294150 3276 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:24:18.301686 kubelet[3276]: I0314 00:24:18.301669 3276 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 14 00:24:18.301876 kubelet[3276]: I0314 00:24:18.301832 3276 server.go:1289] "Started kubelet" Mar 14 00:24:18.304283 kubelet[3276]: I0314 00:24:18.304206 3276 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:24:18.306281 kubelet[3276]: I0314 00:24:18.304867 3276 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:24:18.306281 kubelet[3276]: I0314 00:24:18.305874 3276 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:24:18.309080 kubelet[3276]: I0314 00:24:18.309028 3276 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:24:18.309594 kubelet[3276]: I0314 00:24:18.309302 3276 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:24:18.309594 kubelet[3276]: I0314 00:24:18.309554 3276 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:24:18.310888 kubelet[3276]: I0314 00:24:18.310862 3276 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 14 00:24:18.311127 kubelet[3276]: E0314 00:24:18.311103 3276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-2b39e14e44\" not found" Mar 14 00:24:18.311888 kubelet[3276]: I0314 00:24:18.311795 3276 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 14 00:24:18.312056 kubelet[3276]: I0314 00:24:18.311965 3276 reconciler.go:26] "Reconciler: start to sync state" Mar 14 00:24:18.316125 kubelet[3276]: I0314 00:24:18.316082 3276 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:24:18.319247 kubelet[3276]: I0314 00:24:18.318962 3276 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:24:18.319247 kubelet[3276]: I0314 00:24:18.318996 3276 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:24:18.365964 kubelet[3276]: I0314 00:24:18.365918 3276 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 14 00:24:18.370272 kubelet[3276]: I0314 00:24:18.370218 3276 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 14 00:24:18.370272 kubelet[3276]: I0314 00:24:18.370269 3276 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 14 00:24:18.370461 kubelet[3276]: I0314 00:24:18.370291 3276 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 14 00:24:18.370461 kubelet[3276]: I0314 00:24:18.370301 3276 kubelet.go:2436] "Starting kubelet main sync loop" Mar 14 00:24:18.370461 kubelet[3276]: E0314 00:24:18.370350 3276 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:24:18.743308 kubelet[3276]: E0314 00:24:18.742941 3276 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 14 00:24:18.754006 kubelet[3276]: I0314 00:24:18.753948 3276 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:24:18.754006 kubelet[3276]: I0314 00:24:18.753976 3276 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:24:18.754006 kubelet[3276]: I0314 00:24:18.754000 3276 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:24:18.754329 kubelet[3276]: I0314 00:24:18.754158 3276 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 14 00:24:18.754329 kubelet[3276]: I0314 00:24:18.754172 3276 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 14 00:24:18.754329 kubelet[3276]: I0314 00:24:18.754194 3276 policy_none.go:49] "None policy: Start" Mar 14 00:24:18.754329 kubelet[3276]: I0314 00:24:18.754218 3276 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 14 00:24:18.754329 kubelet[3276]: I0314 00:24:18.754258 3276 state_mem.go:35] "Initializing new in-memory state store" Mar 14 00:24:18.754515 kubelet[3276]: I0314 00:24:18.754380 3276 state_mem.go:75] "Updated machine memory state" Mar 14 00:24:18.762208 kubelet[3276]: E0314 00:24:18.760917 
3276 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:24:18.762208 kubelet[3276]: I0314 00:24:18.761129 3276 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:24:18.762208 kubelet[3276]: I0314 00:24:18.761148 3276 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:24:18.762208 kubelet[3276]: I0314 00:24:18.761515 3276 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:24:18.766990 kubelet[3276]: E0314 00:24:18.764317 3276 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:24:18.870532 kubelet[3276]: I0314 00:24:18.870393 3276 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:18.881648 kubelet[3276]: I0314 00:24:18.881592 3276 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:18.881861 kubelet[3276]: I0314 00:24:18.881780 3276 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:18.944427 kubelet[3276]: I0314 00:24:18.944146 3276 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:18.944427 kubelet[3276]: I0314 00:24:18.944252 3276 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:18.944427 kubelet[3276]: I0314 00:24:18.944146 3276 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:18.954064 kubelet[3276]: I0314 00:24:18.953806 3276 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is 
recommended: [must not contain dots]" Mar 14 00:24:18.958057 kubelet[3276]: I0314 00:24:18.958029 3276 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 14 00:24:18.958162 kubelet[3276]: E0314 00:24:18.958093 3276 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-2b39e14e44\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:18.958410 kubelet[3276]: I0314 00:24:18.958377 3276 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 14 00:24:19.046152 kubelet[3276]: I0314 00:24:19.045957 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f0d9056eb4ec0c71544f08717ea2fbd0-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2b39e14e44\" (UID: \"f0d9056eb4ec0c71544f08717ea2fbd0\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:19.046152 kubelet[3276]: I0314 00:24:19.046001 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f0d9056eb4ec0c71544f08717ea2fbd0-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2b39e14e44\" (UID: \"f0d9056eb4ec0c71544f08717ea2fbd0\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:19.046152 kubelet[3276]: I0314 00:24:19.046066 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f0d9056eb4ec0c71544f08717ea2fbd0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-2b39e14e44\" (UID: \"f0d9056eb4ec0c71544f08717ea2fbd0\") " 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:19.046152 kubelet[3276]: I0314 00:24:19.046136 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/348456414f40a7243a021da3ec812b61-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-2b39e14e44\" (UID: \"348456414f40a7243a021da3ec812b61\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:19.047215 kubelet[3276]: I0314 00:24:19.046167 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/348456414f40a7243a021da3ec812b61-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-2b39e14e44\" (UID: \"348456414f40a7243a021da3ec812b61\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:19.047215 kubelet[3276]: I0314 00:24:19.046192 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54e223ef917e6f2fd02bc511fc4d932d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-2b39e14e44\" (UID: \"54e223ef917e6f2fd02bc511fc4d932d\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:19.047215 kubelet[3276]: I0314 00:24:19.046211 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/348456414f40a7243a021da3ec812b61-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2b39e14e44\" (UID: \"348456414f40a7243a021da3ec812b61\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:19.047215 kubelet[3276]: I0314 00:24:19.046268 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/348456414f40a7243a021da3ec812b61-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2b39e14e44\" (UID: \"348456414f40a7243a021da3ec812b61\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:19.047215 kubelet[3276]: I0314 00:24:19.046291 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/348456414f40a7243a021da3ec812b61-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-2b39e14e44\" (UID: \"348456414f40a7243a021da3ec812b61\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:19.289494 kubelet[3276]: I0314 00:24:19.289451 3276 apiserver.go:52] "Watching apiserver" Mar 14 00:24:19.312822 kubelet[3276]: I0314 00:24:19.312714 3276 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:24:19.390163 kubelet[3276]: I0314 00:24:19.389982 3276 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:19.400799 kubelet[3276]: I0314 00:24:19.400761 3276 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 14 00:24:19.400974 kubelet[3276]: E0314 00:24:19.400828 3276 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-2b39e14e44\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2b39e14e44" Mar 14 00:24:19.422783 kubelet[3276]: I0314 00:24:19.422703 3276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2b39e14e44" podStartSLOduration=1.422682854 podStartE2EDuration="1.422682854s" podCreationTimestamp="2026-03-14 00:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-03-14 00:24:19.413524206 +0000 UTC m=+1.183200084" watchObservedRunningTime="2026-03-14 00:24:19.422682854 +0000 UTC m=+1.192358732" Mar 14 00:24:19.433244 kubelet[3276]: I0314 00:24:19.432841 3276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2b39e14e44" podStartSLOduration=4.432821517 podStartE2EDuration="4.432821517s" podCreationTimestamp="2026-03-14 00:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:24:19.42306086 +0000 UTC m=+1.192736638" watchObservedRunningTime="2026-03-14 00:24:19.432821517 +0000 UTC m=+1.202497295" Mar 14 00:24:19.433244 kubelet[3276]: I0314 00:24:19.433099 3276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2b39e14e44" podStartSLOduration=1.433075422 podStartE2EDuration="1.433075422s" podCreationTimestamp="2026-03-14 00:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:24:19.43238171 +0000 UTC m=+1.202057488" watchObservedRunningTime="2026-03-14 00:24:19.433075422 +0000 UTC m=+1.202751200" Mar 14 00:24:21.467804 kubelet[3276]: I0314 00:24:21.467406 3276 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:24:21.468712 containerd[1710]: time="2026-03-14T00:24:21.468138920Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 14 00:24:21.469180 kubelet[3276]: I0314 00:24:21.469151 3276 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:24:22.498448 systemd[1]: Created slice kubepods-besteffort-pod316afc0a_bcf9_4a3a_b8f1_537fba1b081e.slice - libcontainer container kubepods-besteffort-pod316afc0a_bcf9_4a3a_b8f1_537fba1b081e.slice. Mar 14 00:24:22.566696 kubelet[3276]: I0314 00:24:22.566486 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/316afc0a-bcf9-4a3a-b8f1-537fba1b081e-kube-proxy\") pod \"kube-proxy-qjpz6\" (UID: \"316afc0a-bcf9-4a3a-b8f1-537fba1b081e\") " pod="kube-system/kube-proxy-qjpz6" Mar 14 00:24:22.566696 kubelet[3276]: I0314 00:24:22.566544 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/316afc0a-bcf9-4a3a-b8f1-537fba1b081e-xtables-lock\") pod \"kube-proxy-qjpz6\" (UID: \"316afc0a-bcf9-4a3a-b8f1-537fba1b081e\") " pod="kube-system/kube-proxy-qjpz6" Mar 14 00:24:22.566696 kubelet[3276]: I0314 00:24:22.566572 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/316afc0a-bcf9-4a3a-b8f1-537fba1b081e-lib-modules\") pod \"kube-proxy-qjpz6\" (UID: \"316afc0a-bcf9-4a3a-b8f1-537fba1b081e\") " pod="kube-system/kube-proxy-qjpz6" Mar 14 00:24:22.566696 kubelet[3276]: I0314 00:24:22.566594 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnlgw\" (UniqueName: \"kubernetes.io/projected/316afc0a-bcf9-4a3a-b8f1-537fba1b081e-kube-api-access-gnlgw\") pod \"kube-proxy-qjpz6\" (UID: \"316afc0a-bcf9-4a3a-b8f1-537fba1b081e\") " pod="kube-system/kube-proxy-qjpz6" Mar 14 00:24:22.647519 systemd[1]: Created slice 
kubepods-besteffort-pod06631492_3bc8_4177_a70a_ce4a25ae98a5.slice - libcontainer container kubepods-besteffort-pod06631492_3bc8_4177_a70a_ce4a25ae98a5.slice. Mar 14 00:24:22.767871 kubelet[3276]: I0314 00:24:22.767687 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/06631492-3bc8-4177-a70a-ce4a25ae98a5-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-f76sf\" (UID: \"06631492-3bc8-4177-a70a-ce4a25ae98a5\") " pod="tigera-operator/tigera-operator-6bf85f8dd-f76sf" Mar 14 00:24:22.767871 kubelet[3276]: I0314 00:24:22.767747 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg8d8\" (UniqueName: \"kubernetes.io/projected/06631492-3bc8-4177-a70a-ce4a25ae98a5-kube-api-access-hg8d8\") pod \"tigera-operator-6bf85f8dd-f76sf\" (UID: \"06631492-3bc8-4177-a70a-ce4a25ae98a5\") " pod="tigera-operator/tigera-operator-6bf85f8dd-f76sf" Mar 14 00:24:22.808267 containerd[1710]: time="2026-03-14T00:24:22.808012496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qjpz6,Uid:316afc0a-bcf9-4a3a-b8f1-537fba1b081e,Namespace:kube-system,Attempt:0,}" Mar 14 00:24:22.855439 containerd[1710]: time="2026-03-14T00:24:22.855198731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:22.855439 containerd[1710]: time="2026-03-14T00:24:22.855340134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:22.855439 containerd[1710]: time="2026-03-14T00:24:22.855375934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:22.856440 containerd[1710]: time="2026-03-14T00:24:22.856254648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:22.883512 systemd[1]: Started cri-containerd-add6591310694d3b8d6f038088adebe086f1c9ebc349b6d5ff48a55fff5fa846.scope - libcontainer container add6591310694d3b8d6f038088adebe086f1c9ebc349b6d5ff48a55fff5fa846. Mar 14 00:24:22.910121 containerd[1710]: time="2026-03-14T00:24:22.910037986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qjpz6,Uid:316afc0a-bcf9-4a3a-b8f1-537fba1b081e,Namespace:kube-system,Attempt:0,} returns sandbox id \"add6591310694d3b8d6f038088adebe086f1c9ebc349b6d5ff48a55fff5fa846\"" Mar 14 00:24:22.919700 containerd[1710]: time="2026-03-14T00:24:22.919649435Z" level=info msg="CreateContainer within sandbox \"add6591310694d3b8d6f038088adebe086f1c9ebc349b6d5ff48a55fff5fa846\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:24:22.953825 containerd[1710]: time="2026-03-14T00:24:22.953778967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-f76sf,Uid:06631492-3bc8-4177-a70a-ce4a25ae98a5,Namespace:tigera-operator,Attempt:0,}" Mar 14 00:24:22.972069 containerd[1710]: time="2026-03-14T00:24:22.972019651Z" level=info msg="CreateContainer within sandbox \"add6591310694d3b8d6f038088adebe086f1c9ebc349b6d5ff48a55fff5fa846\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a5a89b3c7c93ae7032f150aef3385863dbfde56104dd138c087af2ca7b9e2e90\"" Mar 14 00:24:22.973096 containerd[1710]: time="2026-03-14T00:24:22.973049067Z" level=info msg="StartContainer for \"a5a89b3c7c93ae7032f150aef3385863dbfde56104dd138c087af2ca7b9e2e90\"" Mar 14 00:24:23.002442 systemd[1]: Started cri-containerd-a5a89b3c7c93ae7032f150aef3385863dbfde56104dd138c087af2ca7b9e2e90.scope - libcontainer container 
a5a89b3c7c93ae7032f150aef3385863dbfde56104dd138c087af2ca7b9e2e90. Mar 14 00:24:23.017340 containerd[1710]: time="2026-03-14T00:24:23.017069053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:23.017340 containerd[1710]: time="2026-03-14T00:24:23.017135154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:23.017340 containerd[1710]: time="2026-03-14T00:24:23.017188355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:23.018188 containerd[1710]: time="2026-03-14T00:24:23.018052569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:23.045685 systemd[1]: Started cri-containerd-1c7fcb3ab264072062f821bb94c5b971ec069799a8fafcea2f67df18eb2377ba.scope - libcontainer container 1c7fcb3ab264072062f821bb94c5b971ec069799a8fafcea2f67df18eb2377ba. 
Mar 14 00:24:23.054554 containerd[1710]: time="2026-03-14T00:24:23.054392635Z" level=info msg="StartContainer for \"a5a89b3c7c93ae7032f150aef3385863dbfde56104dd138c087af2ca7b9e2e90\" returns successfully" Mar 14 00:24:23.117089 containerd[1710]: time="2026-03-14T00:24:23.117049111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-f76sf,Uid:06631492-3bc8-4177-a70a-ce4a25ae98a5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1c7fcb3ab264072062f821bb94c5b971ec069799a8fafcea2f67df18eb2377ba\"" Mar 14 00:24:23.119984 containerd[1710]: time="2026-03-14T00:24:23.119945256Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 14 00:24:23.434261 kubelet[3276]: I0314 00:24:23.433523 3276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qjpz6" podStartSLOduration=1.433503042 podStartE2EDuration="1.433503042s" podCreationTimestamp="2026-03-14 00:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:24:23.433197037 +0000 UTC m=+5.202872815" watchObservedRunningTime="2026-03-14 00:24:23.433503042 +0000 UTC m=+5.203178820" Mar 14 00:24:24.491131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4126182213.mount: Deactivated successfully. 
Mar 14 00:24:26.658493 containerd[1710]: time="2026-03-14T00:24:26.658437088Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:26.661614 containerd[1710]: time="2026-03-14T00:24:26.661489235Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 14 00:24:26.664340 containerd[1710]: time="2026-03-14T00:24:26.664300679Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:26.669285 containerd[1710]: time="2026-03-14T00:24:26.669025553Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:26.670655 containerd[1710]: time="2026-03-14T00:24:26.670079969Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.550095012s" Mar 14 00:24:26.670655 containerd[1710]: time="2026-03-14T00:24:26.670117970Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 14 00:24:26.678164 containerd[1710]: time="2026-03-14T00:24:26.678130995Z" level=info msg="CreateContainer within sandbox \"1c7fcb3ab264072062f821bb94c5b971ec069799a8fafcea2f67df18eb2377ba\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 14 00:24:26.730027 containerd[1710]: time="2026-03-14T00:24:26.729978803Z" level=info msg="CreateContainer within sandbox 
\"1c7fcb3ab264072062f821bb94c5b971ec069799a8fafcea2f67df18eb2377ba\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0e9ee8fc3cabb5379a476766d34ee500ce466e998662c3b979e18d55022ce2c9\"" Mar 14 00:24:26.730736 containerd[1710]: time="2026-03-14T00:24:26.730682314Z" level=info msg="StartContainer for \"0e9ee8fc3cabb5379a476766d34ee500ce466e998662c3b979e18d55022ce2c9\"" Mar 14 00:24:26.766391 systemd[1]: Started cri-containerd-0e9ee8fc3cabb5379a476766d34ee500ce466e998662c3b979e18d55022ce2c9.scope - libcontainer container 0e9ee8fc3cabb5379a476766d34ee500ce466e998662c3b979e18d55022ce2c9. Mar 14 00:24:26.796906 containerd[1710]: time="2026-03-14T00:24:26.796862345Z" level=info msg="StartContainer for \"0e9ee8fc3cabb5379a476766d34ee500ce466e998662c3b979e18d55022ce2c9\" returns successfully" Mar 14 00:24:27.464909 kubelet[3276]: I0314 00:24:27.464615 3276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-f76sf" podStartSLOduration=1.9127033070000001 podStartE2EDuration="5.464594548s" podCreationTimestamp="2026-03-14 00:24:22 +0000 UTC" firstStartedPulling="2026-03-14 00:24:23.119192544 +0000 UTC m=+4.888868322" lastFinishedPulling="2026-03-14 00:24:26.671083785 +0000 UTC m=+8.440759563" observedRunningTime="2026-03-14 00:24:27.446616568 +0000 UTC m=+9.216292346" watchObservedRunningTime="2026-03-14 00:24:27.464594548 +0000 UTC m=+9.234270326" Mar 14 00:24:31.318204 sudo[2334]: pam_unix(sudo:session): session closed for user root Mar 14 00:24:31.422535 sshd[2331]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:31.426141 systemd-logind[1694]: Session 9 logged out. Waiting for processes to exit. Mar 14 00:24:31.427679 systemd[1]: sshd@6-10.200.8.29:22-10.200.16.10:33504.service: Deactivated successfully. Mar 14 00:24:31.431865 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 14 00:24:31.432719 systemd[1]: session-9.scope: Consumed 5.438s CPU time, 158.1M memory peak, 0B memory swap peak. Mar 14 00:24:31.437812 systemd-logind[1694]: Removed session 9. Mar 14 00:24:35.141145 systemd[1]: Created slice kubepods-besteffort-pod0062a846_6646_4776_ac84_23933a8bc878.slice - libcontainer container kubepods-besteffort-pod0062a846_6646_4776_ac84_23933a8bc878.slice. Mar 14 00:24:35.241705 systemd[1]: Created slice kubepods-besteffort-podc18a9dba_97f0_4325_a765_0fbdd273e8c8.slice - libcontainer container kubepods-besteffort-podc18a9dba_97f0_4325_a765_0fbdd273e8c8.slice. Mar 14 00:24:35.247750 kubelet[3276]: I0314 00:24:35.247717 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0062a846-6646-4776-ac84-23933a8bc878-tigera-ca-bundle\") pod \"calico-typha-5ccf48bbbb-br25c\" (UID: \"0062a846-6646-4776-ac84-23933a8bc878\") " pod="calico-system/calico-typha-5ccf48bbbb-br25c" Mar 14 00:24:35.248948 kubelet[3276]: I0314 00:24:35.247764 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0062a846-6646-4776-ac84-23933a8bc878-typha-certs\") pod \"calico-typha-5ccf48bbbb-br25c\" (UID: \"0062a846-6646-4776-ac84-23933a8bc878\") " pod="calico-system/calico-typha-5ccf48bbbb-br25c" Mar 14 00:24:35.248948 kubelet[3276]: I0314 00:24:35.247788 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgblz\" (UniqueName: \"kubernetes.io/projected/0062a846-6646-4776-ac84-23933a8bc878-kube-api-access-tgblz\") pod \"calico-typha-5ccf48bbbb-br25c\" (UID: \"0062a846-6646-4776-ac84-23933a8bc878\") " pod="calico-system/calico-typha-5ccf48bbbb-br25c" Mar 14 00:24:35.326210 kubelet[3276]: E0314 00:24:35.326159 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xrtx" podUID="c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a" Mar 14 00:24:35.348060 kubelet[3276]: I0314 00:24:35.348012 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c18a9dba-97f0-4325-a765-0fbdd273e8c8-tigera-ca-bundle\") pod \"calico-node-292v6\" (UID: \"c18a9dba-97f0-4325-a765-0fbdd273e8c8\") " pod="calico-system/calico-node-292v6" Mar 14 00:24:35.349249 kubelet[3276]: I0314 00:24:35.348327 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/c18a9dba-97f0-4325-a765-0fbdd273e8c8-sys-fs\") pod \"calico-node-292v6\" (UID: \"c18a9dba-97f0-4325-a765-0fbdd273e8c8\") " pod="calico-system/calico-node-292v6" Mar 14 00:24:35.349249 kubelet[3276]: I0314 00:24:35.348370 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsqfp\" (UniqueName: \"kubernetes.io/projected/c18a9dba-97f0-4325-a765-0fbdd273e8c8-kube-api-access-dsqfp\") pod \"calico-node-292v6\" (UID: \"c18a9dba-97f0-4325-a765-0fbdd273e8c8\") " pod="calico-system/calico-node-292v6" Mar 14 00:24:35.349249 kubelet[3276]: I0314 00:24:35.348394 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c18a9dba-97f0-4325-a765-0fbdd273e8c8-flexvol-driver-host\") pod \"calico-node-292v6\" (UID: \"c18a9dba-97f0-4325-a765-0fbdd273e8c8\") " pod="calico-system/calico-node-292v6" Mar 14 00:24:35.349249 kubelet[3276]: I0314 00:24:35.348416 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/c18a9dba-97f0-4325-a765-0fbdd273e8c8-lib-modules\") pod \"calico-node-292v6\" (UID: \"c18a9dba-97f0-4325-a765-0fbdd273e8c8\") " pod="calico-system/calico-node-292v6" Mar 14 00:24:35.349249 kubelet[3276]: I0314 00:24:35.348441 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a-registration-dir\") pod \"csi-node-driver-8xrtx\" (UID: \"c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a\") " pod="calico-system/csi-node-driver-8xrtx" Mar 14 00:24:35.349528 kubelet[3276]: I0314 00:24:35.348464 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c18a9dba-97f0-4325-a765-0fbdd273e8c8-cni-log-dir\") pod \"calico-node-292v6\" (UID: \"c18a9dba-97f0-4325-a765-0fbdd273e8c8\") " pod="calico-system/calico-node-292v6" Mar 14 00:24:35.349528 kubelet[3276]: I0314 00:24:35.348482 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c18a9dba-97f0-4325-a765-0fbdd273e8c8-xtables-lock\") pod \"calico-node-292v6\" (UID: \"c18a9dba-97f0-4325-a765-0fbdd273e8c8\") " pod="calico-system/calico-node-292v6" Mar 14 00:24:35.349528 kubelet[3276]: I0314 00:24:35.348502 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a-varrun\") pod \"csi-node-driver-8xrtx\" (UID: \"c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a\") " pod="calico-system/csi-node-driver-8xrtx" Mar 14 00:24:35.349528 kubelet[3276]: I0314 00:24:35.348526 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/c18a9dba-97f0-4325-a765-0fbdd273e8c8-nodeproc\") pod 
\"calico-node-292v6\" (UID: \"c18a9dba-97f0-4325-a765-0fbdd273e8c8\") " pod="calico-system/calico-node-292v6" Mar 14 00:24:35.349528 kubelet[3276]: I0314 00:24:35.348565 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/c18a9dba-97f0-4325-a765-0fbdd273e8c8-bpffs\") pod \"calico-node-292v6\" (UID: \"c18a9dba-97f0-4325-a765-0fbdd273e8c8\") " pod="calico-system/calico-node-292v6" Mar 14 00:24:35.349735 kubelet[3276]: I0314 00:24:35.348627 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c18a9dba-97f0-4325-a765-0fbdd273e8c8-cni-bin-dir\") pod \"calico-node-292v6\" (UID: \"c18a9dba-97f0-4325-a765-0fbdd273e8c8\") " pod="calico-system/calico-node-292v6" Mar 14 00:24:35.349735 kubelet[3276]: I0314 00:24:35.348654 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c18a9dba-97f0-4325-a765-0fbdd273e8c8-cni-net-dir\") pod \"calico-node-292v6\" (UID: \"c18a9dba-97f0-4325-a765-0fbdd273e8c8\") " pod="calico-system/calico-node-292v6" Mar 14 00:24:35.349735 kubelet[3276]: I0314 00:24:35.348675 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c18a9dba-97f0-4325-a765-0fbdd273e8c8-node-certs\") pod \"calico-node-292v6\" (UID: \"c18a9dba-97f0-4325-a765-0fbdd273e8c8\") " pod="calico-system/calico-node-292v6" Mar 14 00:24:35.349735 kubelet[3276]: I0314 00:24:35.348698 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c18a9dba-97f0-4325-a765-0fbdd273e8c8-policysync\") pod \"calico-node-292v6\" (UID: \"c18a9dba-97f0-4325-a765-0fbdd273e8c8\") " pod="calico-system/calico-node-292v6" Mar 
14 00:24:35.349735 kubelet[3276]: I0314 00:24:35.348722 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c18a9dba-97f0-4325-a765-0fbdd273e8c8-var-run-calico\") pod \"calico-node-292v6\" (UID: \"c18a9dba-97f0-4325-a765-0fbdd273e8c8\") " pod="calico-system/calico-node-292v6" Mar 14 00:24:35.349948 kubelet[3276]: I0314 00:24:35.348745 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a-kubelet-dir\") pod \"csi-node-driver-8xrtx\" (UID: \"c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a\") " pod="calico-system/csi-node-driver-8xrtx" Mar 14 00:24:35.349948 kubelet[3276]: I0314 00:24:35.348770 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a-socket-dir\") pod \"csi-node-driver-8xrtx\" (UID: \"c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a\") " pod="calico-system/csi-node-driver-8xrtx" Mar 14 00:24:35.349948 kubelet[3276]: I0314 00:24:35.348798 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjm8w\" (UniqueName: \"kubernetes.io/projected/c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a-kube-api-access-hjm8w\") pod \"csi-node-driver-8xrtx\" (UID: \"c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a\") " pod="calico-system/csi-node-driver-8xrtx" Mar 14 00:24:35.349948 kubelet[3276]: I0314 00:24:35.348854 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c18a9dba-97f0-4325-a765-0fbdd273e8c8-var-lib-calico\") pod \"calico-node-292v6\" (UID: \"c18a9dba-97f0-4325-a765-0fbdd273e8c8\") " pod="calico-system/calico-node-292v6" Mar 14 00:24:35.446495 containerd[1710]: 
time="2026-03-14T00:24:35.446450594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5ccf48bbbb-br25c,Uid:0062a846-6646-4776-ac84-23933a8bc878,Namespace:calico-system,Attempt:0,}" Mar 14 00:24:35.450900 kubelet[3276]: E0314 00:24:35.450774 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.450900 kubelet[3276]: W0314 00:24:35.450800 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.450900 kubelet[3276]: E0314 00:24:35.450827 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.451543 kubelet[3276]: E0314 00:24:35.451332 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.451543 kubelet[3276]: W0314 00:24:35.451347 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.451543 kubelet[3276]: E0314 00:24:35.451365 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.451967 kubelet[3276]: E0314 00:24:35.451950 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.451967 kubelet[3276]: W0314 00:24:35.451966 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.452202 kubelet[3276]: E0314 00:24:35.451983 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.452292 kubelet[3276]: E0314 00:24:35.452254 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.452292 kubelet[3276]: W0314 00:24:35.452266 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.452292 kubelet[3276]: E0314 00:24:35.452279 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.452594 kubelet[3276]: E0314 00:24:35.452524 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.452594 kubelet[3276]: W0314 00:24:35.452535 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.452594 kubelet[3276]: E0314 00:24:35.452547 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.452837 kubelet[3276]: E0314 00:24:35.452764 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.452837 kubelet[3276]: W0314 00:24:35.452775 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.452837 kubelet[3276]: E0314 00:24:35.452787 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.453105 kubelet[3276]: E0314 00:24:35.453086 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.453105 kubelet[3276]: W0314 00:24:35.453101 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.453280 kubelet[3276]: E0314 00:24:35.453115 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.453425 kubelet[3276]: E0314 00:24:35.453403 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.453488 kubelet[3276]: W0314 00:24:35.453438 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.453488 kubelet[3276]: E0314 00:24:35.453455 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.453727 kubelet[3276]: E0314 00:24:35.453709 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.453727 kubelet[3276]: W0314 00:24:35.453723 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.453990 kubelet[3276]: E0314 00:24:35.453745 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.454047 kubelet[3276]: E0314 00:24:35.453988 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.454136 kubelet[3276]: W0314 00:24:35.454000 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.454335 kubelet[3276]: E0314 00:24:35.454201 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.454603 kubelet[3276]: E0314 00:24:35.454583 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.454603 kubelet[3276]: W0314 00:24:35.454601 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.454848 kubelet[3276]: E0314 00:24:35.454615 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.454911 kubelet[3276]: E0314 00:24:35.454852 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.454911 kubelet[3276]: W0314 00:24:35.454863 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.454911 kubelet[3276]: E0314 00:24:35.454876 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.455142 kubelet[3276]: E0314 00:24:35.455116 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.455142 kubelet[3276]: W0314 00:24:35.455140 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.455308 kubelet[3276]: E0314 00:24:35.455156 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.455520 kubelet[3276]: E0314 00:24:35.455463 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.455520 kubelet[3276]: W0314 00:24:35.455477 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.455520 kubelet[3276]: E0314 00:24:35.455488 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.455792 kubelet[3276]: E0314 00:24:35.455725 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.455792 kubelet[3276]: W0314 00:24:35.455737 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.455792 kubelet[3276]: E0314 00:24:35.455751 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.456191 kubelet[3276]: E0314 00:24:35.456050 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.456191 kubelet[3276]: W0314 00:24:35.456065 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.456191 kubelet[3276]: E0314 00:24:35.456085 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.456552 kubelet[3276]: E0314 00:24:35.456305 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.456552 kubelet[3276]: W0314 00:24:35.456314 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.456552 kubelet[3276]: E0314 00:24:35.456323 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.456716 kubelet[3276]: E0314 00:24:35.456697 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.456716 kubelet[3276]: W0314 00:24:35.456712 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.456835 kubelet[3276]: E0314 00:24:35.456726 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.456962 kubelet[3276]: E0314 00:24:35.456946 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.456962 kubelet[3276]: W0314 00:24:35.456960 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.457065 kubelet[3276]: E0314 00:24:35.456973 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.457275 kubelet[3276]: E0314 00:24:35.457201 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.457275 kubelet[3276]: W0314 00:24:35.457259 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.457275 kubelet[3276]: E0314 00:24:35.457273 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.457773 kubelet[3276]: E0314 00:24:35.457643 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.457773 kubelet[3276]: W0314 00:24:35.457657 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.457773 kubelet[3276]: E0314 00:24:35.457671 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.458128 kubelet[3276]: E0314 00:24:35.457997 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.458128 kubelet[3276]: W0314 00:24:35.458020 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.458128 kubelet[3276]: E0314 00:24:35.458030 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.458367 kubelet[3276]: E0314 00:24:35.458261 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.458367 kubelet[3276]: W0314 00:24:35.458273 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.458367 kubelet[3276]: E0314 00:24:35.458285 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.458524 kubelet[3276]: E0314 00:24:35.458503 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.458524 kubelet[3276]: W0314 00:24:35.458516 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.458642 kubelet[3276]: E0314 00:24:35.458530 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.458790 kubelet[3276]: E0314 00:24:35.458774 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.458790 kubelet[3276]: W0314 00:24:35.458789 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.458982 kubelet[3276]: E0314 00:24:35.458802 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.459033 kubelet[3276]: E0314 00:24:35.459022 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.459126 kubelet[3276]: W0314 00:24:35.459032 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.459126 kubelet[3276]: E0314 00:24:35.459045 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.459322 kubelet[3276]: E0314 00:24:35.459306 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.459322 kubelet[3276]: W0314 00:24:35.459320 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.459475 kubelet[3276]: E0314 00:24:35.459334 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.459560 kubelet[3276]: E0314 00:24:35.459537 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.459560 kubelet[3276]: W0314 00:24:35.459548 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.459686 kubelet[3276]: E0314 00:24:35.459560 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.459933 kubelet[3276]: E0314 00:24:35.459915 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.459933 kubelet[3276]: W0314 00:24:35.459930 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.460045 kubelet[3276]: E0314 00:24:35.459944 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.460401 kubelet[3276]: E0314 00:24:35.460383 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.460401 kubelet[3276]: W0314 00:24:35.460398 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.462878 kubelet[3276]: E0314 00:24:35.460412 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.462878 kubelet[3276]: E0314 00:24:35.460657 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.462878 kubelet[3276]: W0314 00:24:35.460666 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.462878 kubelet[3276]: E0314 00:24:35.460675 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.462878 kubelet[3276]: E0314 00:24:35.460922 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.462878 kubelet[3276]: W0314 00:24:35.460933 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.462878 kubelet[3276]: E0314 00:24:35.460945 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.462878 kubelet[3276]: E0314 00:24:35.461305 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.462878 kubelet[3276]: W0314 00:24:35.461316 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.462878 kubelet[3276]: E0314 00:24:35.461330 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.463337 kubelet[3276]: E0314 00:24:35.461575 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.463337 kubelet[3276]: W0314 00:24:35.461586 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.463337 kubelet[3276]: E0314 00:24:35.461614 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.463337 kubelet[3276]: E0314 00:24:35.461856 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.463337 kubelet[3276]: W0314 00:24:35.461869 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.463337 kubelet[3276]: E0314 00:24:35.461881 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.463337 kubelet[3276]: E0314 00:24:35.462174 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.463337 kubelet[3276]: W0314 00:24:35.462184 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.463337 kubelet[3276]: E0314 00:24:35.462194 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.463337 kubelet[3276]: E0314 00:24:35.462481 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.463784 kubelet[3276]: W0314 00:24:35.462494 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.463784 kubelet[3276]: E0314 00:24:35.462507 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.463784 kubelet[3276]: E0314 00:24:35.462818 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.463784 kubelet[3276]: W0314 00:24:35.462830 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.463784 kubelet[3276]: E0314 00:24:35.462843 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.463784 kubelet[3276]: E0314 00:24:35.463037 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.463784 kubelet[3276]: W0314 00:24:35.463047 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.463784 kubelet[3276]: E0314 00:24:35.463059 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.463784 kubelet[3276]: E0314 00:24:35.463275 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.463784 kubelet[3276]: W0314 00:24:35.463286 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.464829 kubelet[3276]: E0314 00:24:35.463298 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.464829 kubelet[3276]: E0314 00:24:35.463486 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.464829 kubelet[3276]: W0314 00:24:35.463496 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.464829 kubelet[3276]: E0314 00:24:35.463507 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.464829 kubelet[3276]: E0314 00:24:35.463683 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.464829 kubelet[3276]: W0314 00:24:35.463691 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.464829 kubelet[3276]: E0314 00:24:35.463702 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.464829 kubelet[3276]: E0314 00:24:35.463888 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.464829 kubelet[3276]: W0314 00:24:35.463897 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.464829 kubelet[3276]: E0314 00:24:35.463909 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.465375 kubelet[3276]: E0314 00:24:35.464088 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.465375 kubelet[3276]: W0314 00:24:35.464097 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.465375 kubelet[3276]: E0314 00:24:35.464108 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.465375 kubelet[3276]: E0314 00:24:35.464392 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.465375 kubelet[3276]: W0314 00:24:35.464403 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.465375 kubelet[3276]: E0314 00:24:35.464416 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.465375 kubelet[3276]: E0314 00:24:35.464654 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.465375 kubelet[3276]: W0314 00:24:35.464667 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.465375 kubelet[3276]: E0314 00:24:35.464681 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.465375 kubelet[3276]: E0314 00:24:35.465016 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.467850 kubelet[3276]: W0314 00:24:35.465027 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.467850 kubelet[3276]: E0314 00:24:35.465041 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.467850 kubelet[3276]: E0314 00:24:35.465276 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.467850 kubelet[3276]: W0314 00:24:35.465287 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.467850 kubelet[3276]: E0314 00:24:35.465299 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.467850 kubelet[3276]: E0314 00:24:35.465508 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.467850 kubelet[3276]: W0314 00:24:35.465518 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.467850 kubelet[3276]: E0314 00:24:35.465531 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.467850 kubelet[3276]: E0314 00:24:35.465771 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.467850 kubelet[3276]: W0314 00:24:35.465783 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.469719 kubelet[3276]: E0314 00:24:35.465796 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.469719 kubelet[3276]: E0314 00:24:35.466342 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.469719 kubelet[3276]: W0314 00:24:35.466353 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.469719 kubelet[3276]: E0314 00:24:35.466366 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:24:35.469719 kubelet[3276]: E0314 00:24:35.466564 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:24:35.469719 kubelet[3276]: W0314 00:24:35.466574 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:24:35.469719 kubelet[3276]: E0314 00:24:35.466585 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:24:35.525938 containerd[1710]: time="2026-03-14T00:24:35.525417410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:35.525938 containerd[1710]: time="2026-03-14T00:24:35.525479411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:35.525938 containerd[1710]: time="2026-03-14T00:24:35.525498911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:35.526082 containerd[1710]: time="2026-03-14T00:24:35.525955318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:35.545782 containerd[1710]: time="2026-03-14T00:24:35.545741323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-292v6,Uid:c18a9dba-97f0-4325-a765-0fbdd273e8c8,Namespace:calico-system,Attempt:0,}" Mar 14 00:24:35.547469 systemd[1]: Started cri-containerd-ab6c774544b4ad48440c507c78d1a77ccd4c3d34af602e59291d496d7b70b117.scope - libcontainer container ab6c774544b4ad48440c507c78d1a77ccd4c3d34af602e59291d496d7b70b117. Mar 14 00:24:35.605258 containerd[1710]: time="2026-03-14T00:24:35.602281293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:24:35.605258 containerd[1710]: time="2026-03-14T00:24:35.602374195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:24:35.605258 containerd[1710]: time="2026-03-14T00:24:35.602634599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:35.608328 containerd[1710]: time="2026-03-14T00:24:35.606057551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:24:35.621253 containerd[1710]: time="2026-03-14T00:24:35.621161979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5ccf48bbbb-br25c,Uid:0062a846-6646-4776-ac84-23933a8bc878,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab6c774544b4ad48440c507c78d1a77ccd4c3d34af602e59291d496d7b70b117\"" Mar 14 00:24:35.625036 containerd[1710]: time="2026-03-14T00:24:35.624755629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 14 00:24:35.639442 systemd[1]: Started cri-containerd-db2353bc90f487c9738ee8e23c1814743d69c45d8c4e61aabde2bd8fb3e37f6e.scope - libcontainer container db2353bc90f487c9738ee8e23c1814743d69c45d8c4e61aabde2bd8fb3e37f6e. Mar 14 00:24:35.665015 containerd[1710]: time="2026-03-14T00:24:35.664969687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-292v6,Uid:c18a9dba-97f0-4325-a765-0fbdd273e8c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"db2353bc90f487c9738ee8e23c1814743d69c45d8c4e61aabde2bd8fb3e37f6e\"" Mar 14 00:24:36.871988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2399608260.mount: Deactivated successfully. 
Mar 14 00:24:37.371459 kubelet[3276]: E0314 00:24:37.371405 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xrtx" podUID="c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a" Mar 14 00:24:38.327237 containerd[1710]: time="2026-03-14T00:24:38.327192248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:38.330909 containerd[1710]: time="2026-03-14T00:24:38.330852899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 14 00:24:38.333671 containerd[1710]: time="2026-03-14T00:24:38.333391634Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:38.338171 containerd[1710]: time="2026-03-14T00:24:38.338135000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:38.342258 containerd[1710]: time="2026-03-14T00:24:38.340641035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.715834705s" Mar 14 00:24:38.342258 containerd[1710]: time="2026-03-14T00:24:38.340685235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 14 00:24:38.347498 containerd[1710]: time="2026-03-14T00:24:38.347455429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 14 00:24:38.372718 containerd[1710]: time="2026-03-14T00:24:38.372674580Z" level=info msg="CreateContainer within sandbox \"ab6c774544b4ad48440c507c78d1a77ccd4c3d34af602e59291d496d7b70b117\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 14 00:24:38.397949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2410079015.mount: Deactivated successfully. Mar 14 00:24:38.408426 containerd[1710]: time="2026-03-14T00:24:38.408372775Z" level=info msg="CreateContainer within sandbox \"ab6c774544b4ad48440c507c78d1a77ccd4c3d34af602e59291d496d7b70b117\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a73e21ac70390d2261d91ef189028a2f2f07cc014456fb93b440d0f629c9f466\"" Mar 14 00:24:38.409195 containerd[1710]: time="2026-03-14T00:24:38.409137386Z" level=info msg="StartContainer for \"a73e21ac70390d2261d91ef189028a2f2f07cc014456fb93b440d0f629c9f466\"" Mar 14 00:24:38.444449 systemd[1]: Started cri-containerd-a73e21ac70390d2261d91ef189028a2f2f07cc014456fb93b440d0f629c9f466.scope - libcontainer container a73e21ac70390d2261d91ef189028a2f2f07cc014456fb93b440d0f629c9f466. 
Mar 14 00:24:38.491553 containerd[1710]: time="2026-03-14T00:24:38.491510329Z" level=info msg="StartContainer for \"a73e21ac70390d2261d91ef189028a2f2f07cc014456fb93b440d0f629c9f466\" returns successfully"
Mar 14 00:24:39.371028 kubelet[3276]: E0314 00:24:39.370981 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xrtx" podUID="c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a"
Mar 14 00:24:39.558745 kubelet[3276]: E0314 00:24:39.558551 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.558745 kubelet[3276]: W0314 00:24:39.558581 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.558745 kubelet[3276]: E0314 00:24:39.558623 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.559111 kubelet[3276]: E0314 00:24:39.559041 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.559111 kubelet[3276]: W0314 00:24:39.559057 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.559111 kubelet[3276]: E0314 00:24:39.559073 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.559834 kubelet[3276]: E0314 00:24:39.559677 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.559834 kubelet[3276]: W0314 00:24:39.559694 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.559834 kubelet[3276]: E0314 00:24:39.559735 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.560093 kubelet[3276]: E0314 00:24:39.560075 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.560093 kubelet[3276]: W0314 00:24:39.560091 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.560201 kubelet[3276]: E0314 00:24:39.560182 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.560724 kubelet[3276]: E0314 00:24:39.560703 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.560823 kubelet[3276]: W0314 00:24:39.560722 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.560823 kubelet[3276]: E0314 00:24:39.560749 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.561080 kubelet[3276]: E0314 00:24:39.561063 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.561080 kubelet[3276]: W0314 00:24:39.561077 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.561185 kubelet[3276]: E0314 00:24:39.561090 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.561409 kubelet[3276]: E0314 00:24:39.561392 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.561409 kubelet[3276]: W0314 00:24:39.561406 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.561761 kubelet[3276]: E0314 00:24:39.561420 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.562051 kubelet[3276]: E0314 00:24:39.562033 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.562051 kubelet[3276]: W0314 00:24:39.562051 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.562178 kubelet[3276]: E0314 00:24:39.562064 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.562361 kubelet[3276]: E0314 00:24:39.562340 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.562361 kubelet[3276]: W0314 00:24:39.562356 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.562466 kubelet[3276]: E0314 00:24:39.562369 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.563023 kubelet[3276]: E0314 00:24:39.562819 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.563023 kubelet[3276]: W0314 00:24:39.562834 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.563023 kubelet[3276]: E0314 00:24:39.562847 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.563523 kubelet[3276]: E0314 00:24:39.563334 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.563523 kubelet[3276]: W0314 00:24:39.563349 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.563523 kubelet[3276]: E0314 00:24:39.563376 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.564021 kubelet[3276]: E0314 00:24:39.563780 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.564021 kubelet[3276]: W0314 00:24:39.563794 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.564021 kubelet[3276]: E0314 00:24:39.563807 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.564576 kubelet[3276]: E0314 00:24:39.564356 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.564576 kubelet[3276]: W0314 00:24:39.564369 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.564576 kubelet[3276]: E0314 00:24:39.564394 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.565013 kubelet[3276]: E0314 00:24:39.564889 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.565013 kubelet[3276]: W0314 00:24:39.564903 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.565013 kubelet[3276]: E0314 00:24:39.564915 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.565532 kubelet[3276]: E0314 00:24:39.565418 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.565532 kubelet[3276]: W0314 00:24:39.565433 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.565532 kubelet[3276]: E0314 00:24:39.565465 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.593039 kubelet[3276]: E0314 00:24:39.593008 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.593039 kubelet[3276]: W0314 00:24:39.593031 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.593283 kubelet[3276]: E0314 00:24:39.593052 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.593400 kubelet[3276]: E0314 00:24:39.593383 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.593465 kubelet[3276]: W0314 00:24:39.593399 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.593465 kubelet[3276]: E0314 00:24:39.593414 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.593741 kubelet[3276]: E0314 00:24:39.593723 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.593741 kubelet[3276]: W0314 00:24:39.593737 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.593888 kubelet[3276]: E0314 00:24:39.593751 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.594051 kubelet[3276]: E0314 00:24:39.594033 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.594051 kubelet[3276]: W0314 00:24:39.594049 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.594208 kubelet[3276]: E0314 00:24:39.594063 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.594395 kubelet[3276]: E0314 00:24:39.594372 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.594395 kubelet[3276]: W0314 00:24:39.594388 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.594504 kubelet[3276]: E0314 00:24:39.594401 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.594659 kubelet[3276]: E0314 00:24:39.594640 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.594727 kubelet[3276]: W0314 00:24:39.594670 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.594727 kubelet[3276]: E0314 00:24:39.594688 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.594921 kubelet[3276]: E0314 00:24:39.594902 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.594921 kubelet[3276]: W0314 00:24:39.594918 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.595050 kubelet[3276]: E0314 00:24:39.594931 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.595161 kubelet[3276]: E0314 00:24:39.595144 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.595161 kubelet[3276]: W0314 00:24:39.595158 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.595320 kubelet[3276]: E0314 00:24:39.595171 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.595466 kubelet[3276]: E0314 00:24:39.595452 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.595543 kubelet[3276]: W0314 00:24:39.595485 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.595543 kubelet[3276]: E0314 00:24:39.595501 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.596002 kubelet[3276]: E0314 00:24:39.595987 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.596274 kubelet[3276]: W0314 00:24:39.596081 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.596274 kubelet[3276]: E0314 00:24:39.596110 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.596552 kubelet[3276]: E0314 00:24:39.596536 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.596726 kubelet[3276]: W0314 00:24:39.596627 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.596726 kubelet[3276]: E0314 00:24:39.596645 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.596997 kubelet[3276]: E0314 00:24:39.596980 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.596997 kubelet[3276]: W0314 00:24:39.596995 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.597110 kubelet[3276]: E0314 00:24:39.597009 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.597413 kubelet[3276]: E0314 00:24:39.597395 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.597413 kubelet[3276]: W0314 00:24:39.597409 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.597531 kubelet[3276]: E0314 00:24:39.597422 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.597671 kubelet[3276]: E0314 00:24:39.597655 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.597671 kubelet[3276]: W0314 00:24:39.597669 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.597773 kubelet[3276]: E0314 00:24:39.597682 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.597918 kubelet[3276]: E0314 00:24:39.597903 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.597918 kubelet[3276]: W0314 00:24:39.597916 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.598047 kubelet[3276]: E0314 00:24:39.597929 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.598207 kubelet[3276]: E0314 00:24:39.598188 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.598207 kubelet[3276]: W0314 00:24:39.598205 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.598326 kubelet[3276]: E0314 00:24:39.598218 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.598685 kubelet[3276]: E0314 00:24:39.598667 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.598685 kubelet[3276]: W0314 00:24:39.598681 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.598802 kubelet[3276]: E0314 00:24:39.598694 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.598991 kubelet[3276]: E0314 00:24:39.598975 3276 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:24:39.598991 kubelet[3276]: W0314 00:24:39.598989 3276 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:24:39.599081 kubelet[3276]: E0314 00:24:39.599002 3276 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:24:39.602589 containerd[1710]: time="2026-03-14T00:24:39.602538154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:39.604790 containerd[1710]: time="2026-03-14T00:24:39.604611883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Mar 14 00:24:39.608319 containerd[1710]: time="2026-03-14T00:24:39.608256134Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:39.612626 containerd[1710]: time="2026-03-14T00:24:39.612558793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:39.613639 containerd[1710]: time="2026-03-14T00:24:39.613351004Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.265725972s"
Mar 14 00:24:39.613639 containerd[1710]: time="2026-03-14T00:24:39.613392305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Mar 14 00:24:39.620832 containerd[1710]: time="2026-03-14T00:24:39.620805308Z" level=info msg="CreateContainer within sandbox \"db2353bc90f487c9738ee8e23c1814743d69c45d8c4e61aabde2bd8fb3e37f6e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 14 00:24:39.656961 containerd[1710]: time="2026-03-14T00:24:39.656842008Z" level=info msg="CreateContainer within sandbox \"db2353bc90f487c9738ee8e23c1814743d69c45d8c4e61aabde2bd8fb3e37f6e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0fd6cfa913ab23720f66471f0efa604dda31595f30013a86fda6ec7edc99365e\""
Mar 14 00:24:39.658315 containerd[1710]: time="2026-03-14T00:24:39.658257128Z" level=info msg="StartContainer for \"0fd6cfa913ab23720f66471f0efa604dda31595f30013a86fda6ec7edc99365e\""
Mar 14 00:24:39.701936 systemd[1]: Started cri-containerd-0fd6cfa913ab23720f66471f0efa604dda31595f30013a86fda6ec7edc99365e.scope - libcontainer container 0fd6cfa913ab23720f66471f0efa604dda31595f30013a86fda6ec7edc99365e.
Mar 14 00:24:39.733584 containerd[1710]: time="2026-03-14T00:24:39.733447272Z" level=info msg="StartContainer for \"0fd6cfa913ab23720f66471f0efa604dda31595f30013a86fda6ec7edc99365e\" returns successfully"
Mar 14 00:24:39.740205 systemd[1]: cri-containerd-0fd6cfa913ab23720f66471f0efa604dda31595f30013a86fda6ec7edc99365e.scope: Deactivated successfully.
Mar 14 00:24:39.765117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fd6cfa913ab23720f66471f0efa604dda31595f30013a86fda6ec7edc99365e-rootfs.mount: Deactivated successfully.
Mar 14 00:24:40.469247 kubelet[3276]: I0314 00:24:40.469168 3276 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:24:40.490744 kubelet[3276]: I0314 00:24:40.490677 3276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5ccf48bbbb-br25c" podStartSLOduration=2.767520278 podStartE2EDuration="5.490655484s" podCreationTimestamp="2026-03-14 00:24:35 +0000 UTC" firstStartedPulling="2026-03-14 00:24:35.623665814 +0000 UTC m=+17.393341692" lastFinishedPulling="2026-03-14 00:24:38.34680112 +0000 UTC m=+20.116476898" observedRunningTime="2026-03-14 00:24:39.484507216 +0000 UTC m=+21.254183094" watchObservedRunningTime="2026-03-14 00:24:40.490655484 +0000 UTC m=+22.260331262"
Mar 14 00:24:41.371564 kubelet[3276]: E0314 00:24:41.371489 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xrtx" podUID="c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a"
Mar 14 00:24:41.646314 containerd[1710]: time="2026-03-14T00:24:41.646158527Z" level=info msg="shim disconnected" id=0fd6cfa913ab23720f66471f0efa604dda31595f30013a86fda6ec7edc99365e namespace=k8s.io
Mar 14 00:24:41.647060 containerd[1710]: time="2026-03-14T00:24:41.646799436Z" level=warning msg="cleaning up after shim disconnected" id=0fd6cfa913ab23720f66471f0efa604dda31595f30013a86fda6ec7edc99365e namespace=k8s.io
Mar 14 00:24:41.647060 containerd[1710]: time="2026-03-14T00:24:41.646825736Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:24:42.475758 containerd[1710]: time="2026-03-14T00:24:42.475689048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 14 00:24:43.371623 kubelet[3276]: E0314 00:24:43.371521 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xrtx" podUID="c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a"
Mar 14 00:24:45.370821 kubelet[3276]: E0314 00:24:45.370760 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xrtx" podUID="c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a"
Mar 14 00:24:47.371278 kubelet[3276]: E0314 00:24:47.370680 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xrtx" podUID="c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a"
Mar 14 00:24:48.556418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount164354271.mount: Deactivated successfully.
Mar 14 00:24:48.597601 containerd[1710]: time="2026-03-14T00:24:48.597539993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:48.601912 containerd[1710]: time="2026-03-14T00:24:48.601821758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 14 00:24:48.604716 containerd[1710]: time="2026-03-14T00:24:48.604650200Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:48.616024 containerd[1710]: time="2026-03-14T00:24:48.615939370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:48.618287 containerd[1710]: time="2026-03-14T00:24:48.616691182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 6.140954033s"
Mar 14 00:24:48.618287 containerd[1710]: time="2026-03-14T00:24:48.616730082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 14 00:24:48.628454 containerd[1710]: time="2026-03-14T00:24:48.628418158Z" level=info msg="CreateContainer within sandbox \"db2353bc90f487c9738ee8e23c1814743d69c45d8c4e61aabde2bd8fb3e37f6e\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 14 00:24:48.661783 containerd[1710]: time="2026-03-14T00:24:48.661726260Z" level=info msg="CreateContainer within sandbox \"db2353bc90f487c9738ee8e23c1814743d69c45d8c4e61aabde2bd8fb3e37f6e\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"830bfc42c714d7182273de24c5fb6ed7d0b546b83c7e980a8731c13ac725a1f8\""
Mar 14 00:24:48.663766 containerd[1710]: time="2026-03-14T00:24:48.662311169Z" level=info msg="StartContainer for \"830bfc42c714d7182273de24c5fb6ed7d0b546b83c7e980a8731c13ac725a1f8\""
Mar 14 00:24:48.696431 systemd[1]: Started cri-containerd-830bfc42c714d7182273de24c5fb6ed7d0b546b83c7e980a8731c13ac725a1f8.scope - libcontainer container 830bfc42c714d7182273de24c5fb6ed7d0b546b83c7e980a8731c13ac725a1f8.
Mar 14 00:24:48.730602 containerd[1710]: time="2026-03-14T00:24:48.730550396Z" level=info msg="StartContainer for \"830bfc42c714d7182273de24c5fb6ed7d0b546b83c7e980a8731c13ac725a1f8\" returns successfully"
Mar 14 00:24:48.779155 systemd[1]: cri-containerd-830bfc42c714d7182273de24c5fb6ed7d0b546b83c7e980a8731c13ac725a1f8.scope: Deactivated successfully.
Mar 14 00:24:49.178908 kubelet[3276]: I0314 00:24:49.178867 3276 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:24:49.371271 kubelet[3276]: E0314 00:24:49.371207 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xrtx" podUID="c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a"
Mar 14 00:24:49.554486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-830bfc42c714d7182273de24c5fb6ed7d0b546b83c7e980a8731c13ac725a1f8-rootfs.mount: Deactivated successfully.
Mar 14 00:24:51.370771 kubelet[3276]: E0314 00:24:51.370707 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xrtx" podUID="c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a" Mar 14 00:24:52.050191 containerd[1710]: time="2026-03-14T00:24:52.050114857Z" level=info msg="shim disconnected" id=830bfc42c714d7182273de24c5fb6ed7d0b546b83c7e980a8731c13ac725a1f8 namespace=k8s.io Mar 14 00:24:52.050191 containerd[1710]: time="2026-03-14T00:24:52.050185258Z" level=warning msg="cleaning up after shim disconnected" id=830bfc42c714d7182273de24c5fb6ed7d0b546b83c7e980a8731c13ac725a1f8 namespace=k8s.io Mar 14 00:24:52.050191 containerd[1710]: time="2026-03-14T00:24:52.050198958Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:24:52.502544 containerd[1710]: time="2026-03-14T00:24:52.502496237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 14 00:24:53.371619 kubelet[3276]: E0314 00:24:53.371558 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xrtx" podUID="c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a" Mar 14 00:24:55.370656 kubelet[3276]: E0314 00:24:55.370592 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xrtx" podUID="c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a" Mar 14 00:24:57.372953 kubelet[3276]: E0314 00:24:57.372341 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xrtx" podUID="c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a" Mar 14 00:24:57.433909 containerd[1710]: time="2026-03-14T00:24:57.433859751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:57.436895 containerd[1710]: time="2026-03-14T00:24:57.436833595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 14 00:24:57.439978 containerd[1710]: time="2026-03-14T00:24:57.439903240Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:57.443914 containerd[1710]: time="2026-03-14T00:24:57.443852298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:57.444715 containerd[1710]: time="2026-03-14T00:24:57.444576609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.942028972s" Mar 14 00:24:57.444715 containerd[1710]: time="2026-03-14T00:24:57.444613210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 14 00:24:57.452316 containerd[1710]: time="2026-03-14T00:24:57.452285123Z" level=info msg="CreateContainer within sandbox 
\"db2353bc90f487c9738ee8e23c1814743d69c45d8c4e61aabde2bd8fb3e37f6e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 14 00:24:57.490088 containerd[1710]: time="2026-03-14T00:24:57.490051480Z" level=info msg="CreateContainer within sandbox \"db2353bc90f487c9738ee8e23c1814743d69c45d8c4e61aabde2bd8fb3e37f6e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ce53e97b17b4b44199b7a058d595c76cd3b21bf3e8ac8e230819f3fcbabdab11\"" Mar 14 00:24:57.490496 containerd[1710]: time="2026-03-14T00:24:57.490469087Z" level=info msg="StartContainer for \"ce53e97b17b4b44199b7a058d595c76cd3b21bf3e8ac8e230819f3fcbabdab11\"" Mar 14 00:24:57.534399 systemd[1]: Started cri-containerd-ce53e97b17b4b44199b7a058d595c76cd3b21bf3e8ac8e230819f3fcbabdab11.scope - libcontainer container ce53e97b17b4b44199b7a058d595c76cd3b21bf3e8ac8e230819f3fcbabdab11. Mar 14 00:24:57.571297 containerd[1710]: time="2026-03-14T00:24:57.570608270Z" level=info msg="StartContainer for \"ce53e97b17b4b44199b7a058d595c76cd3b21bf3e8ac8e230819f3fcbabdab11\" returns successfully" Mar 14 00:24:59.371678 kubelet[3276]: E0314 00:24:59.371609 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xrtx" podUID="c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a" Mar 14 00:25:01.370878 kubelet[3276]: E0314 00:25:01.370813 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xrtx" podUID="c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a" Mar 14 00:25:03.371495 kubelet[3276]: E0314 00:25:03.371432 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xrtx" podUID="c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a" Mar 14 00:25:04.556655 containerd[1710]: time="2026-03-14T00:25:04.556538533Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:25:04.558938 systemd[1]: cri-containerd-ce53e97b17b4b44199b7a058d595c76cd3b21bf3e8ac8e230819f3fcbabdab11.scope: Deactivated successfully. Mar 14 00:25:04.579894 kubelet[3276]: I0314 00:25:04.579786 3276 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 14 00:25:04.585462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce53e97b17b4b44199b7a058d595c76cd3b21bf3e8ac8e230819f3fcbabdab11-rootfs.mount: Deactivated successfully. Mar 14 00:25:08.804125 containerd[1710]: time="2026-03-14T00:25:08.803037081Z" level=info msg="shim disconnected" id=ce53e97b17b4b44199b7a058d595c76cd3b21bf3e8ac8e230819f3fcbabdab11 namespace=k8s.io Mar 14 00:25:08.804125 containerd[1710]: time="2026-03-14T00:25:08.803335185Z" level=warning msg="cleaning up after shim disconnected" id=ce53e97b17b4b44199b7a058d595c76cd3b21bf3e8ac8e230819f3fcbabdab11 namespace=k8s.io Mar 14 00:25:08.804125 containerd[1710]: time="2026-03-14T00:25:08.803355886Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:25:08.823480 systemd[1]: Created slice kubepods-besteffort-pod9da3f336_22fe_472c_9456_c10265166894.slice - libcontainer container kubepods-besteffort-pod9da3f336_22fe_472c_9456_c10265166894.slice. 
Mar 14 00:25:08.835891 systemd[1]: Created slice kubepods-besteffort-podd1646e2c_7985_4065_908a_efcf39003c75.slice - libcontainer container kubepods-besteffort-podd1646e2c_7985_4065_908a_efcf39003c75.slice. Mar 14 00:25:08.849280 systemd[1]: Created slice kubepods-besteffort-podc3cb8870_8b1c_43cc_a3a0_f78adc52bc6a.slice - libcontainer container kubepods-besteffort-podc3cb8870_8b1c_43cc_a3a0_f78adc52bc6a.slice. Mar 14 00:25:08.860277 containerd[1710]: time="2026-03-14T00:25:08.857215064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8xrtx,Uid:c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a,Namespace:calico-system,Attempt:0,}" Mar 14 00:25:08.862833 systemd[1]: Created slice kubepods-burstable-pode9f998d1_a481_416b_8656_d361e4f465a0.slice - libcontainer container kubepods-burstable-pode9f998d1_a481_416b_8656_d361e4f465a0.slice. Mar 14 00:25:08.873070 systemd[1]: Created slice kubepods-besteffort-podc27e9d62_9278_4af0_93ab_bef306e53470.slice - libcontainer container kubepods-besteffort-podc27e9d62_9278_4af0_93ab_bef306e53470.slice. Mar 14 00:25:08.880219 systemd[1]: Created slice kubepods-besteffort-poda119f615_9d76_4da3_99fe_5e5a8a9503aa.slice - libcontainer container kubepods-besteffort-poda119f615_9d76_4da3_99fe_5e5a8a9503aa.slice. Mar 14 00:25:08.891543 systemd[1]: Created slice kubepods-burstable-pod66e0aa2f_87cf_44bf_9449_a14d5b459508.slice - libcontainer container kubepods-burstable-pod66e0aa2f_87cf_44bf_9449_a14d5b459508.slice. 
Mar 14 00:25:08.901498 kubelet[3276]: I0314 00:25:08.901457 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgtkg\" (UniqueName: \"kubernetes.io/projected/9da3f336-22fe-472c-9456-c10265166894-kube-api-access-xgtkg\") pod \"calico-kube-controllers-65cb8d5ccb-s59nv\" (UID: \"9da3f336-22fe-472c-9456-c10265166894\") " pod="calico-system/calico-kube-controllers-65cb8d5ccb-s59nv" Mar 14 00:25:08.908294 kubelet[3276]: I0314 00:25:08.901510 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnp6b\" (UniqueName: \"kubernetes.io/projected/d1646e2c-7985-4065-908a-efcf39003c75-kube-api-access-dnp6b\") pod \"goldmane-5b85766d88-zk5wg\" (UID: \"d1646e2c-7985-4065-908a-efcf39003c75\") " pod="calico-system/goldmane-5b85766d88-zk5wg" Mar 14 00:25:08.908294 kubelet[3276]: I0314 00:25:08.901536 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdwdj\" (UniqueName: \"kubernetes.io/projected/a119f615-9d76-4da3-99fe-5e5a8a9503aa-kube-api-access-rdwdj\") pod \"calico-apiserver-8b99f8b54-m46mb\" (UID: \"a119f615-9d76-4da3-99fe-5e5a8a9503aa\") " pod="calico-system/calico-apiserver-8b99f8b54-m46mb" Mar 14 00:25:08.908294 kubelet[3276]: I0314 00:25:08.901560 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bd26410d-afb8-4e0e-9857-79a48d714e02-calico-apiserver-certs\") pod \"calico-apiserver-8b99f8b54-9bmxj\" (UID: \"bd26410d-afb8-4e0e-9857-79a48d714e02\") " pod="calico-system/calico-apiserver-8b99f8b54-9bmxj" Mar 14 00:25:08.908294 kubelet[3276]: I0314 00:25:08.901585 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/c27e9d62-9278-4af0-93ab-bef306e53470-whisker-backend-key-pair\") pod \"whisker-b9d96df74-gq7fr\" (UID: \"c27e9d62-9278-4af0-93ab-bef306e53470\") " pod="calico-system/whisker-b9d96df74-gq7fr" Mar 14 00:25:08.908294 kubelet[3276]: I0314 00:25:08.901607 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9jq2\" (UniqueName: \"kubernetes.io/projected/c27e9d62-9278-4af0-93ab-bef306e53470-kube-api-access-b9jq2\") pod \"whisker-b9d96df74-gq7fr\" (UID: \"c27e9d62-9278-4af0-93ab-bef306e53470\") " pod="calico-system/whisker-b9d96df74-gq7fr" Mar 14 00:25:08.908738 kubelet[3276]: I0314 00:25:08.901632 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1646e2c-7985-4065-908a-efcf39003c75-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-zk5wg\" (UID: \"d1646e2c-7985-4065-908a-efcf39003c75\") " pod="calico-system/goldmane-5b85766d88-zk5wg" Mar 14 00:25:08.908738 kubelet[3276]: I0314 00:25:08.901658 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9da3f336-22fe-472c-9456-c10265166894-tigera-ca-bundle\") pod \"calico-kube-controllers-65cb8d5ccb-s59nv\" (UID: \"9da3f336-22fe-472c-9456-c10265166894\") " pod="calico-system/calico-kube-controllers-65cb8d5ccb-s59nv" Mar 14 00:25:08.908738 kubelet[3276]: I0314 00:25:08.901680 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9f998d1-a481-416b-8656-d361e4f465a0-config-volume\") pod \"coredns-674b8bbfcf-pbd56\" (UID: \"e9f998d1-a481-416b-8656-d361e4f465a0\") " pod="kube-system/coredns-674b8bbfcf-pbd56" Mar 14 00:25:08.908738 kubelet[3276]: I0314 00:25:08.901701 3276 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c27e9d62-9278-4af0-93ab-bef306e53470-whisker-ca-bundle\") pod \"whisker-b9d96df74-gq7fr\" (UID: \"c27e9d62-9278-4af0-93ab-bef306e53470\") " pod="calico-system/whisker-b9d96df74-gq7fr" Mar 14 00:25:08.908738 kubelet[3276]: I0314 00:25:08.901734 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d1646e2c-7985-4065-908a-efcf39003c75-goldmane-key-pair\") pod \"goldmane-5b85766d88-zk5wg\" (UID: \"d1646e2c-7985-4065-908a-efcf39003c75\") " pod="calico-system/goldmane-5b85766d88-zk5wg" Mar 14 00:25:08.908965 kubelet[3276]: I0314 00:25:08.901757 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c27e9d62-9278-4af0-93ab-bef306e53470-nginx-config\") pod \"whisker-b9d96df74-gq7fr\" (UID: \"c27e9d62-9278-4af0-93ab-bef306e53470\") " pod="calico-system/whisker-b9d96df74-gq7fr" Mar 14 00:25:08.908965 kubelet[3276]: I0314 00:25:08.901782 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a119f615-9d76-4da3-99fe-5e5a8a9503aa-calico-apiserver-certs\") pod \"calico-apiserver-8b99f8b54-m46mb\" (UID: \"a119f615-9d76-4da3-99fe-5e5a8a9503aa\") " pod="calico-system/calico-apiserver-8b99f8b54-m46mb" Mar 14 00:25:08.908965 kubelet[3276]: I0314 00:25:08.901807 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1646e2c-7985-4065-908a-efcf39003c75-config\") pod \"goldmane-5b85766d88-zk5wg\" (UID: \"d1646e2c-7985-4065-908a-efcf39003c75\") " pod="calico-system/goldmane-5b85766d88-zk5wg" Mar 14 00:25:08.908965 kubelet[3276]: I0314 00:25:08.901828 3276 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndfrs\" (UniqueName: \"kubernetes.io/projected/e9f998d1-a481-416b-8656-d361e4f465a0-kube-api-access-ndfrs\") pod \"coredns-674b8bbfcf-pbd56\" (UID: \"e9f998d1-a481-416b-8656-d361e4f465a0\") " pod="kube-system/coredns-674b8bbfcf-pbd56" Mar 14 00:25:08.908965 kubelet[3276]: I0314 00:25:08.901850 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k4l9\" (UniqueName: \"kubernetes.io/projected/bd26410d-afb8-4e0e-9857-79a48d714e02-kube-api-access-6k4l9\") pod \"calico-apiserver-8b99f8b54-9bmxj\" (UID: \"bd26410d-afb8-4e0e-9857-79a48d714e02\") " pod="calico-system/calico-apiserver-8b99f8b54-9bmxj" Mar 14 00:25:08.916105 systemd[1]: Created slice kubepods-besteffort-podbd26410d_afb8_4e0e_9857_79a48d714e02.slice - libcontainer container kubepods-besteffort-podbd26410d_afb8_4e0e_9857_79a48d714e02.slice. Mar 14 00:25:08.983908 containerd[1710]: time="2026-03-14T00:25:08.983848894Z" level=error msg="Failed to destroy network for sandbox \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:08.986386 containerd[1710]: time="2026-03-14T00:25:08.984254100Z" level=error msg="encountered an error cleaning up failed sandbox \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:08.986386 containerd[1710]: time="2026-03-14T00:25:08.984321501Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-8xrtx,Uid:c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:08.986555 kubelet[3276]: E0314 00:25:08.986411 3276 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:08.986555 kubelet[3276]: E0314 00:25:08.986504 3276 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8xrtx" Mar 14 00:25:08.986555 kubelet[3276]: E0314 00:25:08.986536 3276 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8xrtx" Mar 14 00:25:08.986739 kubelet[3276]: E0314 00:25:08.986603 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-8xrtx_calico-system(c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8xrtx_calico-system(c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8xrtx" podUID="c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a" Mar 14 00:25:08.987712 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e-shm.mount: Deactivated successfully. Mar 14 00:25:09.006249 kubelet[3276]: I0314 00:25:09.002746 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66e0aa2f-87cf-44bf-9449-a14d5b459508-config-volume\") pod \"coredns-674b8bbfcf-j82zn\" (UID: \"66e0aa2f-87cf-44bf-9449-a14d5b459508\") " pod="kube-system/coredns-674b8bbfcf-j82zn" Mar 14 00:25:09.006249 kubelet[3276]: I0314 00:25:09.002926 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f22pv\" (UniqueName: \"kubernetes.io/projected/66e0aa2f-87cf-44bf-9449-a14d5b459508-kube-api-access-f22pv\") pod \"coredns-674b8bbfcf-j82zn\" (UID: \"66e0aa2f-87cf-44bf-9449-a14d5b459508\") " pod="kube-system/coredns-674b8bbfcf-j82zn" Mar 14 00:25:09.134169 containerd[1710]: time="2026-03-14T00:25:09.134045065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cb8d5ccb-s59nv,Uid:9da3f336-22fe-472c-9456-c10265166894,Namespace:calico-system,Attempt:0,}" Mar 14 00:25:09.144735 containerd[1710]: time="2026-03-14T00:25:09.144694819Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-zk5wg,Uid:d1646e2c-7985-4065-908a-efcf39003c75,Namespace:calico-system,Attempt:0,}" Mar 14 00:25:09.168356 containerd[1710]: time="2026-03-14T00:25:09.168288560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pbd56,Uid:e9f998d1-a481-416b-8656-d361e4f465a0,Namespace:kube-system,Attempt:0,}" Mar 14 00:25:09.180715 containerd[1710]: time="2026-03-14T00:25:09.180666839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b9d96df74-gq7fr,Uid:c27e9d62-9278-4af0-93ab-bef306e53470,Namespace:calico-system,Attempt:0,}" Mar 14 00:25:09.188423 containerd[1710]: time="2026-03-14T00:25:09.188381650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b99f8b54-m46mb,Uid:a119f615-9d76-4da3-99fe-5e5a8a9503aa,Namespace:calico-system,Attempt:0,}" Mar 14 00:25:09.212379 containerd[1710]: time="2026-03-14T00:25:09.212122193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j82zn,Uid:66e0aa2f-87cf-44bf-9449-a14d5b459508,Namespace:kube-system,Attempt:0,}" Mar 14 00:25:09.225203 containerd[1710]: time="2026-03-14T00:25:09.225150082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b99f8b54-9bmxj,Uid:bd26410d-afb8-4e0e-9857-79a48d714e02,Namespace:calico-system,Attempt:0,}" Mar 14 00:25:09.242770 containerd[1710]: time="2026-03-14T00:25:09.242625234Z" level=error msg="Failed to destroy network for sandbox \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.243353 containerd[1710]: time="2026-03-14T00:25:09.243144142Z" level=error msg="encountered an error cleaning up failed sandbox \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.243353 containerd[1710]: time="2026-03-14T00:25:09.243215643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cb8d5ccb-s59nv,Uid:9da3f336-22fe-472c-9456-c10265166894,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.244457 kubelet[3276]: E0314 00:25:09.243555 3276 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.244457 kubelet[3276]: E0314 00:25:09.243623 3276 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cb8d5ccb-s59nv" Mar 14 00:25:09.244457 kubelet[3276]: E0314 00:25:09.243644 3276 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65cb8d5ccb-s59nv" Mar 14 00:25:09.244613 kubelet[3276]: E0314 00:25:09.243702 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65cb8d5ccb-s59nv_calico-system(9da3f336-22fe-472c-9456-c10265166894)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65cb8d5ccb-s59nv_calico-system(9da3f336-22fe-472c-9456-c10265166894)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65cb8d5ccb-s59nv" podUID="9da3f336-22fe-472c-9456-c10265166894" Mar 14 00:25:09.323110 containerd[1710]: time="2026-03-14T00:25:09.322573890Z" level=error msg="Failed to destroy network for sandbox \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.323110 containerd[1710]: time="2026-03-14T00:25:09.322916295Z" level=error msg="encountered an error cleaning up failed sandbox \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.323110 containerd[1710]: 
time="2026-03-14T00:25:09.322979995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-zk5wg,Uid:d1646e2c-7985-4065-908a-efcf39003c75,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.323451 kubelet[3276]: E0314 00:25:09.323247 3276 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.323451 kubelet[3276]: E0314 00:25:09.323361 3276 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-zk5wg" Mar 14 00:25:09.323451 kubelet[3276]: E0314 00:25:09.323394 3276 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-zk5wg" Mar 14 00:25:09.323599 kubelet[3276]: E0314 00:25:09.323475 
3276 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-zk5wg_calico-system(d1646e2c-7985-4065-908a-efcf39003c75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-zk5wg_calico-system(d1646e2c-7985-4065-908a-efcf39003c75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-zk5wg" podUID="d1646e2c-7985-4065-908a-efcf39003c75" Mar 14 00:25:09.382374 containerd[1710]: time="2026-03-14T00:25:09.382216952Z" level=error msg="Failed to destroy network for sandbox \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.383292 containerd[1710]: time="2026-03-14T00:25:09.382756459Z" level=error msg="encountered an error cleaning up failed sandbox \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.383292 containerd[1710]: time="2026-03-14T00:25:09.382823260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pbd56,Uid:e9f998d1-a481-416b-8656-d361e4f465a0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.383456 kubelet[3276]: E0314 00:25:09.383089 3276 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.383456 kubelet[3276]: E0314 00:25:09.383161 3276 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pbd56" Mar 14 00:25:09.383765 kubelet[3276]: E0314 00:25:09.383191 3276 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pbd56" Mar 14 00:25:09.384802 kubelet[3276]: E0314 00:25:09.384211 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pbd56_kube-system(e9f998d1-a481-416b-8656-d361e4f465a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pbd56_kube-system(e9f998d1-a481-416b-8656-d361e4f465a0)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pbd56" podUID="e9f998d1-a481-416b-8656-d361e4f465a0" Mar 14 00:25:09.480661 containerd[1710]: time="2026-03-14T00:25:09.480485472Z" level=error msg="Failed to destroy network for sandbox \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.481332 containerd[1710]: time="2026-03-14T00:25:09.481104681Z" level=error msg="encountered an error cleaning up failed sandbox \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.481332 containerd[1710]: time="2026-03-14T00:25:09.481177282Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b9d96df74-gq7fr,Uid:c27e9d62-9278-4af0-93ab-bef306e53470,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.483025 kubelet[3276]: E0314 00:25:09.481647 3276 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.483025 kubelet[3276]: E0314 00:25:09.481716 3276 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b9d96df74-gq7fr" Mar 14 00:25:09.483025 kubelet[3276]: E0314 00:25:09.481741 3276 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b9d96df74-gq7fr" Mar 14 00:25:09.483258 kubelet[3276]: E0314 00:25:09.481835 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b9d96df74-gq7fr_calico-system(c27e9d62-9278-4af0-93ab-bef306e53470)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b9d96df74-gq7fr_calico-system(c27e9d62-9278-4af0-93ab-bef306e53470)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b9d96df74-gq7fr" 
podUID="c27e9d62-9278-4af0-93ab-bef306e53470" Mar 14 00:25:09.502613 containerd[1710]: time="2026-03-14T00:25:09.502565591Z" level=error msg="Failed to destroy network for sandbox \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.503184 containerd[1710]: time="2026-03-14T00:25:09.503129999Z" level=error msg="encountered an error cleaning up failed sandbox \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.503439 containerd[1710]: time="2026-03-14T00:25:09.503392203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b99f8b54-m46mb,Uid:a119f615-9d76-4da3-99fe-5e5a8a9503aa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.504255 kubelet[3276]: E0314 00:25:09.503801 3276 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.504255 kubelet[3276]: E0314 00:25:09.503869 3276 kuberuntime_sandbox.go:70] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8b99f8b54-m46mb" Mar 14 00:25:09.504255 kubelet[3276]: E0314 00:25:09.503900 3276 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8b99f8b54-m46mb" Mar 14 00:25:09.504459 containerd[1710]: time="2026-03-14T00:25:09.503930211Z" level=error msg="Failed to destroy network for sandbox \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.504533 kubelet[3276]: E0314 00:25:09.503962 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8b99f8b54-m46mb_calico-system(a119f615-9d76-4da3-99fe-5e5a8a9503aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8b99f8b54-m46mb_calico-system(a119f615-9d76-4da3-99fe-5e5a8a9503aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-apiserver-8b99f8b54-m46mb" podUID="a119f615-9d76-4da3-99fe-5e5a8a9503aa" Mar 14 00:25:09.505064 containerd[1710]: time="2026-03-14T00:25:09.505013226Z" level=error msg="encountered an error cleaning up failed sandbox \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.505293 containerd[1710]: time="2026-03-14T00:25:09.505192229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j82zn,Uid:66e0aa2f-87cf-44bf-9449-a14d5b459508,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.505627 kubelet[3276]: E0314 00:25:09.505599 3276 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.505807 kubelet[3276]: E0314 00:25:09.505751 3276 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-j82zn" Mar 14 00:25:09.505807 kubelet[3276]: E0314 00:25:09.505782 3276 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-j82zn" Mar 14 00:25:09.506115 kubelet[3276]: E0314 00:25:09.505928 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-j82zn_kube-system(66e0aa2f-87cf-44bf-9449-a14d5b459508)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-j82zn_kube-system(66e0aa2f-87cf-44bf-9449-a14d5b459508)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-j82zn" podUID="66e0aa2f-87cf-44bf-9449-a14d5b459508" Mar 14 00:25:09.509556 containerd[1710]: time="2026-03-14T00:25:09.509522291Z" level=error msg="Failed to destroy network for sandbox \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.509841 containerd[1710]: time="2026-03-14T00:25:09.509813096Z" level=error msg="encountered an error cleaning up failed sandbox \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.509930 containerd[1710]: time="2026-03-14T00:25:09.509858196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b99f8b54-9bmxj,Uid:bd26410d-afb8-4e0e-9857-79a48d714e02,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.510034 kubelet[3276]: E0314 00:25:09.510007 3276 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.510090 kubelet[3276]: E0314 00:25:09.510045 3276 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8b99f8b54-9bmxj" Mar 14 00:25:09.510090 kubelet[3276]: E0314 00:25:09.510068 3276 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8b99f8b54-9bmxj" Mar 14 00:25:09.510179 kubelet[3276]: E0314 00:25:09.510112 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8b99f8b54-9bmxj_calico-system(bd26410d-afb8-4e0e-9857-79a48d714e02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8b99f8b54-9bmxj_calico-system(bd26410d-afb8-4e0e-9857-79a48d714e02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-8b99f8b54-9bmxj" podUID="bd26410d-afb8-4e0e-9857-79a48d714e02" Mar 14 00:25:09.546782 kubelet[3276]: I0314 00:25:09.546751 3276 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Mar 14 00:25:09.549492 containerd[1710]: time="2026-03-14T00:25:09.549310266Z" level=info msg="StopPodSandbox for \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\"" Mar 14 00:25:09.549710 containerd[1710]: time="2026-03-14T00:25:09.549517269Z" level=info msg="Ensure that sandbox 799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d in task-service has been cleanup successfully" Mar 14 00:25:09.558511 kubelet[3276]: I0314 00:25:09.557584 3276 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Mar 14 00:25:09.560425 containerd[1710]: time="2026-03-14T00:25:09.560375726Z" level=info msg="StopPodSandbox for 
\"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\"" Mar 14 00:25:09.560638 containerd[1710]: time="2026-03-14T00:25:09.560620030Z" level=info msg="Ensure that sandbox e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007 in task-service has been cleanup successfully" Mar 14 00:25:09.565399 kubelet[3276]: I0314 00:25:09.565369 3276 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Mar 14 00:25:09.566542 containerd[1710]: time="2026-03-14T00:25:09.566506715Z" level=info msg="StopPodSandbox for \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\"" Mar 14 00:25:09.568192 containerd[1710]: time="2026-03-14T00:25:09.568077638Z" level=info msg="Ensure that sandbox d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e in task-service has been cleanup successfully" Mar 14 00:25:09.584417 kubelet[3276]: I0314 00:25:09.584382 3276 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Mar 14 00:25:09.587757 containerd[1710]: time="2026-03-14T00:25:09.587395917Z" level=info msg="StopPodSandbox for \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\"" Mar 14 00:25:09.587757 containerd[1710]: time="2026-03-14T00:25:09.587614420Z" level=info msg="Ensure that sandbox c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264 in task-service has been cleanup successfully" Mar 14 00:25:09.596161 kubelet[3276]: I0314 00:25:09.596133 3276 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Mar 14 00:25:09.602480 containerd[1710]: time="2026-03-14T00:25:09.602270632Z" level=info msg="StopPodSandbox for \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\"" Mar 14 00:25:09.604491 containerd[1710]: 
time="2026-03-14T00:25:09.602486335Z" level=info msg="Ensure that sandbox adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2 in task-service has been cleanup successfully" Mar 14 00:25:09.604692 kubelet[3276]: I0314 00:25:09.604048 3276 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" Mar 14 00:25:09.604768 containerd[1710]: time="2026-03-14T00:25:09.604744468Z" level=info msg="StopPodSandbox for \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\"" Mar 14 00:25:09.605175 containerd[1710]: time="2026-03-14T00:25:09.604924270Z" level=info msg="Ensure that sandbox e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953 in task-service has been cleanup successfully" Mar 14 00:25:09.608439 containerd[1710]: time="2026-03-14T00:25:09.608404421Z" level=info msg="CreateContainer within sandbox \"db2353bc90f487c9738ee8e23c1814743d69c45d8c4e61aabde2bd8fb3e37f6e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 14 00:25:09.628081 kubelet[3276]: I0314 00:25:09.627953 3276 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Mar 14 00:25:09.632017 containerd[1710]: time="2026-03-14T00:25:09.631946261Z" level=info msg="StopPodSandbox for \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\"" Mar 14 00:25:09.632543 containerd[1710]: time="2026-03-14T00:25:09.632392267Z" level=info msg="Ensure that sandbox 59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed in task-service has been cleanup successfully" Mar 14 00:25:09.651530 kubelet[3276]: I0314 00:25:09.649992 3276 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Mar 14 00:25:09.656055 containerd[1710]: time="2026-03-14T00:25:09.655568602Z" 
level=info msg="StopPodSandbox for \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\"" Mar 14 00:25:09.656434 containerd[1710]: time="2026-03-14T00:25:09.656391614Z" level=info msg="Ensure that sandbox 7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa in task-service has been cleanup successfully" Mar 14 00:25:09.677593 containerd[1710]: time="2026-03-14T00:25:09.677530020Z" level=error msg="StopPodSandbox for \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\" failed" error="failed to destroy network for sandbox \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.678249 kubelet[3276]: E0314 00:25:09.678010 3276 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Mar 14 00:25:09.678249 kubelet[3276]: E0314 00:25:09.678080 3276 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d"} Mar 14 00:25:09.678249 kubelet[3276]: E0314 00:25:09.678149 3276 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a119f615-9d76-4da3-99fe-5e5a8a9503aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:25:09.678249 kubelet[3276]: E0314 00:25:09.678178 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a119f615-9d76-4da3-99fe-5e5a8a9503aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-8b99f8b54-m46mb" podUID="a119f615-9d76-4da3-99fe-5e5a8a9503aa" Mar 14 00:25:09.700516 containerd[1710]: time="2026-03-14T00:25:09.700352849Z" level=info msg="CreateContainer within sandbox \"db2353bc90f487c9738ee8e23c1814743d69c45d8c4e61aabde2bd8fb3e37f6e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dadefb1f9345166cf9a01bba4d448b593679e54c4d3b8487eedfdf00fd3ef56b\"" Mar 14 00:25:09.704392 containerd[1710]: time="2026-03-14T00:25:09.703220791Z" level=info msg="StartContainer for \"dadefb1f9345166cf9a01bba4d448b593679e54c4d3b8487eedfdf00fd3ef56b\"" Mar 14 00:25:09.758049 containerd[1710]: time="2026-03-14T00:25:09.757868181Z" level=error msg="StopPodSandbox for \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\" failed" error="failed to destroy network for sandbox \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.758814 kubelet[3276]: E0314 00:25:09.758763 3276 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Mar 14 00:25:09.759262 kubelet[3276]: E0314 00:25:09.759221 3276 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007"} Mar 14 00:25:09.759441 kubelet[3276]: E0314 00:25:09.759411 3276 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c27e9d62-9278-4af0-93ab-bef306e53470\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:25:09.759618 kubelet[3276]: E0314 00:25:09.759591 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c27e9d62-9278-4af0-93ab-bef306e53470\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b9d96df74-gq7fr" podUID="c27e9d62-9278-4af0-93ab-bef306e53470" Mar 14 00:25:09.774031 containerd[1710]: time="2026-03-14T00:25:09.773886712Z" level=error msg="StopPodSandbox for \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\" failed" error="failed to destroy network for sandbox 
\"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.777468 kubelet[3276]: E0314 00:25:09.774524 3276 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Mar 14 00:25:09.777468 kubelet[3276]: E0314 00:25:09.774596 3276 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2"} Mar 14 00:25:09.777468 kubelet[3276]: E0314 00:25:09.774639 3276 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d1646e2c-7985-4065-908a-efcf39003c75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:25:09.777468 kubelet[3276]: E0314 00:25:09.774670 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d1646e2c-7985-4065-908a-efcf39003c75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-zk5wg" podUID="d1646e2c-7985-4065-908a-efcf39003c75" Mar 14 00:25:09.783450 systemd[1]: Started cri-containerd-dadefb1f9345166cf9a01bba4d448b593679e54c4d3b8487eedfdf00fd3ef56b.scope - libcontainer container dadefb1f9345166cf9a01bba4d448b593679e54c4d3b8487eedfdf00fd3ef56b. Mar 14 00:25:09.786338 containerd[1710]: time="2026-03-14T00:25:09.786281591Z" level=error msg="StopPodSandbox for \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\" failed" error="failed to destroy network for sandbox \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.791150 containerd[1710]: time="2026-03-14T00:25:09.791102161Z" level=error msg="StopPodSandbox for \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\" failed" error="failed to destroy network for sandbox \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.792571 kubelet[3276]: E0314 00:25:09.792305 3276 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Mar 14 00:25:09.792571 kubelet[3276]: E0314 
00:25:09.792361 3276 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e"} Mar 14 00:25:09.792571 kubelet[3276]: E0314 00:25:09.792401 3276 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:25:09.792571 kubelet[3276]: E0314 00:25:09.792432 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8xrtx" podUID="c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a" Mar 14 00:25:09.792931 kubelet[3276]: E0314 00:25:09.792465 3276 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Mar 14 00:25:09.792931 kubelet[3276]: E0314 00:25:09.792484 3276 
kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa"} Mar 14 00:25:09.792931 kubelet[3276]: E0314 00:25:09.792508 3276 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66e0aa2f-87cf-44bf-9449-a14d5b459508\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:25:09.792931 kubelet[3276]: E0314 00:25:09.792530 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66e0aa2f-87cf-44bf-9449-a14d5b459508\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-j82zn" podUID="66e0aa2f-87cf-44bf-9449-a14d5b459508" Mar 14 00:25:09.816859 containerd[1710]: time="2026-03-14T00:25:09.816803032Z" level=error msg="StopPodSandbox for \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\" failed" error="failed to destroy network for sandbox \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.817676 containerd[1710]: time="2026-03-14T00:25:09.816960135Z" level=error msg="StopPodSandbox for 
\"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\" failed" error="failed to destroy network for sandbox \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.818447 kubelet[3276]: E0314 00:25:09.817924 3276 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Mar 14 00:25:09.818447 kubelet[3276]: E0314 00:25:09.818005 3276 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed"} Mar 14 00:25:09.818447 kubelet[3276]: E0314 00:25:09.818054 3276 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e9f998d1-a481-416b-8656-d361e4f465a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:25:09.818447 kubelet[3276]: E0314 00:25:09.818091 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e9f998d1-a481-416b-8656-d361e4f465a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pbd56" podUID="e9f998d1-a481-416b-8656-d361e4f465a0" Mar 14 00:25:09.818766 kubelet[3276]: E0314 00:25:09.818293 3276 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Mar 14 00:25:09.818766 kubelet[3276]: E0314 00:25:09.818339 3276 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264"} Mar 14 00:25:09.818766 kubelet[3276]: E0314 00:25:09.818378 3276 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd26410d-afb8-4e0e-9857-79a48d714e02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:25:09.818766 kubelet[3276]: E0314 00:25:09.818410 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd26410d-afb8-4e0e-9857-79a48d714e02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-8b99f8b54-9bmxj" podUID="bd26410d-afb8-4e0e-9857-79a48d714e02" Mar 14 00:25:09.830405 containerd[1710]: time="2026-03-14T00:25:09.829899122Z" level=error msg="StopPodSandbox for \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\" failed" error="failed to destroy network for sandbox \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:25:09.830559 kubelet[3276]: E0314 00:25:09.830157 3276 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" Mar 14 00:25:09.830559 kubelet[3276]: E0314 00:25:09.830219 3276 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953"} Mar 14 00:25:09.830559 kubelet[3276]: E0314 00:25:09.830311 3276 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9da3f336-22fe-472c-9456-c10265166894\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:25:09.830559 kubelet[3276]: E0314 00:25:09.830361 3276 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9da3f336-22fe-472c-9456-c10265166894\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65cb8d5ccb-s59nv" podUID="9da3f336-22fe-472c-9456-c10265166894" Mar 14 00:25:09.840707 containerd[1710]: time="2026-03-14T00:25:09.840624977Z" level=info msg="StartContainer for \"dadefb1f9345166cf9a01bba4d448b593679e54c4d3b8487eedfdf00fd3ef56b\" returns successfully" Mar 14 00:25:10.655846 containerd[1710]: time="2026-03-14T00:25:10.655796858Z" level=info msg="StopPodSandbox for \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\"" Mar 14 00:25:10.688537 kubelet[3276]: I0314 00:25:10.687392 3276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-292v6" podStartSLOduration=13.908475801 podStartE2EDuration="35.687368814s" podCreationTimestamp="2026-03-14 00:24:35 +0000 UTC" firstStartedPulling="2026-03-14 00:24:35.666495308 +0000 UTC m=+17.436171086" lastFinishedPulling="2026-03-14 00:24:57.445388321 +0000 UTC m=+39.215064099" observedRunningTime="2026-03-14 00:25:10.683390157 +0000 UTC m=+52.453065935" watchObservedRunningTime="2026-03-14 00:25:10.687368814 +0000 UTC m=+52.457044692" Mar 14 00:25:10.752854 containerd[1710]: 2026-03-14 00:25:10.720 [INFO][4566] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Mar 14 00:25:10.752854 containerd[1710]: 2026-03-14 00:25:10.720 [INFO][4566] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" iface="eth0" netns="/var/run/netns/cni-8556e08b-45ef-669c-f013-488bb1f2cacd" Mar 14 00:25:10.752854 containerd[1710]: 2026-03-14 00:25:10.720 [INFO][4566] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" iface="eth0" netns="/var/run/netns/cni-8556e08b-45ef-669c-f013-488bb1f2cacd" Mar 14 00:25:10.752854 containerd[1710]: 2026-03-14 00:25:10.721 [INFO][4566] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" iface="eth0" netns="/var/run/netns/cni-8556e08b-45ef-669c-f013-488bb1f2cacd" Mar 14 00:25:10.752854 containerd[1710]: 2026-03-14 00:25:10.721 [INFO][4566] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Mar 14 00:25:10.752854 containerd[1710]: 2026-03-14 00:25:10.721 [INFO][4566] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Mar 14 00:25:10.752854 containerd[1710]: 2026-03-14 00:25:10.741 [INFO][4573] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" HandleID="k8s-pod-network.e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Workload="ci--4081.3.6--n--2b39e14e44-k8s-whisker--b9d96df74--gq7fr-eth0" Mar 14 00:25:10.752854 containerd[1710]: 2026-03-14 00:25:10.741 [INFO][4573] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:25:10.752854 containerd[1710]: 2026-03-14 00:25:10.741 [INFO][4573] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:10.752854 containerd[1710]: 2026-03-14 00:25:10.747 [WARNING][4573] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" HandleID="k8s-pod-network.e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Workload="ci--4081.3.6--n--2b39e14e44-k8s-whisker--b9d96df74--gq7fr-eth0" Mar 14 00:25:10.752854 containerd[1710]: 2026-03-14 00:25:10.747 [INFO][4573] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" HandleID="k8s-pod-network.e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Workload="ci--4081.3.6--n--2b39e14e44-k8s-whisker--b9d96df74--gq7fr-eth0" Mar 14 00:25:10.752854 containerd[1710]: 2026-03-14 00:25:10.748 [INFO][4573] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:10.752854 containerd[1710]: 2026-03-14 00:25:10.751 [INFO][4566] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Mar 14 00:25:10.755358 containerd[1710]: time="2026-03-14T00:25:10.755312796Z" level=info msg="TearDown network for sandbox \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\" successfully" Mar 14 00:25:10.755358 containerd[1710]: time="2026-03-14T00:25:10.755356997Z" level=info msg="StopPodSandbox for \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\" returns successfully" Mar 14 00:25:10.757361 systemd[1]: run-netns-cni\x2d8556e08b\x2d45ef\x2d669c\x2df013\x2d488bb1f2cacd.mount: Deactivated successfully. 
Mar 14 00:25:10.817766 kubelet[3276]: I0314 00:25:10.817718 3276 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c27e9d62-9278-4af0-93ab-bef306e53470-nginx-config\") pod \"c27e9d62-9278-4af0-93ab-bef306e53470\" (UID: \"c27e9d62-9278-4af0-93ab-bef306e53470\") " Mar 14 00:25:10.817766 kubelet[3276]: I0314 00:25:10.817766 3276 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c27e9d62-9278-4af0-93ab-bef306e53470-whisker-ca-bundle\") pod \"c27e9d62-9278-4af0-93ab-bef306e53470\" (UID: \"c27e9d62-9278-4af0-93ab-bef306e53470\") " Mar 14 00:25:10.817997 kubelet[3276]: I0314 00:25:10.817807 3276 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c27e9d62-9278-4af0-93ab-bef306e53470-whisker-backend-key-pair\") pod \"c27e9d62-9278-4af0-93ab-bef306e53470\" (UID: \"c27e9d62-9278-4af0-93ab-bef306e53470\") " Mar 14 00:25:10.817997 kubelet[3276]: I0314 00:25:10.817833 3276 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9jq2\" (UniqueName: \"kubernetes.io/projected/c27e9d62-9278-4af0-93ab-bef306e53470-kube-api-access-b9jq2\") pod \"c27e9d62-9278-4af0-93ab-bef306e53470\" (UID: \"c27e9d62-9278-4af0-93ab-bef306e53470\") " Mar 14 00:25:10.819188 kubelet[3276]: I0314 00:25:10.818616 3276 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c27e9d62-9278-4af0-93ab-bef306e53470-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c27e9d62-9278-4af0-93ab-bef306e53470" (UID: "c27e9d62-9278-4af0-93ab-bef306e53470"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:25:10.819188 kubelet[3276]: I0314 00:25:10.818927 3276 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c27e9d62-9278-4af0-93ab-bef306e53470-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "c27e9d62-9278-4af0-93ab-bef306e53470" (UID: "c27e9d62-9278-4af0-93ab-bef306e53470"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:25:10.821424 kubelet[3276]: I0314 00:25:10.821388 3276 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c27e9d62-9278-4af0-93ab-bef306e53470-kube-api-access-b9jq2" (OuterVolumeSpecName: "kube-api-access-b9jq2") pod "c27e9d62-9278-4af0-93ab-bef306e53470" (UID: "c27e9d62-9278-4af0-93ab-bef306e53470"). InnerVolumeSpecName "kube-api-access-b9jq2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:25:10.822714 kubelet[3276]: I0314 00:25:10.822684 3276 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27e9d62-9278-4af0-93ab-bef306e53470-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c27e9d62-9278-4af0-93ab-bef306e53470" (UID: "c27e9d62-9278-4af0-93ab-bef306e53470"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:25:10.825667 systemd[1]: var-lib-kubelet-pods-c27e9d62\x2d9278\x2d4af0\x2d93ab\x2dbef306e53470-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db9jq2.mount: Deactivated successfully. Mar 14 00:25:10.825951 systemd[1]: var-lib-kubelet-pods-c27e9d62\x2d9278\x2d4af0\x2d93ab\x2dbef306e53470-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 14 00:25:10.918525 kubelet[3276]: I0314 00:25:10.918307 3276 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c27e9d62-9278-4af0-93ab-bef306e53470-nginx-config\") on node \"ci-4081.3.6-n-2b39e14e44\" DevicePath \"\"" Mar 14 00:25:10.918525 kubelet[3276]: I0314 00:25:10.918377 3276 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c27e9d62-9278-4af0-93ab-bef306e53470-whisker-ca-bundle\") on node \"ci-4081.3.6-n-2b39e14e44\" DevicePath \"\"" Mar 14 00:25:10.918525 kubelet[3276]: I0314 00:25:10.918391 3276 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c27e9d62-9278-4af0-93ab-bef306e53470-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-2b39e14e44\" DevicePath \"\"" Mar 14 00:25:10.918525 kubelet[3276]: I0314 00:25:10.918404 3276 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b9jq2\" (UniqueName: \"kubernetes.io/projected/c27e9d62-9278-4af0-93ab-bef306e53470-kube-api-access-b9jq2\") on node \"ci-4081.3.6-n-2b39e14e44\" DevicePath \"\"" Mar 14 00:25:11.662598 systemd[1]: Removed slice kubepods-besteffort-podc27e9d62_9278_4af0_93ab_bef306e53470.slice - libcontainer container kubepods-besteffort-podc27e9d62_9278_4af0_93ab_bef306e53470.slice. Mar 14 00:25:11.666305 kernel: calico-node[4669]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 14 00:25:11.772035 systemd[1]: Created slice kubepods-besteffort-podb7525d7b_5fbb_49b2_8e97_fedba39aaec1.slice - libcontainer container kubepods-besteffort-podb7525d7b_5fbb_49b2_8e97_fedba39aaec1.slice. 
Mar 14 00:25:11.824118 kubelet[3276]: I0314 00:25:11.823793 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b7525d7b-5fbb-49b2-8e97-fedba39aaec1-whisker-backend-key-pair\") pod \"whisker-599f79b775-cd2k7\" (UID: \"b7525d7b-5fbb-49b2-8e97-fedba39aaec1\") " pod="calico-system/whisker-599f79b775-cd2k7" Mar 14 00:25:11.824118 kubelet[3276]: I0314 00:25:11.823864 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5n7t\" (UniqueName: \"kubernetes.io/projected/b7525d7b-5fbb-49b2-8e97-fedba39aaec1-kube-api-access-r5n7t\") pod \"whisker-599f79b775-cd2k7\" (UID: \"b7525d7b-5fbb-49b2-8e97-fedba39aaec1\") " pod="calico-system/whisker-599f79b775-cd2k7" Mar 14 00:25:11.824118 kubelet[3276]: I0314 00:25:11.823899 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b7525d7b-5fbb-49b2-8e97-fedba39aaec1-nginx-config\") pod \"whisker-599f79b775-cd2k7\" (UID: \"b7525d7b-5fbb-49b2-8e97-fedba39aaec1\") " pod="calico-system/whisker-599f79b775-cd2k7" Mar 14 00:25:11.824118 kubelet[3276]: I0314 00:25:11.823933 3276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7525d7b-5fbb-49b2-8e97-fedba39aaec1-whisker-ca-bundle\") pod \"whisker-599f79b775-cd2k7\" (UID: \"b7525d7b-5fbb-49b2-8e97-fedba39aaec1\") " pod="calico-system/whisker-599f79b775-cd2k7" Mar 14 00:25:12.078138 containerd[1710]: time="2026-03-14T00:25:12.078080013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-599f79b775-cd2k7,Uid:b7525d7b-5fbb-49b2-8e97-fedba39aaec1,Namespace:calico-system,Attempt:0,}" Mar 14 00:25:12.268583 systemd-networkd[1514]: cali3f47e00a125: Link UP Mar 14 00:25:12.268861 systemd-networkd[1514]: 
cali3f47e00a125: Gained carrier Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.161 [INFO][4714] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2b39e14e44-k8s-whisker--599f79b775--cd2k7-eth0 whisker-599f79b775- calico-system b7525d7b-5fbb-49b2-8e97-fedba39aaec1 938 0 2026-03-14 00:25:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:599f79b775 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-2b39e14e44 whisker-599f79b775-cd2k7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3f47e00a125 [] [] }} ContainerID="902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" Namespace="calico-system" Pod="whisker-599f79b775-cd2k7" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-whisker--599f79b775--cd2k7-" Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.162 [INFO][4714] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" Namespace="calico-system" Pod="whisker-599f79b775-cd2k7" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-whisker--599f79b775--cd2k7-eth0" Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.198 [INFO][4725] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" HandleID="k8s-pod-network.902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" Workload="ci--4081.3.6--n--2b39e14e44-k8s-whisker--599f79b775--cd2k7-eth0" Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.207 [INFO][4725] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" 
HandleID="k8s-pod-network.902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" Workload="ci--4081.3.6--n--2b39e14e44-k8s-whisker--599f79b775--cd2k7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7a40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2b39e14e44", "pod":"whisker-599f79b775-cd2k7", "timestamp":"2026-03-14 00:25:12.198659356 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2b39e14e44", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003609a0)} Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.207 [INFO][4725] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.207 [INFO][4725] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.207 [INFO][4725] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2b39e14e44' Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.211 [INFO][4725] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.217 [INFO][4725] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.222 [INFO][4725] ipam/ipam.go 526: Trying affinity for 192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.225 [INFO][4725] ipam/ipam.go 160: Attempting to load block cidr=192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.229 [INFO][4725] ipam/ipam.go 
237: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.229 [INFO][4725] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.232 [INFO][4725] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.237 [INFO][4725] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.248 [INFO][4725] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.3.65/26] block=192.168.3.64/26 handle="k8s-pod-network.902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.248 [INFO][4725] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.3.65/26] handle="k8s-pod-network.902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.248 [INFO][4725] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:25:12.310404 containerd[1710]: 2026-03-14 00:25:12.248 [INFO][4725] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.3.65/26] IPv6=[] ContainerID="902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" HandleID="k8s-pod-network.902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" Workload="ci--4081.3.6--n--2b39e14e44-k8s-whisker--599f79b775--cd2k7-eth0" Mar 14 00:25:12.313515 containerd[1710]: 2026-03-14 00:25:12.252 [INFO][4714] cni-plugin/k8s.go 418: Populated endpoint ContainerID="902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" Namespace="calico-system" Pod="whisker-599f79b775-cd2k7" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-whisker--599f79b775--cd2k7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-whisker--599f79b775--cd2k7-eth0", GenerateName:"whisker-599f79b775-", Namespace:"calico-system", SelfLink:"", UID:"b7525d7b-5fbb-49b2-8e97-fedba39aaec1", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"599f79b775", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"", Pod:"whisker-599f79b775-cd2k7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.3.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali3f47e00a125", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:12.313515 containerd[1710]: 2026-03-14 00:25:12.252 [INFO][4714] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.65/32] ContainerID="902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" Namespace="calico-system" Pod="whisker-599f79b775-cd2k7" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-whisker--599f79b775--cd2k7-eth0" Mar 14 00:25:12.313515 containerd[1710]: 2026-03-14 00:25:12.252 [INFO][4714] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f47e00a125 ContainerID="902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" Namespace="calico-system" Pod="whisker-599f79b775-cd2k7" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-whisker--599f79b775--cd2k7-eth0" Mar 14 00:25:12.313515 containerd[1710]: 2026-03-14 00:25:12.269 [INFO][4714] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" Namespace="calico-system" Pod="whisker-599f79b775-cd2k7" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-whisker--599f79b775--cd2k7-eth0" Mar 14 00:25:12.313515 containerd[1710]: 2026-03-14 00:25:12.271 [INFO][4714] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" Namespace="calico-system" Pod="whisker-599f79b775-cd2k7" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-whisker--599f79b775--cd2k7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-whisker--599f79b775--cd2k7-eth0", GenerateName:"whisker-599f79b775-", Namespace:"calico-system", SelfLink:"", UID:"b7525d7b-5fbb-49b2-8e97-fedba39aaec1", 
ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 25, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"599f79b775", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a", Pod:"whisker-599f79b775-cd2k7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.3.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3f47e00a125", MAC:"22:38:d2:e8:68:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:12.313515 containerd[1710]: 2026-03-14 00:25:12.306 [INFO][4714] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a" Namespace="calico-system" Pod="whisker-599f79b775-cd2k7" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-whisker--599f79b775--cd2k7-eth0" Mar 14 00:25:12.339986 systemd-networkd[1514]: vxlan.calico: Link UP Mar 14 00:25:12.339997 systemd-networkd[1514]: vxlan.calico: Gained carrier Mar 14 00:25:12.377596 kubelet[3276]: I0314 00:25:12.376988 3276 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c27e9d62-9278-4af0-93ab-bef306e53470" path="/var/lib/kubelet/pods/c27e9d62-9278-4af0-93ab-bef306e53470/volumes" Mar 14 00:25:12.377887 containerd[1710]: time="2026-03-14T00:25:12.377533941Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:12.377993 containerd[1710]: time="2026-03-14T00:25:12.377847945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:12.378149 containerd[1710]: time="2026-03-14T00:25:12.378058148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:12.378450 containerd[1710]: time="2026-03-14T00:25:12.378369253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:12.413130 systemd[1]: Started cri-containerd-902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a.scope - libcontainer container 902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a. Mar 14 00:25:12.483106 containerd[1710]: time="2026-03-14T00:25:12.483043966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-599f79b775-cd2k7,Uid:b7525d7b-5fbb-49b2-8e97-fedba39aaec1,Namespace:calico-system,Attempt:0,} returns sandbox id \"902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a\"" Mar 14 00:25:12.485622 containerd[1710]: time="2026-03-14T00:25:12.485384500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 14 00:25:13.869508 systemd-networkd[1514]: vxlan.calico: Gained IPv6LL Mar 14 00:25:13.877362 containerd[1710]: time="2026-03-14T00:25:13.877312816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:13.880317 containerd[1710]: time="2026-03-14T00:25:13.880189758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 14 00:25:13.883208 containerd[1710]: 
time="2026-03-14T00:25:13.883137200Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:13.894085 containerd[1710]: time="2026-03-14T00:25:13.894017858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:13.895091 containerd[1710]: time="2026-03-14T00:25:13.894838569Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.409411569s" Mar 14 00:25:13.895091 containerd[1710]: time="2026-03-14T00:25:13.894881670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 14 00:25:13.903135 containerd[1710]: time="2026-03-14T00:25:13.903085689Z" level=info msg="CreateContainer within sandbox \"902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 14 00:25:13.942873 containerd[1710]: time="2026-03-14T00:25:13.942821563Z" level=info msg="CreateContainer within sandbox \"902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"dec5f7b73598dc21dca019274a9d0c360c29a577b6b4732728d73fda91c12695\"" Mar 14 00:25:13.943794 containerd[1710]: time="2026-03-14T00:25:13.943726076Z" level=info msg="StartContainer for \"dec5f7b73598dc21dca019274a9d0c360c29a577b6b4732728d73fda91c12695\"" Mar 14 00:25:13.984387 
systemd[1]: Started cri-containerd-dec5f7b73598dc21dca019274a9d0c360c29a577b6b4732728d73fda91c12695.scope - libcontainer container dec5f7b73598dc21dca019274a9d0c360c29a577b6b4732728d73fda91c12695. Mar 14 00:25:14.031403 containerd[1710]: time="2026-03-14T00:25:14.030463129Z" level=info msg="StartContainer for \"dec5f7b73598dc21dca019274a9d0c360c29a577b6b4732728d73fda91c12695\" returns successfully" Mar 14 00:25:14.034729 containerd[1710]: time="2026-03-14T00:25:14.034627890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 14 00:25:14.061461 systemd-networkd[1514]: cali3f47e00a125: Gained IPv6LL Mar 14 00:25:15.966625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2874674464.mount: Deactivated successfully. Mar 14 00:25:16.022004 containerd[1710]: time="2026-03-14T00:25:16.021951459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:16.024545 containerd[1710]: time="2026-03-14T00:25:16.024468605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 14 00:25:16.028109 containerd[1710]: time="2026-03-14T00:25:16.028047770Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:16.032439 containerd[1710]: time="2026-03-14T00:25:16.032375649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:16.033335 containerd[1710]: time="2026-03-14T00:25:16.033157064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo 
tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.998473374s" Mar 14 00:25:16.033335 containerd[1710]: time="2026-03-14T00:25:16.033199365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 14 00:25:16.041024 containerd[1710]: time="2026-03-14T00:25:16.040980407Z" level=info msg="CreateContainer within sandbox \"902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 14 00:25:16.077937 containerd[1710]: time="2026-03-14T00:25:16.077878882Z" level=info msg="CreateContainer within sandbox \"902d03b82ab166ac20f031c9bb0255830f9ce31f8c2ff2237924a020c2fe019a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"de6fbf37241eff05b8b43ef4525a00f308ab6b7a29888bd2e6ff3d67e8869a32\"" Mar 14 00:25:16.080153 containerd[1710]: time="2026-03-14T00:25:16.078590495Z" level=info msg="StartContainer for \"de6fbf37241eff05b8b43ef4525a00f308ab6b7a29888bd2e6ff3d67e8869a32\"" Mar 14 00:25:16.114391 systemd[1]: Started cri-containerd-de6fbf37241eff05b8b43ef4525a00f308ab6b7a29888bd2e6ff3d67e8869a32.scope - libcontainer container de6fbf37241eff05b8b43ef4525a00f308ab6b7a29888bd2e6ff3d67e8869a32. 
Mar 14 00:25:16.163797 containerd[1710]: time="2026-03-14T00:25:16.163746153Z" level=info msg="StartContainer for \"de6fbf37241eff05b8b43ef4525a00f308ab6b7a29888bd2e6ff3d67e8869a32\" returns successfully" Mar 14 00:25:16.690264 kubelet[3276]: I0314 00:25:16.688745 3276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-599f79b775-cd2k7" podStartSLOduration=2.139515868 podStartE2EDuration="5.688722657s" podCreationTimestamp="2026-03-14 00:25:11 +0000 UTC" firstStartedPulling="2026-03-14 00:25:12.484941493 +0000 UTC m=+54.254617371" lastFinishedPulling="2026-03-14 00:25:16.034148382 +0000 UTC m=+57.803824160" observedRunningTime="2026-03-14 00:25:16.687780539 +0000 UTC m=+58.457456417" watchObservedRunningTime="2026-03-14 00:25:16.688722657 +0000 UTC m=+58.458398435" Mar 14 00:25:18.359976 containerd[1710]: time="2026-03-14T00:25:18.359912729Z" level=info msg="StopPodSandbox for \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\"" Mar 14 00:25:18.428288 containerd[1710]: 2026-03-14 00:25:18.396 [WARNING][4995] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-whisker--b9d96df74--gq7fr-eth0" Mar 14 00:25:18.428288 containerd[1710]: 2026-03-14 00:25:18.396 [INFO][4995] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Mar 14 00:25:18.428288 containerd[1710]: 2026-03-14 00:25:18.396 [INFO][4995] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" iface="eth0" netns="" Mar 14 00:25:18.428288 containerd[1710]: 2026-03-14 00:25:18.396 [INFO][4995] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Mar 14 00:25:18.428288 containerd[1710]: 2026-03-14 00:25:18.396 [INFO][4995] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Mar 14 00:25:18.428288 containerd[1710]: 2026-03-14 00:25:18.417 [INFO][5005] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" HandleID="k8s-pod-network.e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Workload="ci--4081.3.6--n--2b39e14e44-k8s-whisker--b9d96df74--gq7fr-eth0" Mar 14 00:25:18.428288 containerd[1710]: 2026-03-14 00:25:18.417 [INFO][5005] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:18.428288 containerd[1710]: 2026-03-14 00:25:18.417 [INFO][5005] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:18.428288 containerd[1710]: 2026-03-14 00:25:18.424 [WARNING][5005] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" HandleID="k8s-pod-network.e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Workload="ci--4081.3.6--n--2b39e14e44-k8s-whisker--b9d96df74--gq7fr-eth0" Mar 14 00:25:18.428288 containerd[1710]: 2026-03-14 00:25:18.424 [INFO][5005] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" HandleID="k8s-pod-network.e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Workload="ci--4081.3.6--n--2b39e14e44-k8s-whisker--b9d96df74--gq7fr-eth0" Mar 14 00:25:18.428288 containerd[1710]: 2026-03-14 00:25:18.426 [INFO][5005] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:18.428288 containerd[1710]: 2026-03-14 00:25:18.427 [INFO][4995] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Mar 14 00:25:18.429362 containerd[1710]: time="2026-03-14T00:25:18.428377382Z" level=info msg="TearDown network for sandbox \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\" successfully" Mar 14 00:25:18.429362 containerd[1710]: time="2026-03-14T00:25:18.428412183Z" level=info msg="StopPodSandbox for \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\" returns successfully" Mar 14 00:25:18.429362 containerd[1710]: time="2026-03-14T00:25:18.429241598Z" level=info msg="RemovePodSandbox for \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\"" Mar 14 00:25:18.429362 containerd[1710]: time="2026-03-14T00:25:18.429291199Z" level=info msg="Forcibly stopping sandbox \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\"" Mar 14 00:25:18.495911 containerd[1710]: 2026-03-14 00:25:18.464 [WARNING][5019] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-whisker--b9d96df74--gq7fr-eth0" Mar 14 00:25:18.495911 containerd[1710]: 2026-03-14 00:25:18.465 [INFO][5019] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Mar 14 00:25:18.495911 containerd[1710]: 2026-03-14 00:25:18.465 [INFO][5019] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" iface="eth0" netns="" Mar 14 00:25:18.495911 containerd[1710]: 2026-03-14 00:25:18.465 [INFO][5019] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Mar 14 00:25:18.495911 containerd[1710]: 2026-03-14 00:25:18.465 [INFO][5019] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Mar 14 00:25:18.495911 containerd[1710]: 2026-03-14 00:25:18.485 [INFO][5027] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" HandleID="k8s-pod-network.e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Workload="ci--4081.3.6--n--2b39e14e44-k8s-whisker--b9d96df74--gq7fr-eth0" Mar 14 00:25:18.495911 containerd[1710]: 2026-03-14 00:25:18.486 [INFO][5027] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:18.495911 containerd[1710]: 2026-03-14 00:25:18.486 [INFO][5027] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:18.495911 containerd[1710]: 2026-03-14 00:25:18.492 [WARNING][5027] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" HandleID="k8s-pod-network.e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Workload="ci--4081.3.6--n--2b39e14e44-k8s-whisker--b9d96df74--gq7fr-eth0" Mar 14 00:25:18.495911 containerd[1710]: 2026-03-14 00:25:18.492 [INFO][5027] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" HandleID="k8s-pod-network.e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Workload="ci--4081.3.6--n--2b39e14e44-k8s-whisker--b9d96df74--gq7fr-eth0" Mar 14 00:25:18.495911 containerd[1710]: 2026-03-14 00:25:18.493 [INFO][5027] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:18.495911 containerd[1710]: 2026-03-14 00:25:18.494 [INFO][5019] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007" Mar 14 00:25:18.495911 containerd[1710]: time="2026-03-14T00:25:18.495888117Z" level=info msg="TearDown network for sandbox \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\" successfully" Mar 14 00:25:18.510457 containerd[1710]: time="2026-03-14T00:25:18.510410883Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:25:18.510603 containerd[1710]: time="2026-03-14T00:25:18.510489084Z" level=info msg="RemovePodSandbox \"e95ae0aa41eb7fa5bae86e411f615751fc727e39718a5c3a3a95f9af01caf007\" returns successfully" Mar 14 00:25:20.373205 containerd[1710]: time="2026-03-14T00:25:20.372842154Z" level=info msg="StopPodSandbox for \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\"" Mar 14 00:25:20.374250 containerd[1710]: time="2026-03-14T00:25:20.374003975Z" level=info msg="StopPodSandbox for \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\"" Mar 14 00:25:20.490861 containerd[1710]: 2026-03-14 00:25:20.440 [INFO][5051] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Mar 14 00:25:20.490861 containerd[1710]: 2026-03-14 00:25:20.441 [INFO][5051] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" iface="eth0" netns="/var/run/netns/cni-073e6d38-c70d-2315-ee75-659dde0b148c" Mar 14 00:25:20.490861 containerd[1710]: 2026-03-14 00:25:20.441 [INFO][5051] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" iface="eth0" netns="/var/run/netns/cni-073e6d38-c70d-2315-ee75-659dde0b148c" Mar 14 00:25:20.490861 containerd[1710]: 2026-03-14 00:25:20.441 [INFO][5051] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" iface="eth0" netns="/var/run/netns/cni-073e6d38-c70d-2315-ee75-659dde0b148c" Mar 14 00:25:20.490861 containerd[1710]: 2026-03-14 00:25:20.441 [INFO][5051] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Mar 14 00:25:20.490861 containerd[1710]: 2026-03-14 00:25:20.441 [INFO][5051] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Mar 14 00:25:20.490861 containerd[1710]: 2026-03-14 00:25:20.479 [INFO][5065] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" HandleID="k8s-pod-network.adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Workload="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" Mar 14 00:25:20.490861 containerd[1710]: 2026-03-14 00:25:20.480 [INFO][5065] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:20.490861 containerd[1710]: 2026-03-14 00:25:20.480 [INFO][5065] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:20.490861 containerd[1710]: 2026-03-14 00:25:20.486 [WARNING][5065] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" HandleID="k8s-pod-network.adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Workload="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" Mar 14 00:25:20.490861 containerd[1710]: 2026-03-14 00:25:20.486 [INFO][5065] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" HandleID="k8s-pod-network.adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Workload="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" Mar 14 00:25:20.490861 containerd[1710]: 2026-03-14 00:25:20.487 [INFO][5065] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:20.490861 containerd[1710]: 2026-03-14 00:25:20.489 [INFO][5051] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Mar 14 00:25:20.495508 containerd[1710]: time="2026-03-14T00:25:20.495344295Z" level=info msg="TearDown network for sandbox \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\" successfully" Mar 14 00:25:20.495508 containerd[1710]: time="2026-03-14T00:25:20.495389996Z" level=info msg="StopPodSandbox for \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\" returns successfully" Mar 14 00:25:20.497316 systemd[1]: run-netns-cni\x2d073e6d38\x2dc70d\x2d2315\x2dee75\x2d659dde0b148c.mount: Deactivated successfully. 
Mar 14 00:25:20.498615 containerd[1710]: time="2026-03-14T00:25:20.497857041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-zk5wg,Uid:d1646e2c-7985-4065-908a-efcf39003c75,Namespace:calico-system,Attempt:1,}" Mar 14 00:25:20.505393 containerd[1710]: 2026-03-14 00:25:20.446 [INFO][5052] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Mar 14 00:25:20.505393 containerd[1710]: 2026-03-14 00:25:20.447 [INFO][5052] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" iface="eth0" netns="/var/run/netns/cni-f831ca07-3caa-1cd0-d52f-10ac6d52df22" Mar 14 00:25:20.505393 containerd[1710]: 2026-03-14 00:25:20.448 [INFO][5052] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" iface="eth0" netns="/var/run/netns/cni-f831ca07-3caa-1cd0-d52f-10ac6d52df22" Mar 14 00:25:20.505393 containerd[1710]: 2026-03-14 00:25:20.449 [INFO][5052] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" iface="eth0" netns="/var/run/netns/cni-f831ca07-3caa-1cd0-d52f-10ac6d52df22" Mar 14 00:25:20.505393 containerd[1710]: 2026-03-14 00:25:20.449 [INFO][5052] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Mar 14 00:25:20.505393 containerd[1710]: 2026-03-14 00:25:20.449 [INFO][5052] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Mar 14 00:25:20.505393 containerd[1710]: 2026-03-14 00:25:20.479 [INFO][5070] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" HandleID="k8s-pod-network.7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" Mar 14 00:25:20.505393 containerd[1710]: 2026-03-14 00:25:20.480 [INFO][5070] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:20.505393 containerd[1710]: 2026-03-14 00:25:20.487 [INFO][5070] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:20.505393 containerd[1710]: 2026-03-14 00:25:20.498 [WARNING][5070] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" HandleID="k8s-pod-network.7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" Mar 14 00:25:20.505393 containerd[1710]: 2026-03-14 00:25:20.498 [INFO][5070] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" HandleID="k8s-pod-network.7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" Mar 14 00:25:20.505393 containerd[1710]: 2026-03-14 00:25:20.501 [INFO][5070] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:20.505393 containerd[1710]: 2026-03-14 00:25:20.502 [INFO][5052] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Mar 14 00:25:20.507830 containerd[1710]: time="2026-03-14T00:25:20.506349996Z" level=info msg="TearDown network for sandbox \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\" successfully" Mar 14 00:25:20.507830 containerd[1710]: time="2026-03-14T00:25:20.506386697Z" level=info msg="StopPodSandbox for \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\" returns successfully" Mar 14 00:25:20.509686 systemd[1]: run-netns-cni\x2df831ca07\x2d3caa\x2d1cd0\x2dd52f\x2d10ac6d52df22.mount: Deactivated successfully. 
Mar 14 00:25:20.511655 containerd[1710]: time="2026-03-14T00:25:20.511588092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j82zn,Uid:66e0aa2f-87cf-44bf-9449-a14d5b459508,Namespace:kube-system,Attempt:1,}" Mar 14 00:25:20.679555 systemd-networkd[1514]: calif2625cb4e28: Link UP Mar 14 00:25:20.683754 systemd-networkd[1514]: calif2625cb4e28: Gained carrier Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.589 [INFO][5078] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0 goldmane-5b85766d88- calico-system d1646e2c-7985-4065-908a-efcf39003c75 977 0 2026-03-14 00:24:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-2b39e14e44 goldmane-5b85766d88-zk5wg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif2625cb4e28 [] [] }} ContainerID="7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" Namespace="calico-system" Pod="goldmane-5b85766d88-zk5wg" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-" Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.589 [INFO][5078] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" Namespace="calico-system" Pod="goldmane-5b85766d88-zk5wg" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.626 [INFO][5100] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" 
HandleID="k8s-pod-network.7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" Workload="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.637 [INFO][5100] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" HandleID="k8s-pod-network.7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" Workload="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2b39e14e44", "pod":"goldmane-5b85766d88-zk5wg", "timestamp":"2026-03-14 00:25:20.6268282 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2b39e14e44", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.638 [INFO][5100] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.638 [INFO][5100] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.638 [INFO][5100] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2b39e14e44' Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.640 [INFO][5100] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.644 [INFO][5100] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.650 [INFO][5100] ipam/ipam.go 526: Trying affinity for 192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.652 [INFO][5100] ipam/ipam.go 160: Attempting to load block cidr=192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.654 [INFO][5100] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.654 [INFO][5100] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.656 [INFO][5100] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4 Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.664 [INFO][5100] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.671 [INFO][5100] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.3.66/26] block=192.168.3.64/26 handle="k8s-pod-network.7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.672 [INFO][5100] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.3.66/26] handle="k8s-pod-network.7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.672 [INFO][5100] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:20.713687 containerd[1710]: 2026-03-14 00:25:20.672 [INFO][5100] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.3.66/26] IPv6=[] ContainerID="7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" HandleID="k8s-pod-network.7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" Workload="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" Mar 14 00:25:20.716864 containerd[1710]: 2026-03-14 00:25:20.674 [INFO][5078] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" Namespace="calico-system" Pod="goldmane-5b85766d88-zk5wg" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d1646e2c-7985-4065-908a-efcf39003c75", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"", Pod:"goldmane-5b85766d88-zk5wg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.3.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif2625cb4e28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:20.716864 containerd[1710]: 2026-03-14 00:25:20.675 [INFO][5078] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.66/32] ContainerID="7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" Namespace="calico-system" Pod="goldmane-5b85766d88-zk5wg" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" Mar 14 00:25:20.716864 containerd[1710]: 2026-03-14 00:25:20.675 [INFO][5078] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2625cb4e28 ContainerID="7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" Namespace="calico-system" Pod="goldmane-5b85766d88-zk5wg" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" Mar 14 00:25:20.716864 containerd[1710]: 2026-03-14 00:25:20.684 [INFO][5078] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" Namespace="calico-system" Pod="goldmane-5b85766d88-zk5wg" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" Mar 14 00:25:20.716864 containerd[1710]: 2026-03-14 00:25:20.688 [INFO][5078] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" Namespace="calico-system" Pod="goldmane-5b85766d88-zk5wg" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d1646e2c-7985-4065-908a-efcf39003c75", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4", Pod:"goldmane-5b85766d88-zk5wg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.3.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif2625cb4e28", MAC:"8e:e5:ce:ec:2f:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:20.716864 containerd[1710]: 2026-03-14 00:25:20.709 [INFO][5078] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4" Namespace="calico-system" Pod="goldmane-5b85766d88-zk5wg" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" Mar 14 00:25:20.771727 containerd[1710]: time="2026-03-14T00:25:20.771593449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:20.776845 containerd[1710]: time="2026-03-14T00:25:20.775749325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:20.776845 containerd[1710]: time="2026-03-14T00:25:20.776681742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:20.777053 containerd[1710]: time="2026-03-14T00:25:20.776823144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:20.813929 systemd[1]: Started cri-containerd-7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4.scope - libcontainer container 7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4. 
Mar 14 00:25:20.823011 systemd-networkd[1514]: cali5033896f621: Link UP Mar 14 00:25:20.826512 systemd-networkd[1514]: cali5033896f621: Gained carrier Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.626 [INFO][5089] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0 coredns-674b8bbfcf- kube-system 66e0aa2f-87cf-44bf-9449-a14d5b459508 978 0 2026-03-14 00:24:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-2b39e14e44 coredns-674b8bbfcf-j82zn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5033896f621 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" Namespace="kube-system" Pod="coredns-674b8bbfcf-j82zn" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-" Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.626 [INFO][5089] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" Namespace="kube-system" Pod="coredns-674b8bbfcf-j82zn" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.662 [INFO][5109] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" HandleID="k8s-pod-network.1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.670 [INFO][5109] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" HandleID="k8s-pod-network.1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef8a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-2b39e14e44", "pod":"coredns-674b8bbfcf-j82zn", "timestamp":"2026-03-14 00:25:20.662728157 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2b39e14e44", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000393080)} Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.670 [INFO][5109] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.672 [INFO][5109] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.672 [INFO][5109] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2b39e14e44' Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.741 [INFO][5109] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.757 [INFO][5109] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.766 [INFO][5109] ipam/ipam.go 526: Trying affinity for 192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.768 [INFO][5109] ipam/ipam.go 160: Attempting to load block cidr=192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.773 [INFO][5109] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.773 [INFO][5109] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.779 [INFO][5109] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740 Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.787 [INFO][5109] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.802 [INFO][5109] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.3.67/26] block=192.168.3.64/26 handle="k8s-pod-network.1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.802 [INFO][5109] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.3.67/26] handle="k8s-pod-network.1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.802 [INFO][5109] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:20.862828 containerd[1710]: 2026-03-14 00:25:20.802 [INFO][5109] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.3.67/26] IPv6=[] ContainerID="1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" HandleID="k8s-pod-network.1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" Mar 14 00:25:20.864173 containerd[1710]: 2026-03-14 00:25:20.812 [INFO][5089] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" Namespace="kube-system" Pod="coredns-674b8bbfcf-j82zn" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"66e0aa2f-87cf-44bf-9449-a14d5b459508", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"", Pod:"coredns-674b8bbfcf-j82zn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5033896f621", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:20.864173 containerd[1710]: 2026-03-14 00:25:20.813 [INFO][5089] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.67/32] ContainerID="1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" Namespace="kube-system" Pod="coredns-674b8bbfcf-j82zn" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" Mar 14 00:25:20.864173 containerd[1710]: 2026-03-14 00:25:20.813 [INFO][5089] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5033896f621 ContainerID="1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" Namespace="kube-system" Pod="coredns-674b8bbfcf-j82zn" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" Mar 14 00:25:20.864173 containerd[1710]: 2026-03-14 00:25:20.829 [INFO][5089] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" Namespace="kube-system" Pod="coredns-674b8bbfcf-j82zn" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" Mar 14 00:25:20.864173 containerd[1710]: 2026-03-14 00:25:20.831 [INFO][5089] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" Namespace="kube-system" Pod="coredns-674b8bbfcf-j82zn" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"66e0aa2f-87cf-44bf-9449-a14d5b459508", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740", Pod:"coredns-674b8bbfcf-j82zn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5033896f621", MAC:"72:58:0a:50:2d:9f", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:20.864173 containerd[1710]: 2026-03-14 00:25:20.854 [INFO][5089] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740" Namespace="kube-system" Pod="coredns-674b8bbfcf-j82zn" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" Mar 14 00:25:20.906478 containerd[1710]: time="2026-03-14T00:25:20.906127110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:20.906478 containerd[1710]: time="2026-03-14T00:25:20.906197611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:20.906478 containerd[1710]: time="2026-03-14T00:25:20.906219412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:20.907308 containerd[1710]: time="2026-03-14T00:25:20.906619819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:20.951426 systemd[1]: Started cri-containerd-1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740.scope - libcontainer container 1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740. 
Mar 14 00:25:20.954673 containerd[1710]: time="2026-03-14T00:25:20.954573096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-zk5wg,Uid:d1646e2c-7985-4065-908a-efcf39003c75,Namespace:calico-system,Attempt:1,} returns sandbox id \"7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4\"" Mar 14 00:25:20.957621 containerd[1710]: time="2026-03-14T00:25:20.957582951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 14 00:25:21.011380 containerd[1710]: time="2026-03-14T00:25:21.011347335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j82zn,Uid:66e0aa2f-87cf-44bf-9449-a14d5b459508,Namespace:kube-system,Attempt:1,} returns sandbox id \"1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740\"" Mar 14 00:25:21.021011 containerd[1710]: time="2026-03-14T00:25:21.020961911Z" level=info msg="CreateContainer within sandbox \"1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:25:21.057844 containerd[1710]: time="2026-03-14T00:25:21.057789084Z" level=info msg="CreateContainer within sandbox \"1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db1745e37795c92b8452ebe189cc01d135943423b527947f129b09d066c93dd7\"" Mar 14 00:25:21.058845 containerd[1710]: time="2026-03-14T00:25:21.058801403Z" level=info msg="StartContainer for \"db1745e37795c92b8452ebe189cc01d135943423b527947f129b09d066c93dd7\"" Mar 14 00:25:21.084441 systemd[1]: Started cri-containerd-db1745e37795c92b8452ebe189cc01d135943423b527947f129b09d066c93dd7.scope - libcontainer container db1745e37795c92b8452ebe189cc01d135943423b527947f129b09d066c93dd7. 
Mar 14 00:25:21.118903 containerd[1710]: time="2026-03-14T00:25:21.118434594Z" level=info msg="StartContainer for \"db1745e37795c92b8452ebe189cc01d135943423b527947f129b09d066c93dd7\" returns successfully" Mar 14 00:25:21.726903 kubelet[3276]: I0314 00:25:21.726827 3276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-j82zn" podStartSLOduration=59.726804723 podStartE2EDuration="59.726804723s" podCreationTimestamp="2026-03-14 00:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:25:21.706476951 +0000 UTC m=+63.476152729" watchObservedRunningTime="2026-03-14 00:25:21.726804723 +0000 UTC m=+63.496480601" Mar 14 00:25:22.125556 systemd-networkd[1514]: cali5033896f621: Gained IPv6LL Mar 14 00:25:22.383449 containerd[1710]: time="2026-03-14T00:25:22.382449818Z" level=info msg="StopPodSandbox for \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\"" Mar 14 00:25:22.499689 containerd[1710]: 2026-03-14 00:25:22.447 [INFO][5288] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" Mar 14 00:25:22.499689 containerd[1710]: 2026-03-14 00:25:22.447 [INFO][5288] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" iface="eth0" netns="/var/run/netns/cni-22dfaea0-2e53-b39c-1282-8600c999d1bb" Mar 14 00:25:22.499689 containerd[1710]: 2026-03-14 00:25:22.448 [INFO][5288] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" iface="eth0" netns="/var/run/netns/cni-22dfaea0-2e53-b39c-1282-8600c999d1bb" Mar 14 00:25:22.499689 containerd[1710]: 2026-03-14 00:25:22.449 [INFO][5288] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" iface="eth0" netns="/var/run/netns/cni-22dfaea0-2e53-b39c-1282-8600c999d1bb" Mar 14 00:25:22.499689 containerd[1710]: 2026-03-14 00:25:22.449 [INFO][5288] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" Mar 14 00:25:22.499689 containerd[1710]: 2026-03-14 00:25:22.449 [INFO][5288] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" Mar 14 00:25:22.499689 containerd[1710]: 2026-03-14 00:25:22.489 [INFO][5295] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" HandleID="k8s-pod-network.e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0" Mar 14 00:25:22.499689 containerd[1710]: 2026-03-14 00:25:22.489 [INFO][5295] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:22.499689 containerd[1710]: 2026-03-14 00:25:22.489 [INFO][5295] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:22.499689 containerd[1710]: 2026-03-14 00:25:22.495 [WARNING][5295] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" HandleID="k8s-pod-network.e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0" Mar 14 00:25:22.499689 containerd[1710]: 2026-03-14 00:25:22.495 [INFO][5295] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" HandleID="k8s-pod-network.e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0" Mar 14 00:25:22.499689 containerd[1710]: 2026-03-14 00:25:22.496 [INFO][5295] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:22.499689 containerd[1710]: 2026-03-14 00:25:22.498 [INFO][5288] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" Mar 14 00:25:22.502041 containerd[1710]: time="2026-03-14T00:25:22.501982804Z" level=info msg="TearDown network for sandbox \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\" successfully" Mar 14 00:25:22.502183 containerd[1710]: time="2026-03-14T00:25:22.502050906Z" level=info msg="StopPodSandbox for \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\" returns successfully" Mar 14 00:25:22.504499 containerd[1710]: time="2026-03-14T00:25:22.504454850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cb8d5ccb-s59nv,Uid:9da3f336-22fe-472c-9456-c10265166894,Namespace:calico-system,Attempt:1,}" Mar 14 00:25:22.505487 systemd[1]: run-netns-cni\x2d22dfaea0\x2d2e53\x2db39c\x2d1282\x2d8600c999d1bb.mount: Deactivated successfully. 
Mar 14 00:25:22.638015 systemd-networkd[1514]: calif2625cb4e28: Gained IPv6LL Mar 14 00:25:22.722773 systemd-networkd[1514]: calic94868ecc81: Link UP Mar 14 00:25:22.723602 systemd-networkd[1514]: calic94868ecc81: Gained carrier Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.602 [INFO][5301] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0 calico-kube-controllers-65cb8d5ccb- calico-system 9da3f336-22fe-472c-9456-c10265166894 1004 0 2026-03-14 00:24:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65cb8d5ccb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-2b39e14e44 calico-kube-controllers-65cb8d5ccb-s59nv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic94868ecc81 [] [] }} ContainerID="7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" Namespace="calico-system" Pod="calico-kube-controllers-65cb8d5ccb-s59nv" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-" Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.602 [INFO][5301] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" Namespace="calico-system" Pod="calico-kube-controllers-65cb8d5ccb-s59nv" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0" Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.656 [INFO][5313] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" 
HandleID="k8s-pod-network.7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0" Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.667 [INFO][5313] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" HandleID="k8s-pod-network.7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e1a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2b39e14e44", "pod":"calico-kube-controllers-65cb8d5ccb-s59nv", "timestamp":"2026-03-14 00:25:22.656664134 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2b39e14e44", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188840)} Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.667 [INFO][5313] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.667 [INFO][5313] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.667 [INFO][5313] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2b39e14e44' Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.671 [INFO][5313] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.679 [INFO][5313] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.685 [INFO][5313] ipam/ipam.go 526: Trying affinity for 192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.693 [INFO][5313] ipam/ipam.go 160: Attempting to load block cidr=192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.696 [INFO][5313] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.696 [INFO][5313] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.698 [INFO][5313] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.708 [INFO][5313] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.717 [INFO][5313] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.3.68/26] block=192.168.3.64/26 handle="k8s-pod-network.7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.717 [INFO][5313] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.3.68/26] handle="k8s-pod-network.7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.717 [INFO][5313] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:22.745608 containerd[1710]: 2026-03-14 00:25:22.717 [INFO][5313] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.3.68/26] IPv6=[] ContainerID="7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" HandleID="k8s-pod-network.7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0" Mar 14 00:25:22.748427 containerd[1710]: 2026-03-14 00:25:22.719 [INFO][5301] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" Namespace="calico-system" Pod="calico-kube-controllers-65cb8d5ccb-s59nv" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0", GenerateName:"calico-kube-controllers-65cb8d5ccb-", Namespace:"calico-system", SelfLink:"", UID:"9da3f336-22fe-472c-9456-c10265166894", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"65cb8d5ccb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"", Pod:"calico-kube-controllers-65cb8d5ccb-s59nv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic94868ecc81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:22.748427 containerd[1710]: 2026-03-14 00:25:22.719 [INFO][5301] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.68/32] ContainerID="7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" Namespace="calico-system" Pod="calico-kube-controllers-65cb8d5ccb-s59nv" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0" Mar 14 00:25:22.748427 containerd[1710]: 2026-03-14 00:25:22.720 [INFO][5301] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic94868ecc81 ContainerID="7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" Namespace="calico-system" Pod="calico-kube-controllers-65cb8d5ccb-s59nv" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0" Mar 14 00:25:22.748427 containerd[1710]: 2026-03-14 00:25:22.724 [INFO][5301] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" Namespace="calico-system" 
Pod="calico-kube-controllers-65cb8d5ccb-s59nv" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0" Mar 14 00:25:22.748427 containerd[1710]: 2026-03-14 00:25:22.724 [INFO][5301] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" Namespace="calico-system" Pod="calico-kube-controllers-65cb8d5ccb-s59nv" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0", GenerateName:"calico-kube-controllers-65cb8d5ccb-", Namespace:"calico-system", SelfLink:"", UID:"9da3f336-22fe-472c-9456-c10265166894", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65cb8d5ccb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c", Pod:"calico-kube-controllers-65cb8d5ccb-s59nv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic94868ecc81", MAC:"4e:66:2a:03:93:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:22.748427 containerd[1710]: 2026-03-14 00:25:22.742 [INFO][5301] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c" Namespace="calico-system" Pod="calico-kube-controllers-65cb8d5ccb-s59nv" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0" Mar 14 00:25:22.941302 containerd[1710]: time="2026-03-14T00:25:22.940079019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:22.942194 containerd[1710]: time="2026-03-14T00:25:22.941999754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:22.942194 containerd[1710]: time="2026-03-14T00:25:22.942029155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:22.942194 containerd[1710]: time="2026-03-14T00:25:22.942128856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:22.973416 systemd[1]: Started cri-containerd-7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c.scope - libcontainer container 7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c. 
Mar 14 00:25:23.030454 containerd[1710]: time="2026-03-14T00:25:23.030385571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65cb8d5ccb-s59nv,Uid:9da3f336-22fe-472c-9456-c10265166894,Namespace:calico-system,Attempt:1,} returns sandbox id \"7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c\"" Mar 14 00:25:23.372559 containerd[1710]: time="2026-03-14T00:25:23.372311826Z" level=info msg="StopPodSandbox for \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\"" Mar 14 00:25:23.495391 containerd[1710]: 2026-03-14 00:25:23.442 [INFO][5412] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Mar 14 00:25:23.495391 containerd[1710]: 2026-03-14 00:25:23.442 [INFO][5412] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" iface="eth0" netns="/var/run/netns/cni-3bea62a3-1cf8-2e26-1b78-646ae53832b8" Mar 14 00:25:23.495391 containerd[1710]: 2026-03-14 00:25:23.443 [INFO][5412] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" iface="eth0" netns="/var/run/netns/cni-3bea62a3-1cf8-2e26-1b78-646ae53832b8" Mar 14 00:25:23.495391 containerd[1710]: 2026-03-14 00:25:23.443 [INFO][5412] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" iface="eth0" netns="/var/run/netns/cni-3bea62a3-1cf8-2e26-1b78-646ae53832b8" Mar 14 00:25:23.495391 containerd[1710]: 2026-03-14 00:25:23.443 [INFO][5412] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Mar 14 00:25:23.495391 containerd[1710]: 2026-03-14 00:25:23.444 [INFO][5412] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Mar 14 00:25:23.495391 containerd[1710]: 2026-03-14 00:25:23.478 [INFO][5419] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" HandleID="k8s-pod-network.799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" Mar 14 00:25:23.495391 containerd[1710]: 2026-03-14 00:25:23.478 [INFO][5419] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:23.495391 containerd[1710]: 2026-03-14 00:25:23.478 [INFO][5419] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:23.495391 containerd[1710]: 2026-03-14 00:25:23.489 [WARNING][5419] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" HandleID="k8s-pod-network.799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" Mar 14 00:25:23.495391 containerd[1710]: 2026-03-14 00:25:23.489 [INFO][5419] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" HandleID="k8s-pod-network.799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" Mar 14 00:25:23.495391 containerd[1710]: 2026-03-14 00:25:23.491 [INFO][5419] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:23.495391 containerd[1710]: 2026-03-14 00:25:23.493 [INFO][5412] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Mar 14 00:25:23.497439 containerd[1710]: time="2026-03-14T00:25:23.496641201Z" level=info msg="TearDown network for sandbox \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\" successfully" Mar 14 00:25:23.497439 containerd[1710]: time="2026-03-14T00:25:23.496707602Z" level=info msg="StopPodSandbox for \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\" returns successfully" Mar 14 00:25:23.498214 containerd[1710]: time="2026-03-14T00:25:23.498179629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b99f8b54-m46mb,Uid:a119f615-9d76-4da3-99fe-5e5a8a9503aa,Namespace:calico-system,Attempt:1,}" Mar 14 00:25:23.507102 systemd[1]: run-netns-cni\x2d3bea62a3\x2d1cf8\x2d2e26\x2d1b78\x2d646ae53832b8.mount: Deactivated successfully. 
Mar 14 00:25:23.726558 systemd-networkd[1514]: cali69b2e56aa94: Link UP Mar 14 00:25:23.729332 systemd-networkd[1514]: cali69b2e56aa94: Gained carrier Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.613 [INFO][5425] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0 calico-apiserver-8b99f8b54- calico-system a119f615-9d76-4da3-99fe-5e5a8a9503aa 1012 0 2026-03-14 00:24:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8b99f8b54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-2b39e14e44 calico-apiserver-8b99f8b54-m46mb eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali69b2e56aa94 [] [] }} ContainerID="52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" Namespace="calico-system" Pod="calico-apiserver-8b99f8b54-m46mb" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-" Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.613 [INFO][5425] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" Namespace="calico-system" Pod="calico-apiserver-8b99f8b54-m46mb" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.655 [INFO][5437] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" HandleID="k8s-pod-network.52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" Mar 14 00:25:23.756670 containerd[1710]: 
2026-03-14 00:25:23.666 [INFO][5437] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" HandleID="k8s-pod-network.52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2b39e14e44", "pod":"calico-apiserver-8b99f8b54-m46mb", "timestamp":"2026-03-14 00:25:23.65565681 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2b39e14e44", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003ed1e0)} Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.666 [INFO][5437] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.666 [INFO][5437] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.667 [INFO][5437] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2b39e14e44' Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.670 [INFO][5437] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.678 [INFO][5437] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.684 [INFO][5437] ipam/ipam.go 526: Trying affinity for 192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.687 [INFO][5437] ipam/ipam.go 160: Attempting to load block cidr=192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.690 [INFO][5437] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.690 [INFO][5437] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.691 [INFO][5437] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5 Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.698 [INFO][5437] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.712 [INFO][5437] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.3.69/26] block=192.168.3.64/26 handle="k8s-pod-network.52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.712 [INFO][5437] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.3.69/26] handle="k8s-pod-network.52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.712 [INFO][5437] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:23.756670 containerd[1710]: 2026-03-14 00:25:23.712 [INFO][5437] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.3.69/26] IPv6=[] ContainerID="52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" HandleID="k8s-pod-network.52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" Mar 14 00:25:23.757686 containerd[1710]: 2026-03-14 00:25:23.715 [INFO][5425] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" Namespace="calico-system" Pod="calico-apiserver-8b99f8b54-m46mb" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0", GenerateName:"calico-apiserver-8b99f8b54-", Namespace:"calico-system", SelfLink:"", UID:"a119f615-9d76-4da3-99fe-5e5a8a9503aa", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"8b99f8b54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"", Pod:"calico-apiserver-8b99f8b54-m46mb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali69b2e56aa94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:23.757686 containerd[1710]: 2026-03-14 00:25:23.715 [INFO][5425] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.69/32] ContainerID="52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" Namespace="calico-system" Pod="calico-apiserver-8b99f8b54-m46mb" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" Mar 14 00:25:23.757686 containerd[1710]: 2026-03-14 00:25:23.715 [INFO][5425] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69b2e56aa94 ContainerID="52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" Namespace="calico-system" Pod="calico-apiserver-8b99f8b54-m46mb" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" Mar 14 00:25:23.757686 containerd[1710]: 2026-03-14 00:25:23.731 [INFO][5425] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" Namespace="calico-system" Pod="calico-apiserver-8b99f8b54-m46mb" 
WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" Mar 14 00:25:23.757686 containerd[1710]: 2026-03-14 00:25:23.733 [INFO][5425] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" Namespace="calico-system" Pod="calico-apiserver-8b99f8b54-m46mb" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0", GenerateName:"calico-apiserver-8b99f8b54-", Namespace:"calico-system", SelfLink:"", UID:"a119f615-9d76-4da3-99fe-5e5a8a9503aa", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8b99f8b54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5", Pod:"calico-apiserver-8b99f8b54-m46mb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali69b2e56aa94", MAC:"36:8b:2f:49:ad:34", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:23.757686 containerd[1710]: 2026-03-14 00:25:23.753 [INFO][5425] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5" Namespace="calico-system" Pod="calico-apiserver-8b99f8b54-m46mb" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" Mar 14 00:25:23.815869 containerd[1710]: time="2026-03-14T00:25:23.813541845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:23.815869 containerd[1710]: time="2026-03-14T00:25:23.813643146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:23.815869 containerd[1710]: time="2026-03-14T00:25:23.813666247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:23.815869 containerd[1710]: time="2026-03-14T00:25:23.813807649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:23.845722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2150106725.mount: Deactivated successfully. Mar 14 00:25:23.857624 systemd[1]: Started cri-containerd-52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5.scope - libcontainer container 52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5. 
Mar 14 00:25:23.918860 containerd[1710]: time="2026-03-14T00:25:23.918707466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b99f8b54-m46mb,Uid:a119f615-9d76-4da3-99fe-5e5a8a9503aa,Namespace:calico-system,Attempt:1,} returns sandbox id \"52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5\"" Mar 14 00:25:24.373535 containerd[1710]: time="2026-03-14T00:25:24.373180402Z" level=info msg="StopPodSandbox for \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\"" Mar 14 00:25:24.375473 containerd[1710]: time="2026-03-14T00:25:24.375436039Z" level=info msg="StopPodSandbox for \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\"" Mar 14 00:25:24.376306 containerd[1710]: time="2026-03-14T00:25:24.376282453Z" level=info msg="StopPodSandbox for \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\"" Mar 14 00:25:24.392479 containerd[1710]: time="2026-03-14T00:25:24.391705205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:24.394975 containerd[1710]: time="2026-03-14T00:25:24.394931858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 14 00:25:24.400831 containerd[1710]: time="2026-03-14T00:25:24.400797554Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:24.412764 containerd[1710]: time="2026-03-14T00:25:24.412720349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:24.414322 containerd[1710]: time="2026-03-14T00:25:24.414282375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" 
with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.45655312s" Mar 14 00:25:24.414487 containerd[1710]: time="2026-03-14T00:25:24.414467878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 14 00:25:24.420083 containerd[1710]: time="2026-03-14T00:25:24.419820865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 14 00:25:24.426958 containerd[1710]: time="2026-03-14T00:25:24.426685077Z" level=info msg="CreateContainer within sandbox \"7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 14 00:25:24.478975 containerd[1710]: time="2026-03-14T00:25:24.478670228Z" level=info msg="CreateContainer within sandbox \"7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"003864a81fda186a7d096dbf674037a5eee33ba06dfc7ca6dca7f7da69f9ac44\"" Mar 14 00:25:24.480545 containerd[1710]: time="2026-03-14T00:25:24.479738946Z" level=info msg="StartContainer for \"003864a81fda186a7d096dbf674037a5eee33ba06dfc7ca6dca7f7da69f9ac44\"" Mar 14 00:25:24.554266 systemd[1]: Started cri-containerd-003864a81fda186a7d096dbf674037a5eee33ba06dfc7ca6dca7f7da69f9ac44.scope - libcontainer container 003864a81fda186a7d096dbf674037a5eee33ba06dfc7ca6dca7f7da69f9ac44. 
Mar 14 00:25:24.636436 containerd[1710]: 2026-03-14 00:25:24.498 [INFO][5532] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Mar 14 00:25:24.636436 containerd[1710]: 2026-03-14 00:25:24.498 [INFO][5532] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" iface="eth0" netns="/var/run/netns/cni-3008c454-e75c-72f7-cc1b-726421badfbb" Mar 14 00:25:24.636436 containerd[1710]: 2026-03-14 00:25:24.499 [INFO][5532] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" iface="eth0" netns="/var/run/netns/cni-3008c454-e75c-72f7-cc1b-726421badfbb" Mar 14 00:25:24.636436 containerd[1710]: 2026-03-14 00:25:24.506 [INFO][5532] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" iface="eth0" netns="/var/run/netns/cni-3008c454-e75c-72f7-cc1b-726421badfbb" Mar 14 00:25:24.636436 containerd[1710]: 2026-03-14 00:25:24.506 [INFO][5532] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Mar 14 00:25:24.636436 containerd[1710]: 2026-03-14 00:25:24.506 [INFO][5532] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Mar 14 00:25:24.636436 containerd[1710]: 2026-03-14 00:25:24.607 [INFO][5566] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" HandleID="k8s-pod-network.d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Workload="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" Mar 14 00:25:24.636436 containerd[1710]: 2026-03-14 00:25:24.608 [INFO][5566] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:24.636436 containerd[1710]: 2026-03-14 00:25:24.610 [INFO][5566] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:24.636436 containerd[1710]: 2026-03-14 00:25:24.625 [WARNING][5566] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" HandleID="k8s-pod-network.d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Workload="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" Mar 14 00:25:24.636436 containerd[1710]: 2026-03-14 00:25:24.625 [INFO][5566] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" HandleID="k8s-pod-network.d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Workload="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" Mar 14 00:25:24.636436 containerd[1710]: 2026-03-14 00:25:24.628 [INFO][5566] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:24.636436 containerd[1710]: 2026-03-14 00:25:24.631 [INFO][5532] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Mar 14 00:25:24.644980 containerd[1710]: time="2026-03-14T00:25:24.642432708Z" level=info msg="TearDown network for sandbox \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\" successfully" Mar 14 00:25:24.644980 containerd[1710]: time="2026-03-14T00:25:24.642474408Z" level=info msg="StopPodSandbox for \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\" returns successfully" Mar 14 00:25:24.644378 systemd[1]: run-netns-cni\x2d3008c454\x2de75c\x2d72f7\x2dcc1b\x2d726421badfbb.mount: Deactivated successfully. 
Mar 14 00:25:24.648683 containerd[1710]: time="2026-03-14T00:25:24.648011299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8xrtx,Uid:c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a,Namespace:calico-system,Attempt:1,}" Mar 14 00:25:24.656838 containerd[1710]: 2026-03-14 00:25:24.536 [INFO][5536] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Mar 14 00:25:24.656838 containerd[1710]: 2026-03-14 00:25:24.539 [INFO][5536] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" iface="eth0" netns="/var/run/netns/cni-e37bf404-30d2-08d2-3d88-85f209b636e0" Mar 14 00:25:24.656838 containerd[1710]: 2026-03-14 00:25:24.543 [INFO][5536] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" iface="eth0" netns="/var/run/netns/cni-e37bf404-30d2-08d2-3d88-85f209b636e0" Mar 14 00:25:24.656838 containerd[1710]: 2026-03-14 00:25:24.545 [INFO][5536] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" iface="eth0" netns="/var/run/netns/cni-e37bf404-30d2-08d2-3d88-85f209b636e0" Mar 14 00:25:24.656838 containerd[1710]: 2026-03-14 00:25:24.545 [INFO][5536] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Mar 14 00:25:24.656838 containerd[1710]: 2026-03-14 00:25:24.545 [INFO][5536] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Mar 14 00:25:24.656838 containerd[1710]: 2026-03-14 00:25:24.625 [INFO][5577] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" HandleID="k8s-pod-network.59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" Mar 14 00:25:24.656838 containerd[1710]: 2026-03-14 00:25:24.626 [INFO][5577] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:24.656838 containerd[1710]: 2026-03-14 00:25:24.628 [INFO][5577] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:24.656838 containerd[1710]: 2026-03-14 00:25:24.644 [WARNING][5577] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" HandleID="k8s-pod-network.59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" Mar 14 00:25:24.656838 containerd[1710]: 2026-03-14 00:25:24.644 [INFO][5577] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" HandleID="k8s-pod-network.59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" Mar 14 00:25:24.656838 containerd[1710]: 2026-03-14 00:25:24.646 [INFO][5577] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:24.656838 containerd[1710]: 2026-03-14 00:25:24.650 [INFO][5536] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Mar 14 00:25:24.658069 containerd[1710]: time="2026-03-14T00:25:24.657598356Z" level=info msg="TearDown network for sandbox \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\" successfully" Mar 14 00:25:24.658069 containerd[1710]: time="2026-03-14T00:25:24.657629056Z" level=info msg="StopPodSandbox for \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\" returns successfully" Mar 14 00:25:24.663761 systemd[1]: run-netns-cni\x2de37bf404\x2d30d2\x2d08d2\x2d3d88\x2d85f209b636e0.mount: Deactivated successfully. 
Mar 14 00:25:24.665737 containerd[1710]: time="2026-03-14T00:25:24.664197664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pbd56,Uid:e9f998d1-a481-416b-8656-d361e4f465a0,Namespace:kube-system,Attempt:1,}" Mar 14 00:25:24.682340 containerd[1710]: time="2026-03-14T00:25:24.681988655Z" level=info msg="StartContainer for \"003864a81fda186a7d096dbf674037a5eee33ba06dfc7ca6dca7f7da69f9ac44\" returns successfully" Mar 14 00:25:24.686958 containerd[1710]: 2026-03-14 00:25:24.530 [INFO][5533] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Mar 14 00:25:24.686958 containerd[1710]: 2026-03-14 00:25:24.531 [INFO][5533] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" iface="eth0" netns="/var/run/netns/cni-5174bf70-4a4b-06c1-b6f3-678926a41419" Mar 14 00:25:24.686958 containerd[1710]: 2026-03-14 00:25:24.543 [INFO][5533] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" iface="eth0" netns="/var/run/netns/cni-5174bf70-4a4b-06c1-b6f3-678926a41419" Mar 14 00:25:24.686958 containerd[1710]: 2026-03-14 00:25:24.546 [INFO][5533] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" iface="eth0" netns="/var/run/netns/cni-5174bf70-4a4b-06c1-b6f3-678926a41419" Mar 14 00:25:24.686958 containerd[1710]: 2026-03-14 00:25:24.546 [INFO][5533] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Mar 14 00:25:24.686958 containerd[1710]: 2026-03-14 00:25:24.546 [INFO][5533] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Mar 14 00:25:24.686958 containerd[1710]: 2026-03-14 00:25:24.651 [INFO][5576] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" HandleID="k8s-pod-network.c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" Mar 14 00:25:24.686958 containerd[1710]: 2026-03-14 00:25:24.653 [INFO][5576] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:24.686958 containerd[1710]: 2026-03-14 00:25:24.653 [INFO][5576] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:25:24.686958 containerd[1710]: 2026-03-14 00:25:24.673 [WARNING][5576] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" HandleID="k8s-pod-network.c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" Mar 14 00:25:24.686958 containerd[1710]: 2026-03-14 00:25:24.675 [INFO][5576] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" HandleID="k8s-pod-network.c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" Mar 14 00:25:24.686958 containerd[1710]: 2026-03-14 00:25:24.678 [INFO][5576] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:24.686958 containerd[1710]: 2026-03-14 00:25:24.683 [INFO][5533] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Mar 14 00:25:24.688554 containerd[1710]: time="2026-03-14T00:25:24.687306442Z" level=info msg="TearDown network for sandbox \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\" successfully" Mar 14 00:25:24.688554 containerd[1710]: time="2026-03-14T00:25:24.687910552Z" level=info msg="StopPodSandbox for \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\" returns successfully" Mar 14 00:25:24.692262 containerd[1710]: time="2026-03-14T00:25:24.691145605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b99f8b54-9bmxj,Uid:bd26410d-afb8-4e0e-9857-79a48d714e02,Namespace:calico-system,Attempt:1,}" Mar 14 00:25:24.697412 systemd[1]: run-netns-cni\x2d5174bf70\x2d4a4b\x2d06c1\x2db6f3\x2d678926a41419.mount: Deactivated successfully. 
Mar 14 00:25:24.749461 systemd-networkd[1514]: calic94868ecc81: Gained IPv6LL Mar 14 00:25:25.008437 systemd-networkd[1514]: cali69b2e56aa94: Gained IPv6LL Mar 14 00:25:25.015597 systemd-networkd[1514]: cali50cce80da6f: Link UP Mar 14 00:25:25.015871 systemd-networkd[1514]: cali50cce80da6f: Gained carrier Mar 14 00:25:25.050276 kubelet[3276]: I0314 00:25:25.050136 3276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-zk5wg" podStartSLOduration=47.588741578 podStartE2EDuration="51.050114079s" podCreationTimestamp="2026-03-14 00:24:34 +0000 UTC" firstStartedPulling="2026-03-14 00:25:20.956869638 +0000 UTC m=+62.726545416" lastFinishedPulling="2026-03-14 00:25:24.418242139 +0000 UTC m=+66.187917917" observedRunningTime="2026-03-14 00:25:24.735907437 +0000 UTC m=+66.505583215" watchObservedRunningTime="2026-03-14 00:25:25.050114079 +0000 UTC m=+66.819789957" Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.829 [INFO][5617] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0 csi-node-driver- calico-system c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a 1022 0 2026-03-14 00:24:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-2b39e14e44 csi-node-driver-8xrtx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali50cce80da6f [] [] }} ContainerID="71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" Namespace="calico-system" Pod="csi-node-driver-8xrtx" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-" Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 
00:25:24.831 [INFO][5617] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" Namespace="calico-system" Pod="csi-node-driver-8xrtx" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.915 [INFO][5666] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" HandleID="k8s-pod-network.71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" Workload="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.929 [INFO][5666] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" HandleID="k8s-pod-network.71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" Workload="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f9d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2b39e14e44", "pod":"csi-node-driver-8xrtx", "timestamp":"2026-03-14 00:25:24.915004268 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2b39e14e44", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00039a580)} Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.930 [INFO][5666] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.930 [INFO][5666] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.930 [INFO][5666] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2b39e14e44' Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.934 [INFO][5666] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.946 [INFO][5666] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.958 [INFO][5666] ipam/ipam.go 526: Trying affinity for 192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.962 [INFO][5666] ipam/ipam.go 160: Attempting to load block cidr=192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.965 [INFO][5666] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.965 [INFO][5666] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.970 [INFO][5666] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8 Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.979 [INFO][5666] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.993 [INFO][5666] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.3.70/26] block=192.168.3.64/26 handle="k8s-pod-network.71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.993 [INFO][5666] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.3.70/26] handle="k8s-pod-network.71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.993 [INFO][5666] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:25.055165 containerd[1710]: 2026-03-14 00:25:24.993 [INFO][5666] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.3.70/26] IPv6=[] ContainerID="71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" HandleID="k8s-pod-network.71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" Workload="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" Mar 14 00:25:25.057753 containerd[1710]: 2026-03-14 00:25:24.997 [INFO][5617] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" Namespace="calico-system" Pod="csi-node-driver-8xrtx" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"", Pod:"csi-node-driver-8xrtx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali50cce80da6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:25.057753 containerd[1710]: 2026-03-14 00:25:24.999 [INFO][5617] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.70/32] ContainerID="71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" Namespace="calico-system" Pod="csi-node-driver-8xrtx" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" Mar 14 00:25:25.057753 containerd[1710]: 2026-03-14 00:25:24.999 [INFO][5617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50cce80da6f ContainerID="71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" Namespace="calico-system" Pod="csi-node-driver-8xrtx" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" Mar 14 00:25:25.057753 containerd[1710]: 2026-03-14 00:25:25.026 [INFO][5617] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" Namespace="calico-system" Pod="csi-node-driver-8xrtx" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" Mar 14 00:25:25.057753 containerd[1710]: 2026-03-14 00:25:25.030 
[INFO][5617] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" Namespace="calico-system" Pod="csi-node-driver-8xrtx" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8", Pod:"csi-node-driver-8xrtx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali50cce80da6f", MAC:"5e:ac:4f:30:03:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:25.057753 containerd[1710]: 2026-03-14 00:25:25.052 [INFO][5617] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8" Namespace="calico-system" Pod="csi-node-driver-8xrtx" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" Mar 14 00:25:25.102944 systemd-networkd[1514]: cali7fe8e2da7d5: Link UP Mar 14 00:25:25.104887 systemd-networkd[1514]: cali7fe8e2da7d5: Gained carrier Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:24.914 [INFO][5651] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0 calico-apiserver-8b99f8b54- calico-system bd26410d-afb8-4e0e-9857-79a48d714e02 1024 0 2026-03-14 00:24:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8b99f8b54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-2b39e14e44 calico-apiserver-8b99f8b54-9bmxj eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali7fe8e2da7d5 [] [] }} ContainerID="5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" Namespace="calico-system" Pod="calico-apiserver-8b99f8b54-9bmxj" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-" Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:24.915 [INFO][5651] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" Namespace="calico-system" Pod="calico-apiserver-8b99f8b54-9bmxj" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:24.997 [INFO][5684] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" HandleID="k8s-pod-network.5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:25.023 [INFO][5684] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" HandleID="k8s-pod-network.5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e55b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2b39e14e44", "pod":"calico-apiserver-8b99f8b54-9bmxj", "timestamp":"2026-03-14 00:25:24.997850423 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2b39e14e44", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000189080)} Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:25.024 [INFO][5684] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:25.024 [INFO][5684] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:25.024 [INFO][5684] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2b39e14e44' Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:25.034 [INFO][5684] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:25.055 [INFO][5684] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:25.064 [INFO][5684] ipam/ipam.go 526: Trying affinity for 192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:25.067 [INFO][5684] ipam/ipam.go 160: Attempting to load block cidr=192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:25.070 [INFO][5684] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:25.071 [INFO][5684] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:25.073 [INFO][5684] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147 Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:25.082 [INFO][5684] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:25.093 [INFO][5684] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.3.71/26] block=192.168.3.64/26 handle="k8s-pod-network.5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:25.093 [INFO][5684] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.3.71/26] handle="k8s-pod-network.5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:25.094 [INFO][5684] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:25.132298 containerd[1710]: 2026-03-14 00:25:25.094 [INFO][5684] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.3.71/26] IPv6=[] ContainerID="5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" HandleID="k8s-pod-network.5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" Mar 14 00:25:25.133586 containerd[1710]: 2026-03-14 00:25:25.097 [INFO][5651] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" Namespace="calico-system" Pod="calico-apiserver-8b99f8b54-9bmxj" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0", GenerateName:"calico-apiserver-8b99f8b54-", Namespace:"calico-system", SelfLink:"", UID:"bd26410d-afb8-4e0e-9857-79a48d714e02", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"8b99f8b54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"", Pod:"calico-apiserver-8b99f8b54-9bmxj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7fe8e2da7d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:25.133586 containerd[1710]: 2026-03-14 00:25:25.098 [INFO][5651] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.71/32] ContainerID="5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" Namespace="calico-system" Pod="calico-apiserver-8b99f8b54-9bmxj" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" Mar 14 00:25:25.133586 containerd[1710]: 2026-03-14 00:25:25.098 [INFO][5651] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7fe8e2da7d5 ContainerID="5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" Namespace="calico-system" Pod="calico-apiserver-8b99f8b54-9bmxj" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" Mar 14 00:25:25.133586 containerd[1710]: 2026-03-14 00:25:25.105 [INFO][5651] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" Namespace="calico-system" Pod="calico-apiserver-8b99f8b54-9bmxj" 
WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" Mar 14 00:25:25.133586 containerd[1710]: 2026-03-14 00:25:25.106 [INFO][5651] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" Namespace="calico-system" Pod="calico-apiserver-8b99f8b54-9bmxj" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0", GenerateName:"calico-apiserver-8b99f8b54-", Namespace:"calico-system", SelfLink:"", UID:"bd26410d-afb8-4e0e-9857-79a48d714e02", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8b99f8b54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147", Pod:"calico-apiserver-8b99f8b54-9bmxj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7fe8e2da7d5", MAC:"46:73:09:97:52:25", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:25.133586 containerd[1710]: 2026-03-14 00:25:25.127 [INFO][5651] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147" Namespace="calico-system" Pod="calico-apiserver-8b99f8b54-9bmxj" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" Mar 14 00:25:25.145875 containerd[1710]: time="2026-03-14T00:25:25.143418105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:25.145875 containerd[1710]: time="2026-03-14T00:25:25.143479506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:25.145875 containerd[1710]: time="2026-03-14T00:25:25.143495607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:25.145875 containerd[1710]: time="2026-03-14T00:25:25.143589908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:25.182193 containerd[1710]: time="2026-03-14T00:25:25.182082338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:25.183617 containerd[1710]: time="2026-03-14T00:25:25.182160839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:25.183617 containerd[1710]: time="2026-03-14T00:25:25.182177239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:25.184468 containerd[1710]: time="2026-03-14T00:25:25.184418376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:25.196465 systemd[1]: Started cri-containerd-71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8.scope - libcontainer container 71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8. Mar 14 00:25:25.219454 systemd-networkd[1514]: cali12523172d58: Link UP Mar 14 00:25:25.219982 systemd-networkd[1514]: cali12523172d58: Gained carrier Mar 14 00:25:25.255004 systemd[1]: Started cri-containerd-5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147.scope - libcontainer container 5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147. Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:24.919 [INFO][5645] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0 coredns-674b8bbfcf- kube-system e9f998d1-a481-416b-8656-d361e4f465a0 1025 0 2026-03-14 00:24:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-2b39e14e44 coredns-674b8bbfcf-pbd56 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali12523172d58 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbd56" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-" Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:24.919 [INFO][5645] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbd56" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:25.045 [INFO][5685] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" HandleID="k8s-pod-network.1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:25.062 [INFO][5685] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" HandleID="k8s-pod-network.1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003811f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-2b39e14e44", "pod":"coredns-674b8bbfcf-pbd56", "timestamp":"2026-03-14 00:25:25.045352701 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2b39e14e44", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000416000)} Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:25.062 [INFO][5685] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:25.094 [INFO][5685] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:25.094 [INFO][5685] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2b39e14e44' Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:25.135 [INFO][5685] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:25.152 [INFO][5685] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:25.163 [INFO][5685] ipam/ipam.go 526: Trying affinity for 192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:25.167 [INFO][5685] ipam/ipam.go 160: Attempting to load block cidr=192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:25.171 [INFO][5685] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:25.171 [INFO][5685] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:25.175 [INFO][5685] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60 Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:25.186 [INFO][5685] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:25.204 [INFO][5685] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.3.72/26] block=192.168.3.64/26 handle="k8s-pod-network.1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:25.205 [INFO][5685] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.3.72/26] handle="k8s-pod-network.1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" host="ci-4081.3.6-n-2b39e14e44" Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:25.205 [INFO][5685] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:25:25.262481 containerd[1710]: 2026-03-14 00:25:25.205 [INFO][5685] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.3.72/26] IPv6=[] ContainerID="1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" HandleID="k8s-pod-network.1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" Mar 14 00:25:25.265896 containerd[1710]: 2026-03-14 00:25:25.212 [INFO][5645] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbd56" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e9f998d1-a481-416b-8656-d361e4f465a0", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"", Pod:"coredns-674b8bbfcf-pbd56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12523172d58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:25.265896 containerd[1710]: 2026-03-14 00:25:25.212 [INFO][5645] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.72/32] ContainerID="1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbd56" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" Mar 14 00:25:25.265896 containerd[1710]: 2026-03-14 00:25:25.212 [INFO][5645] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12523172d58 ContainerID="1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbd56" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" Mar 14 00:25:25.265896 containerd[1710]: 2026-03-14 00:25:25.220 [INFO][5645] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbd56" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" Mar 14 00:25:25.265896 containerd[1710]: 2026-03-14 00:25:25.222 [INFO][5645] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbd56" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e9f998d1-a481-416b-8656-d361e4f465a0", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60", Pod:"coredns-674b8bbfcf-pbd56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12523172d58", MAC:"aa:fb:2f:3d:5b:59", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:25:25.265896 containerd[1710]: 2026-03-14 00:25:25.250 [INFO][5645] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbd56" WorkloadEndpoint="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" Mar 14 00:25:25.302921 containerd[1710]: time="2026-03-14T00:25:25.302842414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8xrtx,Uid:c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a,Namespace:calico-system,Attempt:1,} returns sandbox id \"71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8\"" Mar 14 00:25:25.324294 containerd[1710]: time="2026-03-14T00:25:25.322363933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:25.324294 containerd[1710]: time="2026-03-14T00:25:25.322427534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:25.324693 containerd[1710]: time="2026-03-14T00:25:25.322447235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:25.324693 containerd[1710]: time="2026-03-14T00:25:25.324630470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:25.358470 systemd[1]: Started cri-containerd-1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60.scope - libcontainer container 1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60. Mar 14 00:25:25.426976 containerd[1710]: time="2026-03-14T00:25:25.426850343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pbd56,Uid:e9f998d1-a481-416b-8656-d361e4f465a0,Namespace:kube-system,Attempt:1,} returns sandbox id \"1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60\"" Mar 14 00:25:25.442355 containerd[1710]: time="2026-03-14T00:25:25.442176694Z" level=info msg="CreateContainer within sandbox \"1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:25:25.463894 containerd[1710]: time="2026-03-14T00:25:25.463584244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8b99f8b54-9bmxj,Uid:bd26410d-afb8-4e0e-9857-79a48d714e02,Namespace:calico-system,Attempt:1,} returns sandbox id \"5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147\"" Mar 14 00:25:25.497949 containerd[1710]: time="2026-03-14T00:25:25.497886105Z" level=info msg="CreateContainer within sandbox \"1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"43c0b4740716f51c8ea3714b7cd7927003f9a8f3666276a412e6e34867afe5f5\"" Mar 14 00:25:25.499709 containerd[1710]: time="2026-03-14T00:25:25.499365330Z" level=info msg="StartContainer for \"43c0b4740716f51c8ea3714b7cd7927003f9a8f3666276a412e6e34867afe5f5\"" Mar 14 00:25:25.569393 systemd[1]: Started cri-containerd-43c0b4740716f51c8ea3714b7cd7927003f9a8f3666276a412e6e34867afe5f5.scope - libcontainer container 43c0b4740716f51c8ea3714b7cd7927003f9a8f3666276a412e6e34867afe5f5. 
Mar 14 00:25:25.600441 containerd[1710]: time="2026-03-14T00:25:25.600324382Z" level=info msg="StartContainer for \"43c0b4740716f51c8ea3714b7cd7927003f9a8f3666276a412e6e34867afe5f5\" returns successfully" Mar 14 00:25:25.744772 kubelet[3276]: I0314 00:25:25.744552 3276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pbd56" podStartSLOduration=63.744528141 podStartE2EDuration="1m3.744528141s" podCreationTimestamp="2026-03-14 00:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:25:25.741895298 +0000 UTC m=+67.511571076" watchObservedRunningTime="2026-03-14 00:25:25.744528141 +0000 UTC m=+67.514203919" Mar 14 00:25:26.477599 systemd-networkd[1514]: cali50cce80da6f: Gained IPv6LL Mar 14 00:25:26.502740 systemd[1]: run-containerd-runc-k8s.io-003864a81fda186a7d096dbf674037a5eee33ba06dfc7ca6dca7f7da69f9ac44-runc.3IOmdw.mount: Deactivated successfully. 
Mar 14 00:25:27.181934 systemd-networkd[1514]: cali7fe8e2da7d5: Gained IPv6LL Mar 14 00:25:27.182368 systemd-networkd[1514]: cali12523172d58: Gained IPv6LL Mar 14 00:25:27.779801 containerd[1710]: time="2026-03-14T00:25:27.779730743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:27.782451 containerd[1710]: time="2026-03-14T00:25:27.782385986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 14 00:25:27.785316 containerd[1710]: time="2026-03-14T00:25:27.785260233Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:27.789625 containerd[1710]: time="2026-03-14T00:25:27.789574804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:27.791092 containerd[1710]: time="2026-03-14T00:25:27.790297816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.370044644s" Mar 14 00:25:27.791092 containerd[1710]: time="2026-03-14T00:25:27.790346817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 14 00:25:27.792063 containerd[1710]: time="2026-03-14T00:25:27.792035944Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:25:27.815083 containerd[1710]: time="2026-03-14T00:25:27.815043621Z" level=info msg="CreateContainer within sandbox \"7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 14 00:25:27.846028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561721035.mount: Deactivated successfully. Mar 14 00:25:27.853879 containerd[1710]: time="2026-03-14T00:25:27.853836155Z" level=info msg="CreateContainer within sandbox \"7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0f4b412afc0cd346fcc5306f0961ebab41be6def18810a906e69bd5359f4e12f\"" Mar 14 00:25:27.854431 containerd[1710]: time="2026-03-14T00:25:27.854328763Z" level=info msg="StartContainer for \"0f4b412afc0cd346fcc5306f0961ebab41be6def18810a906e69bd5359f4e12f\"" Mar 14 00:25:27.886388 systemd[1]: Started cri-containerd-0f4b412afc0cd346fcc5306f0961ebab41be6def18810a906e69bd5359f4e12f.scope - libcontainer container 0f4b412afc0cd346fcc5306f0961ebab41be6def18810a906e69bd5359f4e12f. 
Mar 14 00:25:27.935306 containerd[1710]: time="2026-03-14T00:25:27.934445374Z" level=info msg="StartContainer for \"0f4b412afc0cd346fcc5306f0961ebab41be6def18810a906e69bd5359f4e12f\" returns successfully" Mar 14 00:25:28.739148 kubelet[3276]: I0314 00:25:28.739086 3276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-65cb8d5ccb-s59nv" podStartSLOduration=48.979654506 podStartE2EDuration="53.73906424s" podCreationTimestamp="2026-03-14 00:24:35 +0000 UTC" firstStartedPulling="2026-03-14 00:25:23.031753396 +0000 UTC m=+64.801429274" lastFinishedPulling="2026-03-14 00:25:27.79116323 +0000 UTC m=+69.560839008" observedRunningTime="2026-03-14 00:25:28.738664434 +0000 UTC m=+70.508340312" watchObservedRunningTime="2026-03-14 00:25:28.73906424 +0000 UTC m=+70.508740018" Mar 14 00:25:31.109533 containerd[1710]: time="2026-03-14T00:25:31.109473727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:31.113601 containerd[1710]: time="2026-03-14T00:25:31.113448292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 14 00:25:31.116968 containerd[1710]: time="2026-03-14T00:25:31.116898948Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:31.122287 containerd[1710]: time="2026-03-14T00:25:31.122251036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:31.123589 containerd[1710]: time="2026-03-14T00:25:31.123075549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.331003204s" Mar 14 00:25:31.123589 containerd[1710]: time="2026-03-14T00:25:31.123116050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 14 00:25:31.125352 containerd[1710]: time="2026-03-14T00:25:31.125212284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 14 00:25:31.131030 containerd[1710]: time="2026-03-14T00:25:31.130996179Z" level=info msg="CreateContainer within sandbox \"52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:25:31.169718 containerd[1710]: time="2026-03-14T00:25:31.169676812Z" level=info msg="CreateContainer within sandbox \"52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2fa2dd91382d649354a17e95c945e5d7ee0dc56bdecca0ecc9dba539ad644d14\"" Mar 14 00:25:31.170484 containerd[1710]: time="2026-03-14T00:25:31.170415924Z" level=info msg="StartContainer for \"2fa2dd91382d649354a17e95c945e5d7ee0dc56bdecca0ecc9dba539ad644d14\"" Mar 14 00:25:31.214393 systemd[1]: Started cri-containerd-2fa2dd91382d649354a17e95c945e5d7ee0dc56bdecca0ecc9dba539ad644d14.scope - libcontainer container 2fa2dd91382d649354a17e95c945e5d7ee0dc56bdecca0ecc9dba539ad644d14. 
Mar 14 00:25:31.257670 containerd[1710]: time="2026-03-14T00:25:31.257551950Z" level=info msg="StartContainer for \"2fa2dd91382d649354a17e95c945e5d7ee0dc56bdecca0ecc9dba539ad644d14\" returns successfully" Mar 14 00:25:32.770951 kubelet[3276]: I0314 00:25:32.768459 3276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-8b99f8b54-m46mb" podStartSLOduration=51.568115971 podStartE2EDuration="58.768436189s" podCreationTimestamp="2026-03-14 00:24:34 +0000 UTC" firstStartedPulling="2026-03-14 00:25:23.923832849 +0000 UTC m=+65.693508627" lastFinishedPulling="2026-03-14 00:25:31.124153067 +0000 UTC m=+72.893828845" observedRunningTime="2026-03-14 00:25:31.757987578 +0000 UTC m=+73.527663456" watchObservedRunningTime="2026-03-14 00:25:32.768436189 +0000 UTC m=+74.538112067" Mar 14 00:25:32.805280 containerd[1710]: time="2026-03-14T00:25:32.805032633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:32.807857 containerd[1710]: time="2026-03-14T00:25:32.807805022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 14 00:25:32.811908 containerd[1710]: time="2026-03-14T00:25:32.811864404Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:32.816022 containerd[1710]: time="2026-03-14T00:25:32.815949887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:32.817128 containerd[1710]: time="2026-03-14T00:25:32.816665184Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id 
\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.691299897s" Mar 14 00:25:32.817128 containerd[1710]: time="2026-03-14T00:25:32.816707484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 14 00:25:32.818258 containerd[1710]: time="2026-03-14T00:25:32.817946979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:25:32.826710 containerd[1710]: time="2026-03-14T00:25:32.826456542Z" level=info msg="CreateContainer within sandbox \"71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 14 00:25:32.869263 containerd[1710]: time="2026-03-14T00:25:32.868658563Z" level=info msg="CreateContainer within sandbox \"71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1202a2c4d053983f159dd495f31e3a335d3aaa96dd879f37b04574c5d881eed1\"" Mar 14 00:25:32.869750 containerd[1710]: time="2026-03-14T00:25:32.869721859Z" level=info msg="StartContainer for \"1202a2c4d053983f159dd495f31e3a335d3aaa96dd879f37b04574c5d881eed1\"" Mar 14 00:25:32.922422 systemd[1]: Started cri-containerd-1202a2c4d053983f159dd495f31e3a335d3aaa96dd879f37b04574c5d881eed1.scope - libcontainer container 1202a2c4d053983f159dd495f31e3a335d3aaa96dd879f37b04574c5d881eed1. 
Mar 14 00:25:32.996120 containerd[1710]: time="2026-03-14T00:25:32.995982123Z" level=info msg="StartContainer for \"1202a2c4d053983f159dd495f31e3a335d3aaa96dd879f37b04574c5d881eed1\" returns successfully" Mar 14 00:25:33.137927 containerd[1710]: time="2026-03-14T00:25:33.136896025Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:33.139747 containerd[1710]: time="2026-03-14T00:25:33.139682213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 14 00:25:33.142122 containerd[1710]: time="2026-03-14T00:25:33.142087803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 324.103125ms" Mar 14 00:25:33.142246 containerd[1710]: time="2026-03-14T00:25:33.142126403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 14 00:25:33.144058 containerd[1710]: time="2026-03-14T00:25:33.143851095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 14 00:25:33.155471 containerd[1710]: time="2026-03-14T00:25:33.155433346Z" level=info msg="CreateContainer within sandbox \"5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:25:33.196197 containerd[1710]: time="2026-03-14T00:25:33.196150873Z" level=info msg="CreateContainer within sandbox \"5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d86a8d61904f7f1a6d9f29850c4a9f6da80af379a291b2e34193456e6188479b\"" Mar 14 00:25:33.197294 containerd[1710]: time="2026-03-14T00:25:33.196958870Z" level=info msg="StartContainer for \"d86a8d61904f7f1a6d9f29850c4a9f6da80af379a291b2e34193456e6188479b\"" Mar 14 00:25:33.230397 systemd[1]: Started cri-containerd-d86a8d61904f7f1a6d9f29850c4a9f6da80af379a291b2e34193456e6188479b.scope - libcontainer container d86a8d61904f7f1a6d9f29850c4a9f6da80af379a291b2e34193456e6188479b. Mar 14 00:25:33.278716 containerd[1710]: time="2026-03-14T00:25:33.278660623Z" level=info msg="StartContainer for \"d86a8d61904f7f1a6d9f29850c4a9f6da80af379a291b2e34193456e6188479b\" returns successfully" Mar 14 00:25:33.762340 kubelet[3276]: I0314 00:25:33.762261 3276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-8b99f8b54-9bmxj" podStartSLOduration=52.087037585 podStartE2EDuration="59.762208371s" podCreationTimestamp="2026-03-14 00:24:34 +0000 UTC" firstStartedPulling="2026-03-14 00:25:25.467774913 +0000 UTC m=+67.237450691" lastFinishedPulling="2026-03-14 00:25:33.142945599 +0000 UTC m=+74.912621477" observedRunningTime="2026-03-14 00:25:33.761968472 +0000 UTC m=+75.531644250" watchObservedRunningTime="2026-03-14 00:25:33.762208371 +0000 UTC m=+75.531884249" Mar 14 00:25:34.750680 kubelet[3276]: I0314 00:25:34.750292 3276 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:25:34.918829 containerd[1710]: time="2026-03-14T00:25:34.918773697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:34.921848 containerd[1710]: time="2026-03-14T00:25:34.921632437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 14 00:25:34.925178 
containerd[1710]: time="2026-03-14T00:25:34.925135786Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:34.929895 containerd[1710]: time="2026-03-14T00:25:34.929836252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:25:34.931253 containerd[1710]: time="2026-03-14T00:25:34.930574562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.786681067s" Mar 14 00:25:34.931253 containerd[1710]: time="2026-03-14T00:25:34.930617862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 14 00:25:34.941797 containerd[1710]: time="2026-03-14T00:25:34.941762618Z" level=info msg="CreateContainer within sandbox \"71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 14 00:25:34.982253 containerd[1710]: time="2026-03-14T00:25:34.982200781Z" level=info msg="CreateContainer within sandbox \"71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"41116eb17bdbcd3de0a89786cb4236714a4e96158d0b69e32d29c7c2ed4a0a87\"" Mar 14 00:25:34.983060 containerd[1710]: time="2026-03-14T00:25:34.982907290Z" level=info 
msg="StartContainer for \"41116eb17bdbcd3de0a89786cb4236714a4e96158d0b69e32d29c7c2ed4a0a87\"" Mar 14 00:25:35.024497 systemd[1]: Started cri-containerd-41116eb17bdbcd3de0a89786cb4236714a4e96158d0b69e32d29c7c2ed4a0a87.scope - libcontainer container 41116eb17bdbcd3de0a89786cb4236714a4e96158d0b69e32d29c7c2ed4a0a87. Mar 14 00:25:35.055302 containerd[1710]: time="2026-03-14T00:25:35.055149096Z" level=info msg="StartContainer for \"41116eb17bdbcd3de0a89786cb4236714a4e96158d0b69e32d29c7c2ed4a0a87\" returns successfully" Mar 14 00:25:35.768481 kubelet[3276]: I0314 00:25:35.768356 3276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8xrtx" podStartSLOduration=51.142013903 podStartE2EDuration="1m0.768332526s" podCreationTimestamp="2026-03-14 00:24:35 +0000 UTC" firstStartedPulling="2026-03-14 00:25:25.305127551 +0000 UTC m=+67.074803329" lastFinishedPulling="2026-03-14 00:25:34.931446174 +0000 UTC m=+76.701121952" observedRunningTime="2026-03-14 00:25:35.767306611 +0000 UTC m=+77.536982389" watchObservedRunningTime="2026-03-14 00:25:35.768332526 +0000 UTC m=+77.538008304" Mar 14 00:25:35.812380 kubelet[3276]: I0314 00:25:35.812113 3276 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 14 00:25:35.812380 kubelet[3276]: I0314 00:25:35.812165 3276 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 14 00:25:38.209351 kubelet[3276]: I0314 00:25:38.209031 3276 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:25:40.777037 systemd[1]: run-containerd-runc-k8s.io-dadefb1f9345166cf9a01bba4d448b593679e54c4d3b8487eedfdf00fd3ef56b-runc.htgDVr.mount: Deactivated successfully. 
Mar 14 00:26:04.558513 systemd[1]: Started sshd@7-10.200.8.29:22-10.200.16.10:39260.service - OpenSSH per-connection server daemon (10.200.16.10:39260). Mar 14 00:26:05.179592 sshd[6381]: Accepted publickey for core from 10.200.16.10 port 39260 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M Mar 14 00:26:05.181251 sshd[6381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:26:05.186175 systemd-logind[1694]: New session 10 of user core. Mar 14 00:26:05.192403 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 14 00:26:06.819211 sshd[6381]: pam_unix(sshd:session): session closed for user core Mar 14 00:26:06.822745 systemd[1]: sshd@7-10.200.8.29:22-10.200.16.10:39260.service: Deactivated successfully. Mar 14 00:26:06.825174 systemd[1]: session-10.scope: Deactivated successfully. Mar 14 00:26:06.827289 systemd-logind[1694]: Session 10 logged out. Waiting for processes to exit. Mar 14 00:26:06.828530 systemd-logind[1694]: Removed session 10. Mar 14 00:26:11.937559 systemd[1]: Started sshd@8-10.200.8.29:22-10.200.16.10:57240.service - OpenSSH per-connection server daemon (10.200.16.10:57240). Mar 14 00:26:12.556296 sshd[6422]: Accepted publickey for core from 10.200.16.10 port 57240 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M Mar 14 00:26:12.557880 sshd[6422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:26:12.562839 systemd-logind[1694]: New session 11 of user core. Mar 14 00:26:12.568385 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 14 00:26:13.057310 sshd[6422]: pam_unix(sshd:session): session closed for user core Mar 14 00:26:13.061279 systemd[1]: sshd@8-10.200.8.29:22-10.200.16.10:57240.service: Deactivated successfully. Mar 14 00:26:13.063545 systemd[1]: session-11.scope: Deactivated successfully. Mar 14 00:26:13.064406 systemd-logind[1694]: Session 11 logged out. Waiting for processes to exit. 
Mar 14 00:26:13.065567 systemd-logind[1694]: Removed session 11. Mar 14 00:26:18.167878 systemd[1]: Started sshd@9-10.200.8.29:22-10.200.16.10:57248.service - OpenSSH per-connection server daemon (10.200.16.10:57248). Mar 14 00:26:18.513811 containerd[1710]: time="2026-03-14T00:26:18.513771259Z" level=info msg="StopPodSandbox for \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\"" Mar 14 00:26:18.578900 containerd[1710]: 2026-03-14 00:26:18.546 [WARNING][6447] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"66e0aa2f-87cf-44bf-9449-a14d5b459508", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740", Pod:"coredns-674b8bbfcf-j82zn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5033896f621", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:26:18.578900 containerd[1710]: 2026-03-14 00:26:18.547 [INFO][6447] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Mar 14 00:26:18.578900 containerd[1710]: 2026-03-14 00:26:18.547 [INFO][6447] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" iface="eth0" netns="" Mar 14 00:26:18.578900 containerd[1710]: 2026-03-14 00:26:18.547 [INFO][6447] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Mar 14 00:26:18.578900 containerd[1710]: 2026-03-14 00:26:18.547 [INFO][6447] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Mar 14 00:26:18.578900 containerd[1710]: 2026-03-14 00:26:18.567 [INFO][6454] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" HandleID="k8s-pod-network.7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" Mar 14 00:26:18.578900 containerd[1710]: 2026-03-14 00:26:18.567 [INFO][6454] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:26:18.578900 containerd[1710]: 2026-03-14 00:26:18.568 [INFO][6454] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:26:18.578900 containerd[1710]: 2026-03-14 00:26:18.575 [WARNING][6454] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" HandleID="k8s-pod-network.7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" Mar 14 00:26:18.578900 containerd[1710]: 2026-03-14 00:26:18.575 [INFO][6454] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" HandleID="k8s-pod-network.7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" Mar 14 00:26:18.578900 containerd[1710]: 2026-03-14 00:26:18.576 [INFO][6454] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:26:18.578900 containerd[1710]: 2026-03-14 00:26:18.577 [INFO][6447] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Mar 14 00:26:18.578900 containerd[1710]: time="2026-03-14T00:26:18.578749564Z" level=info msg="TearDown network for sandbox \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\" successfully" Mar 14 00:26:18.578900 containerd[1710]: time="2026-03-14T00:26:18.578780665Z" level=info msg="StopPodSandbox for \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\" returns successfully" Mar 14 00:26:18.580076 containerd[1710]: time="2026-03-14T00:26:18.579327373Z" level=info msg="RemovePodSandbox for \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\"" Mar 14 00:26:18.580076 containerd[1710]: time="2026-03-14T00:26:18.579362474Z" level=info msg="Forcibly stopping sandbox \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\"" Mar 14 00:26:18.647114 containerd[1710]: 2026-03-14 00:26:18.613 [WARNING][6469] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"66e0aa2f-87cf-44bf-9449-a14d5b459508", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"1be4724d957352e8552aea20123688c3ea63c7306c4eddb95d2be88e223b7740", Pod:"coredns-674b8bbfcf-j82zn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5033896f621", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:26:18.647114 containerd[1710]: 2026-03-14 
00:26:18.614 [INFO][6469] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Mar 14 00:26:18.647114 containerd[1710]: 2026-03-14 00:26:18.614 [INFO][6469] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" iface="eth0" netns="" Mar 14 00:26:18.647114 containerd[1710]: 2026-03-14 00:26:18.614 [INFO][6469] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Mar 14 00:26:18.647114 containerd[1710]: 2026-03-14 00:26:18.614 [INFO][6469] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Mar 14 00:26:18.647114 containerd[1710]: 2026-03-14 00:26:18.635 [INFO][6477] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" HandleID="k8s-pod-network.7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" Mar 14 00:26:18.647114 containerd[1710]: 2026-03-14 00:26:18.636 [INFO][6477] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:26:18.647114 containerd[1710]: 2026-03-14 00:26:18.636 [INFO][6477] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:26:18.647114 containerd[1710]: 2026-03-14 00:26:18.643 [WARNING][6477] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" HandleID="k8s-pod-network.7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" Mar 14 00:26:18.647114 containerd[1710]: 2026-03-14 00:26:18.643 [INFO][6477] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" HandleID="k8s-pod-network.7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--j82zn-eth0" Mar 14 00:26:18.647114 containerd[1710]: 2026-03-14 00:26:18.644 [INFO][6477] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:26:18.647114 containerd[1710]: 2026-03-14 00:26:18.645 [INFO][6469] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa" Mar 14 00:26:18.647805 containerd[1710]: time="2026-03-14T00:26:18.647174023Z" level=info msg="TearDown network for sandbox \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\" successfully" Mar 14 00:26:18.660676 containerd[1710]: time="2026-03-14T00:26:18.660625531Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:26:18.660906 containerd[1710]: time="2026-03-14T00:26:18.660712032Z" level=info msg="RemovePodSandbox \"7f13dcd1d4a4ba23590e27824922c039bcb6632e91eac04c43587d7ee3afcffa\" returns successfully" Mar 14 00:26:18.661313 containerd[1710]: time="2026-03-14T00:26:18.661279541Z" level=info msg="StopPodSandbox for \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\"" Mar 14 00:26:18.733279 containerd[1710]: 2026-03-14 00:26:18.696 [WARNING][6491] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0", GenerateName:"calico-apiserver-8b99f8b54-", Namespace:"calico-system", SelfLink:"", UID:"a119f615-9d76-4da3-99fe-5e5a8a9503aa", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8b99f8b54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5", Pod:"calico-apiserver-8b99f8b54-m46mb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali69b2e56aa94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:26:18.733279 containerd[1710]: 2026-03-14 00:26:18.696 [INFO][6491] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Mar 14 00:26:18.733279 containerd[1710]: 2026-03-14 00:26:18.696 [INFO][6491] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" iface="eth0" netns="" Mar 14 00:26:18.733279 containerd[1710]: 2026-03-14 00:26:18.696 [INFO][6491] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Mar 14 00:26:18.733279 containerd[1710]: 2026-03-14 00:26:18.696 [INFO][6491] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Mar 14 00:26:18.733279 containerd[1710]: 2026-03-14 00:26:18.720 [INFO][6499] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" HandleID="k8s-pod-network.799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" Mar 14 00:26:18.733279 containerd[1710]: 2026-03-14 00:26:18.720 [INFO][6499] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:26:18.733279 containerd[1710]: 2026-03-14 00:26:18.720 [INFO][6499] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:26:18.733279 containerd[1710]: 2026-03-14 00:26:18.727 [WARNING][6499] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" HandleID="k8s-pod-network.799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" Mar 14 00:26:18.733279 containerd[1710]: 2026-03-14 00:26:18.727 [INFO][6499] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" HandleID="k8s-pod-network.799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" Mar 14 00:26:18.733279 containerd[1710]: 2026-03-14 00:26:18.730 [INFO][6499] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:26:18.733279 containerd[1710]: 2026-03-14 00:26:18.731 [INFO][6491] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Mar 14 00:26:18.735112 containerd[1710]: time="2026-03-14T00:26:18.733342956Z" level=info msg="TearDown network for sandbox \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\" successfully" Mar 14 00:26:18.735112 containerd[1710]: time="2026-03-14T00:26:18.733372556Z" level=info msg="StopPodSandbox for \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\" returns successfully" Mar 14 00:26:18.735112 containerd[1710]: time="2026-03-14T00:26:18.734356971Z" level=info msg="RemovePodSandbox for \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\"" Mar 14 00:26:18.735112 containerd[1710]: time="2026-03-14T00:26:18.734391172Z" level=info msg="Forcibly stopping sandbox \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\"" Mar 14 00:26:18.802057 sshd[6435]: Accepted publickey for core from 10.200.16.10 port 57248 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M Mar 14 00:26:18.804123 sshd[6435]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:26:18.812075 systemd-logind[1694]: New session 12 of user core. Mar 14 00:26:18.818199 containerd[1710]: 2026-03-14 00:26:18.774 [WARNING][6513] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0", GenerateName:"calico-apiserver-8b99f8b54-", Namespace:"calico-system", SelfLink:"", UID:"a119f615-9d76-4da3-99fe-5e5a8a9503aa", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8b99f8b54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"52308ac69af8d3ca8eb46813e2609904b80af048e2b7296a8212f6ea73f1abb5", Pod:"calico-apiserver-8b99f8b54-m46mb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali69b2e56aa94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 
00:26:18.818199 containerd[1710]: 2026-03-14 00:26:18.775 [INFO][6513] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Mar 14 00:26:18.818199 containerd[1710]: 2026-03-14 00:26:18.775 [INFO][6513] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" iface="eth0" netns="" Mar 14 00:26:18.818199 containerd[1710]: 2026-03-14 00:26:18.775 [INFO][6513] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Mar 14 00:26:18.818199 containerd[1710]: 2026-03-14 00:26:18.775 [INFO][6513] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Mar 14 00:26:18.818199 containerd[1710]: 2026-03-14 00:26:18.805 [INFO][6520] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" HandleID="k8s-pod-network.799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" Mar 14 00:26:18.818199 containerd[1710]: 2026-03-14 00:26:18.805 [INFO][6520] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:26:18.818199 containerd[1710]: 2026-03-14 00:26:18.806 [INFO][6520] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:26:18.818199 containerd[1710]: 2026-03-14 00:26:18.814 [WARNING][6520] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" HandleID="k8s-pod-network.799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" Mar 14 00:26:18.818199 containerd[1710]: 2026-03-14 00:26:18.814 [INFO][6520] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" HandleID="k8s-pod-network.799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--m46mb-eth0" Mar 14 00:26:18.818199 containerd[1710]: 2026-03-14 00:26:18.815 [INFO][6520] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:26:18.818199 containerd[1710]: 2026-03-14 00:26:18.816 [INFO][6513] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d" Mar 14 00:26:18.818838 containerd[1710]: time="2026-03-14T00:26:18.818277569Z" level=info msg="TearDown network for sandbox \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\" successfully" Mar 14 00:26:18.819046 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 14 00:26:18.831918 containerd[1710]: time="2026-03-14T00:26:18.831872480Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:26:18.832023 containerd[1710]: time="2026-03-14T00:26:18.831964081Z" level=info msg="RemovePodSandbox \"799e2455f54f9cfa62e17d673171e9b2a55aa0557d952d92329f81e0168faf5d\" returns successfully" Mar 14 00:26:18.832739 containerd[1710]: time="2026-03-14T00:26:18.832453989Z" level=info msg="StopPodSandbox for \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\"" Mar 14 00:26:18.903079 containerd[1710]: 2026-03-14 00:26:18.870 [WARNING][6535] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0", GenerateName:"calico-apiserver-8b99f8b54-", Namespace:"calico-system", SelfLink:"", UID:"bd26410d-afb8-4e0e-9857-79a48d714e02", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8b99f8b54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147", Pod:"calico-apiserver-8b99f8b54-9bmxj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7fe8e2da7d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:26:18.903079 containerd[1710]: 2026-03-14 00:26:18.870 [INFO][6535] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Mar 14 00:26:18.903079 containerd[1710]: 2026-03-14 00:26:18.870 [INFO][6535] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" iface="eth0" netns="" Mar 14 00:26:18.903079 containerd[1710]: 2026-03-14 00:26:18.870 [INFO][6535] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Mar 14 00:26:18.903079 containerd[1710]: 2026-03-14 00:26:18.870 [INFO][6535] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Mar 14 00:26:18.903079 containerd[1710]: 2026-03-14 00:26:18.892 [INFO][6542] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" HandleID="k8s-pod-network.c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" Mar 14 00:26:18.903079 containerd[1710]: 2026-03-14 00:26:18.893 [INFO][6542] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:26:18.903079 containerd[1710]: 2026-03-14 00:26:18.893 [INFO][6542] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:26:18.903079 containerd[1710]: 2026-03-14 00:26:18.899 [WARNING][6542] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" HandleID="k8s-pod-network.c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" Mar 14 00:26:18.903079 containerd[1710]: 2026-03-14 00:26:18.899 [INFO][6542] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" HandleID="k8s-pod-network.c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" Mar 14 00:26:18.903079 containerd[1710]: 2026-03-14 00:26:18.900 [INFO][6542] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:26:18.903079 containerd[1710]: 2026-03-14 00:26:18.901 [INFO][6535] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Mar 14 00:26:18.903940 containerd[1710]: time="2026-03-14T00:26:18.903123082Z" level=info msg="TearDown network for sandbox \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\" successfully" Mar 14 00:26:18.903940 containerd[1710]: time="2026-03-14T00:26:18.903150782Z" level=info msg="StopPodSandbox for \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\" returns successfully" Mar 14 00:26:18.904200 containerd[1710]: time="2026-03-14T00:26:18.904168598Z" level=info msg="RemovePodSandbox for \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\"" Mar 14 00:26:18.904302 containerd[1710]: time="2026-03-14T00:26:18.904210198Z" level=info msg="Forcibly stopping sandbox \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\"" Mar 14 00:26:18.970675 containerd[1710]: 2026-03-14 00:26:18.939 [WARNING][6557] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0", GenerateName:"calico-apiserver-8b99f8b54-", Namespace:"calico-system", SelfLink:"", UID:"bd26410d-afb8-4e0e-9857-79a48d714e02", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8b99f8b54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"5d4f81c88aa2bbe5bc45caf85ee377a549c2bdfed7e700bd1b464f51a0723147", Pod:"calico-apiserver-8b99f8b54-9bmxj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7fe8e2da7d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:26:18.970675 containerd[1710]: 2026-03-14 00:26:18.939 [INFO][6557] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Mar 14 00:26:18.970675 containerd[1710]: 2026-03-14 00:26:18.939 [INFO][6557] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" iface="eth0" netns="" Mar 14 00:26:18.970675 containerd[1710]: 2026-03-14 00:26:18.939 [INFO][6557] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Mar 14 00:26:18.970675 containerd[1710]: 2026-03-14 00:26:18.939 [INFO][6557] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Mar 14 00:26:18.970675 containerd[1710]: 2026-03-14 00:26:18.960 [INFO][6564] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" HandleID="k8s-pod-network.c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" Mar 14 00:26:18.970675 containerd[1710]: 2026-03-14 00:26:18.960 [INFO][6564] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:26:18.970675 containerd[1710]: 2026-03-14 00:26:18.961 [INFO][6564] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:26:18.970675 containerd[1710]: 2026-03-14 00:26:18.966 [WARNING][6564] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" HandleID="k8s-pod-network.c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" Mar 14 00:26:18.970675 containerd[1710]: 2026-03-14 00:26:18.966 [INFO][6564] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" HandleID="k8s-pod-network.c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--apiserver--8b99f8b54--9bmxj-eth0" Mar 14 00:26:18.970675 containerd[1710]: 2026-03-14 00:26:18.968 [INFO][6564] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:26:18.970675 containerd[1710]: 2026-03-14 00:26:18.969 [INFO][6557] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264" Mar 14 00:26:18.970675 containerd[1710]: time="2026-03-14T00:26:18.970656226Z" level=info msg="TearDown network for sandbox \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\" successfully" Mar 14 00:26:18.983006 containerd[1710]: time="2026-03-14T00:26:18.982922416Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:26:18.983363 containerd[1710]: time="2026-03-14T00:26:18.983024217Z" level=info msg="RemovePodSandbox \"c413dbd9caa6f453a82ef45eabf97af950ee53acdcc410db8e20d68f839de264\" returns successfully" Mar 14 00:26:18.983650 containerd[1710]: time="2026-03-14T00:26:18.983620927Z" level=info msg="StopPodSandbox for \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\"" Mar 14 00:26:19.049451 containerd[1710]: 2026-03-14 00:26:19.018 [WARNING][6578] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d1646e2c-7985-4065-908a-efcf39003c75", ResourceVersion:"1182", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4", Pod:"goldmane-5b85766d88-zk5wg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.3.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calif2625cb4e28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:26:19.049451 containerd[1710]: 2026-03-14 00:26:19.018 [INFO][6578] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Mar 14 00:26:19.049451 containerd[1710]: 2026-03-14 00:26:19.018 [INFO][6578] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" iface="eth0" netns="" Mar 14 00:26:19.049451 containerd[1710]: 2026-03-14 00:26:19.018 [INFO][6578] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Mar 14 00:26:19.049451 containerd[1710]: 2026-03-14 00:26:19.018 [INFO][6578] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Mar 14 00:26:19.049451 containerd[1710]: 2026-03-14 00:26:19.039 [INFO][6585] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" HandleID="k8s-pod-network.adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Workload="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" Mar 14 00:26:19.049451 containerd[1710]: 2026-03-14 00:26:19.039 [INFO][6585] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:26:19.049451 containerd[1710]: 2026-03-14 00:26:19.039 [INFO][6585] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:26:19.049451 containerd[1710]: 2026-03-14 00:26:19.045 [WARNING][6585] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" HandleID="k8s-pod-network.adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Workload="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" Mar 14 00:26:19.049451 containerd[1710]: 2026-03-14 00:26:19.045 [INFO][6585] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" HandleID="k8s-pod-network.adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Workload="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" Mar 14 00:26:19.049451 containerd[1710]: 2026-03-14 00:26:19.046 [INFO][6585] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:26:19.049451 containerd[1710]: 2026-03-14 00:26:19.048 [INFO][6578] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Mar 14 00:26:19.049451 containerd[1710]: time="2026-03-14T00:26:19.049355243Z" level=info msg="TearDown network for sandbox \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\" successfully" Mar 14 00:26:19.049451 containerd[1710]: time="2026-03-14T00:26:19.049377544Z" level=info msg="StopPodSandbox for \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\" returns successfully" Mar 14 00:26:19.050408 containerd[1710]: time="2026-03-14T00:26:19.050181956Z" level=info msg="RemovePodSandbox for \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\"" Mar 14 00:26:19.050408 containerd[1710]: time="2026-03-14T00:26:19.050310558Z" level=info msg="Forcibly stopping sandbox \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\"" Mar 14 00:26:19.117545 containerd[1710]: 2026-03-14 00:26:19.085 [WARNING][6599] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d1646e2c-7985-4065-908a-efcf39003c75", ResourceVersion:"1182", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"7ae04a311507035562919c967475861907bcb252c78949e6f919a784585378d4", Pod:"goldmane-5b85766d88-zk5wg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.3.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif2625cb4e28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:26:19.117545 containerd[1710]: 2026-03-14 00:26:19.085 [INFO][6599] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Mar 14 00:26:19.117545 containerd[1710]: 2026-03-14 00:26:19.085 [INFO][6599] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" iface="eth0" netns="" Mar 14 00:26:19.117545 containerd[1710]: 2026-03-14 00:26:19.085 [INFO][6599] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Mar 14 00:26:19.117545 containerd[1710]: 2026-03-14 00:26:19.086 [INFO][6599] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Mar 14 00:26:19.117545 containerd[1710]: 2026-03-14 00:26:19.106 [INFO][6606] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" HandleID="k8s-pod-network.adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Workload="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" Mar 14 00:26:19.117545 containerd[1710]: 2026-03-14 00:26:19.106 [INFO][6606] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:26:19.117545 containerd[1710]: 2026-03-14 00:26:19.106 [INFO][6606] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:26:19.117545 containerd[1710]: 2026-03-14 00:26:19.113 [WARNING][6606] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" HandleID="k8s-pod-network.adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Workload="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" Mar 14 00:26:19.117545 containerd[1710]: 2026-03-14 00:26:19.113 [INFO][6606] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" HandleID="k8s-pod-network.adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Workload="ci--4081.3.6--n--2b39e14e44-k8s-goldmane--5b85766d88--zk5wg-eth0" Mar 14 00:26:19.117545 containerd[1710]: 2026-03-14 00:26:19.114 [INFO][6606] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:26:19.117545 containerd[1710]: 2026-03-14 00:26:19.116 [INFO][6599] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2" Mar 14 00:26:19.119295 containerd[1710]: time="2026-03-14T00:26:19.117511297Z" level=info msg="TearDown network for sandbox \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\" successfully" Mar 14 00:26:19.127180 containerd[1710]: time="2026-03-14T00:26:19.127141646Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:26:19.127396 containerd[1710]: time="2026-03-14T00:26:19.127368550Z" level=info msg="RemovePodSandbox \"adbceb37310bbf3f3841445aac4e28fcc960070f615dd6d4958ca81a4ff19fc2\" returns successfully" Mar 14 00:26:19.128005 containerd[1710]: time="2026-03-14T00:26:19.127975059Z" level=info msg="StopPodSandbox for \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\"" Mar 14 00:26:19.240155 containerd[1710]: 2026-03-14 00:26:19.185 [WARNING][6620] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8", Pod:"csi-node-driver-8xrtx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali50cce80da6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:26:19.240155 containerd[1710]: 2026-03-14 00:26:19.185 [INFO][6620] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Mar 14 00:26:19.240155 containerd[1710]: 2026-03-14 00:26:19.185 [INFO][6620] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" iface="eth0" netns="" Mar 14 00:26:19.240155 containerd[1710]: 2026-03-14 00:26:19.185 [INFO][6620] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Mar 14 00:26:19.240155 containerd[1710]: 2026-03-14 00:26:19.185 [INFO][6620] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Mar 14 00:26:19.240155 containerd[1710]: 2026-03-14 00:26:19.219 [INFO][6636] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" HandleID="k8s-pod-network.d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Workload="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" Mar 14 00:26:19.240155 containerd[1710]: 2026-03-14 00:26:19.220 [INFO][6636] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:26:19.240155 containerd[1710]: 2026-03-14 00:26:19.220 [INFO][6636] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:26:19.240155 containerd[1710]: 2026-03-14 00:26:19.230 [WARNING][6636] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" HandleID="k8s-pod-network.d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Workload="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" Mar 14 00:26:19.240155 containerd[1710]: 2026-03-14 00:26:19.230 [INFO][6636] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" HandleID="k8s-pod-network.d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Workload="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" Mar 14 00:26:19.240155 containerd[1710]: 2026-03-14 00:26:19.232 [INFO][6636] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:26:19.240155 containerd[1710]: 2026-03-14 00:26:19.236 [INFO][6620] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Mar 14 00:26:19.240155 containerd[1710]: time="2026-03-14T00:26:19.239401883Z" level=info msg="TearDown network for sandbox \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\" successfully" Mar 14 00:26:19.240155 containerd[1710]: time="2026-03-14T00:26:19.239429383Z" level=info msg="StopPodSandbox for \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\" returns successfully" Mar 14 00:26:19.245619 containerd[1710]: time="2026-03-14T00:26:19.242795235Z" level=info msg="RemovePodSandbox for \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\"" Mar 14 00:26:19.245619 containerd[1710]: time="2026-03-14T00:26:19.242838636Z" level=info msg="Forcibly stopping sandbox \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\"" Mar 14 00:26:19.328579 containerd[1710]: 2026-03-14 00:26:19.298 [WARNING][6650] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3cb8870-8b1c-43cc-a3a0-f78adc52bc6a", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"71ee1f600c3aece1399f3af7b7c32d9619154b94298d1dea870f3e7f5d02eef8", Pod:"csi-node-driver-8xrtx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali50cce80da6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:26:19.328579 containerd[1710]: 2026-03-14 00:26:19.298 [INFO][6650] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Mar 14 00:26:19.328579 containerd[1710]: 2026-03-14 00:26:19.298 [INFO][6650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" iface="eth0" netns="" Mar 14 00:26:19.328579 containerd[1710]: 2026-03-14 00:26:19.298 [INFO][6650] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Mar 14 00:26:19.328579 containerd[1710]: 2026-03-14 00:26:19.298 [INFO][6650] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Mar 14 00:26:19.328579 containerd[1710]: 2026-03-14 00:26:19.318 [INFO][6657] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" HandleID="k8s-pod-network.d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Workload="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" Mar 14 00:26:19.328579 containerd[1710]: 2026-03-14 00:26:19.318 [INFO][6657] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:26:19.328579 containerd[1710]: 2026-03-14 00:26:19.318 [INFO][6657] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:26:19.328579 containerd[1710]: 2026-03-14 00:26:19.324 [WARNING][6657] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" HandleID="k8s-pod-network.d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Workload="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" Mar 14 00:26:19.328579 containerd[1710]: 2026-03-14 00:26:19.324 [INFO][6657] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" HandleID="k8s-pod-network.d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Workload="ci--4081.3.6--n--2b39e14e44-k8s-csi--node--driver--8xrtx-eth0" Mar 14 00:26:19.328579 containerd[1710]: 2026-03-14 00:26:19.326 [INFO][6657] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:26:19.328579 containerd[1710]: 2026-03-14 00:26:19.327 [INFO][6650] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e" Mar 14 00:26:19.329260 containerd[1710]: time="2026-03-14T00:26:19.328627263Z" level=info msg="TearDown network for sandbox \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\" successfully" Mar 14 00:26:19.342879 containerd[1710]: time="2026-03-14T00:26:19.342755481Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:26:19.343147 containerd[1710]: time="2026-03-14T00:26:19.342948284Z" level=info msg="RemovePodSandbox \"d5156e622a2b9d13d3b2a3720623a3ca721c14e4649fdccd8daa54c29312055e\" returns successfully" Mar 14 00:26:19.344001 containerd[1710]: time="2026-03-14T00:26:19.343962200Z" level=info msg="StopPodSandbox for \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\"" Mar 14 00:26:19.366829 sshd[6435]: pam_unix(sshd:session): session closed for user core Mar 14 00:26:19.372974 systemd[1]: sshd@9-10.200.8.29:22-10.200.16.10:57248.service: Deactivated successfully. Mar 14 00:26:19.375398 systemd-logind[1694]: Session 12 logged out. Waiting for processes to exit. Mar 14 00:26:19.376932 systemd[1]: session-12.scope: Deactivated successfully. Mar 14 00:26:19.378955 systemd-logind[1694]: Removed session 12. Mar 14 00:26:19.416063 containerd[1710]: 2026-03-14 00:26:19.383 [WARNING][6671] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e9f998d1-a481-416b-8656-d361e4f465a0", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60", Pod:"coredns-674b8bbfcf-pbd56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12523172d58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:26:19.416063 containerd[1710]: 2026-03-14 
00:26:19.384 [INFO][6671] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Mar 14 00:26:19.416063 containerd[1710]: 2026-03-14 00:26:19.384 [INFO][6671] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" iface="eth0" netns="" Mar 14 00:26:19.416063 containerd[1710]: 2026-03-14 00:26:19.384 [INFO][6671] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Mar 14 00:26:19.416063 containerd[1710]: 2026-03-14 00:26:19.384 [INFO][6671] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Mar 14 00:26:19.416063 containerd[1710]: 2026-03-14 00:26:19.405 [INFO][6680] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" HandleID="k8s-pod-network.59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" Mar 14 00:26:19.416063 containerd[1710]: 2026-03-14 00:26:19.405 [INFO][6680] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:26:19.416063 containerd[1710]: 2026-03-14 00:26:19.405 [INFO][6680] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:26:19.416063 containerd[1710]: 2026-03-14 00:26:19.412 [WARNING][6680] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" HandleID="k8s-pod-network.59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" Mar 14 00:26:19.416063 containerd[1710]: 2026-03-14 00:26:19.412 [INFO][6680] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" HandleID="k8s-pod-network.59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" Mar 14 00:26:19.416063 containerd[1710]: 2026-03-14 00:26:19.413 [INFO][6680] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:26:19.416063 containerd[1710]: 2026-03-14 00:26:19.414 [INFO][6671] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Mar 14 00:26:19.416724 containerd[1710]: time="2026-03-14T00:26:19.416116316Z" level=info msg="TearDown network for sandbox \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\" successfully" Mar 14 00:26:19.416724 containerd[1710]: time="2026-03-14T00:26:19.416147816Z" level=info msg="StopPodSandbox for \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\" returns successfully" Mar 14 00:26:19.416838 containerd[1710]: time="2026-03-14T00:26:19.416808727Z" level=info msg="RemovePodSandbox for \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\"" Mar 14 00:26:19.416886 containerd[1710]: time="2026-03-14T00:26:19.416850627Z" level=info msg="Forcibly stopping sandbox \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\"" Mar 14 00:26:19.484397 containerd[1710]: 2026-03-14 00:26:19.452 [WARNING][6694] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e9f998d1-a481-416b-8656-d361e4f465a0", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"1019c62285cb9a582c7efdd0b80122324dd7779c63c3779aa3ba394c47107a60", Pod:"coredns-674b8bbfcf-pbd56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12523172d58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:26:19.484397 containerd[1710]: 2026-03-14 
00:26:19.452 [INFO][6694] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Mar 14 00:26:19.484397 containerd[1710]: 2026-03-14 00:26:19.452 [INFO][6694] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" iface="eth0" netns="" Mar 14 00:26:19.484397 containerd[1710]: 2026-03-14 00:26:19.452 [INFO][6694] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Mar 14 00:26:19.484397 containerd[1710]: 2026-03-14 00:26:19.452 [INFO][6694] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Mar 14 00:26:19.484397 containerd[1710]: 2026-03-14 00:26:19.473 [INFO][6702] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" HandleID="k8s-pod-network.59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" Mar 14 00:26:19.484397 containerd[1710]: 2026-03-14 00:26:19.473 [INFO][6702] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:26:19.484397 containerd[1710]: 2026-03-14 00:26:19.473 [INFO][6702] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:26:19.484397 containerd[1710]: 2026-03-14 00:26:19.480 [WARNING][6702] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" HandleID="k8s-pod-network.59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" Mar 14 00:26:19.484397 containerd[1710]: 2026-03-14 00:26:19.480 [INFO][6702] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" HandleID="k8s-pod-network.59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Workload="ci--4081.3.6--n--2b39e14e44-k8s-coredns--674b8bbfcf--pbd56-eth0" Mar 14 00:26:19.484397 containerd[1710]: 2026-03-14 00:26:19.481 [INFO][6702] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:26:19.484397 containerd[1710]: 2026-03-14 00:26:19.483 [INFO][6694] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed" Mar 14 00:26:19.485074 containerd[1710]: time="2026-03-14T00:26:19.484424073Z" level=info msg="TearDown network for sandbox \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\" successfully" Mar 14 00:26:19.495079 containerd[1710]: time="2026-03-14T00:26:19.495016636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:26:19.495216 containerd[1710]: time="2026-03-14T00:26:19.495108938Z" level=info msg="RemovePodSandbox \"59500b6a41d30c58a991f3adebab132f82187efba1af499daad932688958b1ed\" returns successfully" Mar 14 00:26:19.495722 containerd[1710]: time="2026-03-14T00:26:19.495689347Z" level=info msg="StopPodSandbox for \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\"" Mar 14 00:26:19.563746 containerd[1710]: 2026-03-14 00:26:19.529 [WARNING][6717] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0", GenerateName:"calico-kube-controllers-65cb8d5ccb-", Namespace:"calico-system", SelfLink:"", UID:"9da3f336-22fe-472c-9456-c10265166894", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65cb8d5ccb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c", Pod:"calico-kube-controllers-65cb8d5ccb-s59nv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.68/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic94868ecc81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 14 00:26:19.563746 containerd[1710]: 2026-03-14 00:26:19.529 [INFO][6717] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953"
Mar 14 00:26:19.563746 containerd[1710]: 2026-03-14 00:26:19.529 [INFO][6717] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" iface="eth0" netns=""
Mar 14 00:26:19.563746 containerd[1710]: 2026-03-14 00:26:19.529 [INFO][6717] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953"
Mar 14 00:26:19.563746 containerd[1710]: 2026-03-14 00:26:19.529 [INFO][6717] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953"
Mar 14 00:26:19.563746 containerd[1710]: 2026-03-14 00:26:19.551 [INFO][6724] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" HandleID="k8s-pod-network.e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0"
Mar 14 00:26:19.563746 containerd[1710]: 2026-03-14 00:26:19.551 [INFO][6724] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:26:19.563746 containerd[1710]: 2026-03-14 00:26:19.551 [INFO][6724] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:26:19.563746 containerd[1710]: 2026-03-14 00:26:19.559 [WARNING][6724] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist.
Ignoring ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" HandleID="k8s-pod-network.e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0"
Mar 14 00:26:19.563746 containerd[1710]: 2026-03-14 00:26:19.559 [INFO][6724] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" HandleID="k8s-pod-network.e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0"
Mar 14 00:26:19.563746 containerd[1710]: 2026-03-14 00:26:19.561 [INFO][6724] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:26:19.563746 containerd[1710]: 2026-03-14 00:26:19.562 [INFO][6717] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953"
Mar 14 00:26:19.565000 containerd[1710]: time="2026-03-14T00:26:19.563791000Z" level=info msg="TearDown network for sandbox \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\" successfully"
Mar 14 00:26:19.565000 containerd[1710]: time="2026-03-14T00:26:19.563824401Z" level=info msg="StopPodSandbox for \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\" returns successfully"
Mar 14 00:26:19.565000 containerd[1710]: time="2026-03-14T00:26:19.564443810Z" level=info msg="RemovePodSandbox for \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\""
Mar 14 00:26:19.565000 containerd[1710]: time="2026-03-14T00:26:19.564481511Z" level=info msg="Forcibly stopping sandbox \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\""
Mar 14 00:26:19.631205 containerd[1710]: 2026-03-14 00:26:19.599 [WARNING][6738] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0", GenerateName:"calico-kube-controllers-65cb8d5ccb-", Namespace:"calico-system", SelfLink:"", UID:"9da3f336-22fe-472c-9456-c10265166894", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65cb8d5ccb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2b39e14e44", ContainerID:"7ae55f9a9b449ca4a67ebc13a855760577e2d1c8ca1f14456bdf66201caabb0c", Pod:"calico-kube-controllers-65cb8d5ccb-s59nv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic94868ecc81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 14 00:26:19.631205 containerd[1710]: 2026-03-14 00:26:19.599 [INFO][6738] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953"
Mar 14 00:26:19.631205 containerd[1710]: 2026-03-14 00:26:19.599 [INFO][6738]
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" iface="eth0" netns=""
Mar 14 00:26:19.631205 containerd[1710]: 2026-03-14 00:26:19.599 [INFO][6738] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953"
Mar 14 00:26:19.631205 containerd[1710]: 2026-03-14 00:26:19.599 [INFO][6738] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953"
Mar 14 00:26:19.631205 containerd[1710]: 2026-03-14 00:26:19.621 [INFO][6745] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" HandleID="k8s-pod-network.e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0"
Mar 14 00:26:19.631205 containerd[1710]: 2026-03-14 00:26:19.621 [INFO][6745] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:26:19.631205 containerd[1710]: 2026-03-14 00:26:19.621 [INFO][6745] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:26:19.631205 containerd[1710]: 2026-03-14 00:26:19.627 [WARNING][6745] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist.
Ignoring ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" HandleID="k8s-pod-network.e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0"
Mar 14 00:26:19.631205 containerd[1710]: 2026-03-14 00:26:19.627 [INFO][6745] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" HandleID="k8s-pod-network.e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953" Workload="ci--4081.3.6--n--2b39e14e44-k8s-calico--kube--controllers--65cb8d5ccb--s59nv-eth0"
Mar 14 00:26:19.631205 containerd[1710]: 2026-03-14 00:26:19.628 [INFO][6745] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:26:19.631205 containerd[1710]: 2026-03-14 00:26:19.629 [INFO][6738] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953"
Mar 14 00:26:19.632122 containerd[1710]: time="2026-03-14T00:26:19.631307144Z" level=info msg="TearDown network for sandbox \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\" successfully"
Mar 14 00:26:19.641847 containerd[1710]: time="2026-03-14T00:26:19.641805507Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:26:19.641966 containerd[1710]: time="2026-03-14T00:26:19.641880408Z" level=info msg="RemovePodSandbox \"e9ef56e71bb9f20867ef5251d3ccda2900e7dad05cdc30a148f91328b709e953\" returns successfully"
Mar 14 00:26:24.475537 systemd[1]: Started sshd@10-10.200.8.29:22-10.200.16.10:54468.service - OpenSSH per-connection server daemon (10.200.16.10:54468).
Mar 14 00:26:25.095277 sshd[6754]: Accepted publickey for core from 10.200.16.10 port 54468 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:26:25.096434 sshd[6754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:25.100858 systemd-logind[1694]: New session 13 of user core.
Mar 14 00:26:25.105411 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:26:25.251258 systemd[1]: run-containerd-runc-k8s.io-003864a81fda186a7d096dbf674037a5eee33ba06dfc7ca6dca7f7da69f9ac44-runc.AvA8Dl.mount: Deactivated successfully.
Mar 14 00:26:25.597264 sshd[6754]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:25.600967 systemd[1]: sshd@10-10.200.8.29:22-10.200.16.10:54468.service: Deactivated successfully.
Mar 14 00:26:25.603582 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 00:26:25.605653 systemd-logind[1694]: Session 13 logged out. Waiting for processes to exit.
Mar 14 00:26:25.607168 systemd-logind[1694]: Removed session 13.
Mar 14 00:26:30.711576 systemd[1]: Started sshd@11-10.200.8.29:22-10.200.16.10:50528.service - OpenSSH per-connection server daemon (10.200.16.10:50528).
Mar 14 00:26:31.333281 sshd[6846]: Accepted publickey for core from 10.200.16.10 port 50528 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:26:31.334037 sshd[6846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:31.337975 systemd-logind[1694]: New session 14 of user core.
Mar 14 00:26:31.343714 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 00:26:31.840213 sshd[6846]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:31.844403 systemd[1]: sshd@11-10.200.8.29:22-10.200.16.10:50528.service: Deactivated successfully.
Mar 14 00:26:31.846684 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 00:26:31.847593 systemd-logind[1694]: Session 14 logged out. Waiting for processes to exit.
Mar 14 00:26:31.848722 systemd-logind[1694]: Removed session 14.
Mar 14 00:26:36.950545 systemd[1]: Started sshd@12-10.200.8.29:22-10.200.16.10:50540.service - OpenSSH per-connection server daemon (10.200.16.10:50540).
Mar 14 00:26:37.568481 sshd[6874]: Accepted publickey for core from 10.200.16.10 port 50540 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:26:37.570107 sshd[6874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:37.574876 systemd-logind[1694]: New session 15 of user core.
Mar 14 00:26:37.579385 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 14 00:26:38.071145 sshd[6874]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:38.075191 systemd[1]: sshd@12-10.200.8.29:22-10.200.16.10:50540.service: Deactivated successfully.
Mar 14 00:26:38.077573 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 00:26:38.078313 systemd-logind[1694]: Session 15 logged out. Waiting for processes to exit.
Mar 14 00:26:38.079366 systemd-logind[1694]: Removed session 15.
Mar 14 00:26:43.186534 systemd[1]: Started sshd@13-10.200.8.29:22-10.200.16.10:36396.service - OpenSSH per-connection server daemon (10.200.16.10:36396).
Mar 14 00:26:43.812515 sshd[6929]: Accepted publickey for core from 10.200.16.10 port 36396 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:26:43.814038 sshd[6929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:43.818721 systemd-logind[1694]: New session 16 of user core.
Mar 14 00:26:43.822397 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:26:44.312922 sshd[6929]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:44.316787 systemd-logind[1694]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:26:44.317758 systemd[1]: sshd@13-10.200.8.29:22-10.200.16.10:36396.service: Deactivated successfully.
Mar 14 00:26:44.320794 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:26:44.321848 systemd-logind[1694]: Removed session 16.
Mar 14 00:26:44.423483 systemd[1]: Started sshd@14-10.200.8.29:22-10.200.16.10:36412.service - OpenSSH per-connection server daemon (10.200.16.10:36412).
Mar 14 00:26:45.053917 sshd[6942]: Accepted publickey for core from 10.200.16.10 port 36412 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:26:45.055521 sshd[6942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:45.060363 systemd-logind[1694]: New session 17 of user core.
Mar 14 00:26:45.064625 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:26:45.588131 sshd[6942]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:45.592194 systemd[1]: sshd@14-10.200.8.29:22-10.200.16.10:36412.service: Deactivated successfully.
Mar 14 00:26:45.594502 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:26:45.595290 systemd-logind[1694]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:26:45.596343 systemd-logind[1694]: Removed session 17.
Mar 14 00:26:45.707522 systemd[1]: Started sshd@15-10.200.8.29:22-10.200.16.10:36426.service - OpenSSH per-connection server daemon (10.200.16.10:36426).
Mar 14 00:26:46.329911 sshd[6963]: Accepted publickey for core from 10.200.16.10 port 36426 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:26:46.333314 sshd[6963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:46.340175 systemd-logind[1694]: New session 18 of user core.
Mar 14 00:26:46.344394 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:26:46.834627 sshd[6963]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:46.837711 systemd[1]: sshd@15-10.200.8.29:22-10.200.16.10:36426.service: Deactivated successfully.
Mar 14 00:26:46.840472 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:26:46.842197 systemd-logind[1694]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:26:46.843565 systemd-logind[1694]: Removed session 18.
Mar 14 00:26:51.953549 systemd[1]: Started sshd@16-10.200.8.29:22-10.200.16.10:40358.service - OpenSSH per-connection server daemon (10.200.16.10:40358).
Mar 14 00:26:52.571981 sshd[7008]: Accepted publickey for core from 10.200.16.10 port 40358 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:26:52.573648 sshd[7008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:52.578213 systemd-logind[1694]: New session 19 of user core.
Mar 14 00:26:52.582408 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:26:53.081868 sshd[7008]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:53.085839 systemd[1]: sshd@16-10.200.8.29:22-10.200.16.10:40358.service: Deactivated successfully.
Mar 14 00:26:53.088064 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:26:53.088838 systemd-logind[1694]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:26:53.090332 systemd-logind[1694]: Removed session 19.
Mar 14 00:26:53.199548 systemd[1]: Started sshd@17-10.200.8.29:22-10.200.16.10:40368.service - OpenSSH per-connection server daemon (10.200.16.10:40368).
Mar 14 00:26:53.819265 sshd[7020]: Accepted publickey for core from 10.200.16.10 port 40368 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:26:53.820417 sshd[7020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:53.825335 systemd-logind[1694]: New session 20 of user core.
Mar 14 00:26:53.829660 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:26:54.382339 sshd[7020]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:54.386387 systemd-logind[1694]: Session 20 logged out. Waiting for processes to exit.
Mar 14 00:26:54.387130 systemd[1]: sshd@17-10.200.8.29:22-10.200.16.10:40368.service: Deactivated successfully.
Mar 14 00:26:54.389894 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 00:26:54.390989 systemd-logind[1694]: Removed session 20.
Mar 14 00:26:54.498562 systemd[1]: Started sshd@18-10.200.8.29:22-10.200.16.10:40382.service - OpenSSH per-connection server daemon (10.200.16.10:40382).
Mar 14 00:26:55.121100 sshd[7033]: Accepted publickey for core from 10.200.16.10 port 40382 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:26:55.122941 sshd[7033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:55.128180 systemd-logind[1694]: New session 21 of user core.
Mar 14 00:26:55.135679 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 14 00:26:56.149648 sshd[7033]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:56.154195 systemd[1]: sshd@18-10.200.8.29:22-10.200.16.10:40382.service: Deactivated successfully.
Mar 14 00:26:56.156528 systemd[1]: session-21.scope: Deactivated successfully.
Mar 14 00:26:56.157431 systemd-logind[1694]: Session 21 logged out. Waiting for processes to exit.
Mar 14 00:26:56.158596 systemd-logind[1694]: Removed session 21.
Mar 14 00:26:56.263691 systemd[1]: Started sshd@19-10.200.8.29:22-10.200.16.10:40394.service - OpenSSH per-connection server daemon (10.200.16.10:40394).
Mar 14 00:26:56.745135 systemd[1]: run-containerd-runc-k8s.io-003864a81fda186a7d096dbf674037a5eee33ba06dfc7ca6dca7f7da69f9ac44-runc.hcgmFz.mount: Deactivated successfully.
Mar 14 00:26:56.882467 sshd[7059]: Accepted publickey for core from 10.200.16.10 port 40394 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:26:56.884271 sshd[7059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:56.889011 systemd-logind[1694]: New session 22 of user core.
Mar 14 00:26:56.894378 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 14 00:26:57.496671 sshd[7059]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:57.499980 systemd[1]: sshd@19-10.200.8.29:22-10.200.16.10:40394.service: Deactivated successfully.
Mar 14 00:26:57.502561 systemd[1]: session-22.scope: Deactivated successfully.
Mar 14 00:26:57.504176 systemd-logind[1694]: Session 22 logged out. Waiting for processes to exit.
Mar 14 00:26:57.505496 systemd-logind[1694]: Removed session 22.
Mar 14 00:26:57.611563 systemd[1]: Started sshd@20-10.200.8.29:22-10.200.16.10:40410.service - OpenSSH per-connection server daemon (10.200.16.10:40410).
Mar 14 00:26:58.230539 sshd[7090]: Accepted publickey for core from 10.200.16.10 port 40410 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:26:58.231891 sshd[7090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:58.236816 systemd-logind[1694]: New session 23 of user core.
Mar 14 00:26:58.242393 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 14 00:26:58.727432 sshd[7090]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:58.730369 systemd[1]: sshd@20-10.200.8.29:22-10.200.16.10:40410.service: Deactivated successfully.
Mar 14 00:26:58.732999 systemd[1]: session-23.scope: Deactivated successfully.
Mar 14 00:26:58.734921 systemd-logind[1694]: Session 23 logged out. Waiting for processes to exit.
Mar 14 00:26:58.736266 systemd-logind[1694]: Removed session 23.
Mar 14 00:27:03.844529 systemd[1]: Started sshd@21-10.200.8.29:22-10.200.16.10:60992.service - OpenSSH per-connection server daemon (10.200.16.10:60992).
Mar 14 00:27:04.466575 sshd[7124]: Accepted publickey for core from 10.200.16.10 port 60992 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:27:04.468138 sshd[7124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:27:04.473113 systemd-logind[1694]: New session 24 of user core.
Mar 14 00:27:04.479403 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 14 00:27:04.968449 sshd[7124]: pam_unix(sshd:session): session closed for user core
Mar 14 00:27:04.973937 systemd-logind[1694]: Session 24 logged out. Waiting for processes to exit.
Mar 14 00:27:04.974780 systemd[1]: sshd@21-10.200.8.29:22-10.200.16.10:60992.service: Deactivated successfully.
Mar 14 00:27:04.976857 systemd[1]: session-24.scope: Deactivated successfully.
Mar 14 00:27:04.978144 systemd-logind[1694]: Removed session 24.
Mar 14 00:27:10.085608 systemd[1]: Started sshd@22-10.200.8.29:22-10.200.16.10:51706.service - OpenSSH per-connection server daemon (10.200.16.10:51706).
Mar 14 00:27:10.707592 sshd[7136]: Accepted publickey for core from 10.200.16.10 port 51706 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:27:10.709179 sshd[7136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:27:10.718593 systemd-logind[1694]: New session 25 of user core.
Mar 14 00:27:10.724413 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 14 00:27:11.208178 sshd[7136]: pam_unix(sshd:session): session closed for user core
Mar 14 00:27:11.212277 systemd[1]: sshd@22-10.200.8.29:22-10.200.16.10:51706.service: Deactivated successfully.
Mar 14 00:27:11.214875 systemd[1]: session-25.scope: Deactivated successfully.
Mar 14 00:27:11.215685 systemd-logind[1694]: Session 25 logged out. Waiting for processes to exit.
Mar 14 00:27:11.216749 systemd-logind[1694]: Removed session 25.
Mar 14 00:27:16.323588 systemd[1]: Started sshd@23-10.200.8.29:22-10.200.16.10:51718.service - OpenSSH per-connection server daemon (10.200.16.10:51718).
Mar 14 00:27:16.946869 sshd[7171]: Accepted publickey for core from 10.200.16.10 port 51718 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:27:16.948475 sshd[7171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:27:16.953437 systemd-logind[1694]: New session 26 of user core.
Mar 14 00:27:16.957372 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 14 00:27:17.444157 sshd[7171]: pam_unix(sshd:session): session closed for user core
Mar 14 00:27:17.447148 systemd[1]: sshd@23-10.200.8.29:22-10.200.16.10:51718.service: Deactivated successfully.
Mar 14 00:27:17.450008 systemd[1]: session-26.scope: Deactivated successfully.
Mar 14 00:27:17.451562 systemd-logind[1694]: Session 26 logged out. Waiting for processes to exit.
Mar 14 00:27:17.452778 systemd-logind[1694]: Removed session 26.
Mar 14 00:27:22.559540 systemd[1]: Started sshd@24-10.200.8.29:22-10.200.16.10:36036.service - OpenSSH per-connection server daemon (10.200.16.10:36036).
Mar 14 00:27:23.176031 sshd[7186]: Accepted publickey for core from 10.200.16.10 port 36036 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:27:23.177696 sshd[7186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:27:23.184074 systemd-logind[1694]: New session 27 of user core.
Mar 14 00:27:23.192375 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 14 00:27:23.678479 sshd[7186]: pam_unix(sshd:session): session closed for user core
Mar 14 00:27:23.681817 systemd[1]: sshd@24-10.200.8.29:22-10.200.16.10:36036.service: Deactivated successfully.
Mar 14 00:27:23.684773 systemd[1]: session-27.scope: Deactivated successfully.
Mar 14 00:27:23.686291 systemd-logind[1694]: Session 27 logged out. Waiting for processes to exit.
Mar 14 00:27:23.687445 systemd-logind[1694]: Removed session 27.
Mar 14 00:27:28.787440 systemd[1]: Started sshd@25-10.200.8.29:22-10.200.16.10:36044.service - OpenSSH per-connection server daemon (10.200.16.10:36044).
Mar 14 00:27:29.418252 sshd[7261]: Accepted publickey for core from 10.200.16.10 port 36044 ssh2: RSA SHA256:CNjwgcy7HwECUN0y37L8fFJ04E3kshXG3Tp6Lal6M4M
Mar 14 00:27:29.419312 sshd[7261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:27:29.426155 systemd-logind[1694]: New session 28 of user core.
Mar 14 00:27:29.432627 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 14 00:27:29.754541 systemd[1]: run-containerd-runc-k8s.io-0f4b412afc0cd346fcc5306f0961ebab41be6def18810a906e69bd5359f4e12f-runc.LeAFz3.mount: Deactivated successfully.
Mar 14 00:27:29.934833 sshd[7261]: pam_unix(sshd:session): session closed for user core
Mar 14 00:27:29.937775 systemd[1]: sshd@25-10.200.8.29:22-10.200.16.10:36044.service: Deactivated successfully.
Mar 14 00:27:29.940072 systemd[1]: session-28.scope: Deactivated successfully.
Mar 14 00:27:29.941686 systemd-logind[1694]: Session 28 logged out. Waiting for processes to exit.
Mar 14 00:27:29.943039 systemd-logind[1694]: Removed session 28.