Mar 7 01:15:11.133082 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:15:11.133127 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:15:11.133147 kernel: BIOS-provided physical RAM map:
Mar 7 01:15:11.133157 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 7 01:15:11.133167 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Mar 7 01:15:11.133175 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable
Mar 7 01:15:11.133188 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved
Mar 7 01:15:11.133199 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable
Mar 7 01:15:11.133215 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20
Mar 7 01:15:11.133227 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved
Mar 7 01:15:11.133239 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Mar 7 01:15:11.133253 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Mar 7 01:15:11.133264 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Mar 7 01:15:11.133276 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Mar 7 01:15:11.133294 kernel: printk: bootconsole [earlyser0] enabled
Mar 7 01:15:11.133307 kernel: NX (Execute Disable) protection: active
Mar 7 01:15:11.133324 kernel: APIC: Static calls initialized
Mar 7 01:15:11.133340 kernel: efi: EFI v2.7 by Microsoft
Mar 7 01:15:11.133352 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f421418
Mar 7 01:15:11.133363 kernel: SMBIOS 3.1.0 present.
Mar 7 01:15:11.133376 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Mar 7 01:15:11.133390 kernel: Hypervisor detected: Microsoft Hyper-V
Mar 7 01:15:11.133403 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Mar 7 01:15:11.133416 kernel: Hyper-V: Host Build 10.0.26102.1212-1-0
Mar 7 01:15:11.133429 kernel: Hyper-V: Nested features: 0x1e0101
Mar 7 01:15:11.133446 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Mar 7 01:15:11.133459 kernel: Hyper-V: Using hypercall for remote TLB flush
Mar 7 01:15:11.133473 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Mar 7 01:15:11.133486 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Mar 7 01:15:11.133501 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Mar 7 01:15:11.133515 kernel: tsc: Detected 2593.906 MHz processor
Mar 7 01:15:11.133528 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:15:11.133542 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:15:11.133555 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Mar 7 01:15:11.133570 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 7 01:15:11.133583 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:15:11.133596 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Mar 7 01:15:11.133610 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Mar 7 01:15:11.133623 kernel: Using GB pages for direct mapping
Mar 7 01:15:11.133636 kernel: Secure boot disabled
Mar 7 01:15:11.133655 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:15:11.133671 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Mar 7 01:15:11.133682 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:15:11.133696 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:15:11.133710 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 7 01:15:11.133722 kernel: ACPI: FACS 0x000000003FFFE000 000040
Mar 7 01:15:11.133737 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:15:11.133750 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:15:11.133767 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:15:11.133780 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:15:11.133794 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:15:11.133807 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:15:11.133820 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Mar 7 01:15:11.133833 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Mar 7 01:15:11.133847 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Mar 7 01:15:11.133862 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Mar 7 01:15:11.133876 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Mar 7 01:15:11.133893 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Mar 7 01:15:11.133907 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Mar 7 01:15:11.133922 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df]
Mar 7 01:15:11.133937 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Mar 7 01:15:11.133951 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 7 01:15:11.133965 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 7 01:15:11.133979 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Mar 7 01:15:11.133993 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Mar 7 01:15:11.134007 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Mar 7 01:15:11.134025 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Mar 7 01:15:11.134040 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Mar 7 01:15:11.134053 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Mar 7 01:15:11.134068 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Mar 7 01:15:11.134082 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Mar 7 01:15:11.134096 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Mar 7 01:15:11.134110 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Mar 7 01:15:11.139785 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Mar 7 01:15:11.139800 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Mar 7 01:15:11.139809 kernel: Zone ranges:
Mar 7 01:15:11.139821 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:15:11.139829 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 7 01:15:11.139836 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Mar 7 01:15:11.139848 kernel: Movable zone start for each node
Mar 7 01:15:11.139856 kernel: Early memory node ranges
Mar 7 01:15:11.139863 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 7 01:15:11.139873 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff]
Mar 7 01:15:11.139885 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff]
Mar 7 01:15:11.139892 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Mar 7 01:15:11.139903 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Mar 7 01:15:11.139912 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Mar 7 01:15:11.139919 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:15:11.139927 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 7 01:15:11.139938 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Mar 7 01:15:11.139946 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Mar 7 01:15:11.139953 kernel: ACPI: PM-Timer IO Port: 0x408
Mar 7 01:15:11.139965 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Mar 7 01:15:11.139975 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:15:11.139982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:15:11.139990 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:15:11.140001 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Mar 7 01:15:11.140009 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 7 01:15:11.140016 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Mar 7 01:15:11.140028 kernel: Booting paravirtualized kernel on Hyper-V
Mar 7 01:15:11.140036 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:15:11.140046 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 7 01:15:11.140058 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 7 01:15:11.140066 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 7 01:15:11.140077 kernel: pcpu-alloc: [0] 0 1
Mar 7 01:15:11.140084 kernel: Hyper-V: PV spinlocks enabled
Mar 7 01:15:11.140092 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:15:11.140100 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:15:11.140118 kernel: random: crng init done
Mar 7 01:15:11.140128 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar 7 01:15:11.140139 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:15:11.140148 kernel: Fallback order for Node 0: 0
Mar 7 01:15:11.140155 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321
Mar 7 01:15:11.140165 kernel: Policy zone: Normal
Mar 7 01:15:11.140174 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:15:11.140182 kernel: software IO TLB: area num 2.
Mar 7 01:15:11.140190 kernel: Memory: 8066048K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 316920K reserved, 0K cma-reserved)
Mar 7 01:15:11.140202 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:15:11.140218 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:15:11.140226 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:15:11.140238 kernel: Dynamic Preempt: voluntary
Mar 7 01:15:11.140249 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:15:11.140257 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:15:11.140270 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:15:11.140278 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:15:11.140286 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:15:11.140297 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:15:11.140309 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:15:11.140317 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:15:11.140329 kernel: Using NULL legacy PIC
Mar 7 01:15:11.140337 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Mar 7 01:15:11.140345 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:15:11.140357 kernel: Console: colour dummy device 80x25
Mar 7 01:15:11.140365 kernel: printk: console [tty1] enabled
Mar 7 01:15:11.140373 kernel: printk: console [ttyS0] enabled
Mar 7 01:15:11.140387 kernel: printk: bootconsole [earlyser0] disabled
Mar 7 01:15:11.140395 kernel: ACPI: Core revision 20230628
Mar 7 01:15:11.140403 kernel: Failed to register legacy timer interrupt
Mar 7 01:15:11.140415 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:15:11.140423 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 7 01:15:11.140433 kernel: Hyper-V: Using IPI hypercalls
Mar 7 01:15:11.140443 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Mar 7 01:15:11.140451 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Mar 7 01:15:11.140459 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Mar 7 01:15:11.140473 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Mar 7 01:15:11.140481 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Mar 7 01:15:11.140492 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Mar 7 01:15:11.140501 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Mar 7 01:15:11.140509 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 7 01:15:11.140519 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 7 01:15:11.140529 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:15:11.140537 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:15:11.140547 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:15:11.140557 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 7 01:15:11.140570 kernel: RETBleed: Vulnerable
Mar 7 01:15:11.140580 kernel: Speculative Store Bypass: Vulnerable
Mar 7 01:15:11.140587 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:15:11.140599 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:15:11.140607 kernel: active return thunk: its_return_thunk
Mar 7 01:15:11.140615 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 7 01:15:11.140626 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:15:11.140635 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:15:11.140643 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:15:11.140651 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 7 01:15:11.140665 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 7 01:15:11.140673 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 7 01:15:11.140681 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:15:11.140693 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Mar 7 01:15:11.140701 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Mar 7 01:15:11.140709 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Mar 7 01:15:11.140721 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Mar 7 01:15:11.140729 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:15:11.140737 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:15:11.140749 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:15:11.140757 kernel: landlock: Up and running.
Mar 7 01:15:11.140764 kernel: SELinux: Initializing.
Mar 7 01:15:11.140778 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:15:11.140786 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:15:11.140795 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 7 01:15:11.140806 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:15:11.140814 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:15:11.140822 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:15:11.140835 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 7 01:15:11.140843 kernel: signal: max sigframe size: 3632
Mar 7 01:15:11.140853 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:15:11.140865 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:15:11.140873 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:15:11.140883 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:15:11.140893 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:15:11.140901 kernel: .... node #0, CPUs: #1
Mar 7 01:15:11.140913 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Mar 7 01:15:11.140922 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 7 01:15:11.140930 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:15:11.140941 kernel: smpboot: Max logical packages: 1
Mar 7 01:15:11.140953 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Mar 7 01:15:11.140961 kernel: devtmpfs: initialized
Mar 7 01:15:11.140973 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:15:11.140981 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Mar 7 01:15:11.140989 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:15:11.141001 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:15:11.141009 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:15:11.141017 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:15:11.141029 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:15:11.141040 kernel: audit: type=2000 audit(1772846109.029:1): state=initialized audit_enabled=0 res=1
Mar 7 01:15:11.141052 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:15:11.141060 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:15:11.141068 kernel: cpuidle: using governor menu
Mar 7 01:15:11.141080 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:15:11.141088 kernel: dca service started, version 1.12.1
Mar 7 01:15:11.141096 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff]
Mar 7 01:15:11.141108 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Mar 7 01:15:11.141121 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:15:11.141132 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:15:11.141143 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:15:11.141152 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:15:11.141162 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:15:11.141172 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:15:11.141180 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:15:11.141187 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:15:11.141200 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:15:11.141210 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:15:11.141218 kernel: ACPI: Interpreter enabled
Mar 7 01:15:11.141230 kernel: ACPI: PM: (supports S0 S5)
Mar 7 01:15:11.141238 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:15:11.141246 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:15:11.141258 kernel: PCI: Ignoring E820 reservations for host bridge windows
Mar 7 01:15:11.141266 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Mar 7 01:15:11.141274 kernel: iommu: Default domain type: Translated
Mar 7 01:15:11.141286 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:15:11.141294 kernel: efivars: Registered efivars operations
Mar 7 01:15:11.141305 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:15:11.141317 kernel: PCI: System does not support PCI
Mar 7 01:15:11.141325 kernel: vgaarb: loaded
Mar 7 01:15:11.141333 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Mar 7 01:15:11.141347 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:15:11.141355 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:15:11.141366 kernel: pnp: PnP ACPI init
Mar 7 01:15:11.141375 kernel: pnp: PnP ACPI: found 3 devices
Mar 7 01:15:11.141383 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:15:11.141397 kernel: NET: Registered PF_INET protocol family
Mar 7 01:15:11.141406 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 7 01:15:11.141414 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 7 01:15:11.141426 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:15:11.141434 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:15:11.141442 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Mar 7 01:15:11.141454 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 7 01:15:11.141462 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:15:11.141470 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:15:11.141485 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:15:11.141493 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:15:11.141501 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:15:11.141513 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 7 01:15:11.141521 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB)
Mar 7 01:15:11.141533 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 7 01:15:11.141541 kernel: Initialise system trusted keyrings
Mar 7 01:15:11.141549 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 7 01:15:11.141563 kernel: Key type asymmetric registered
Mar 7 01:15:11.141571 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:15:11.141582 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:15:11.141591 kernel: io scheduler mq-deadline registered
Mar 7 01:15:11.141599 kernel: io scheduler kyber registered
Mar 7 01:15:11.141607 kernel: io scheduler bfq registered
Mar 7 01:15:11.141619 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:15:11.141627 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:15:11.141637 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:15:11.141647 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 7 01:15:11.141657 kernel: i8042: PNP: No PS/2 controller found.
Mar 7 01:15:11.141836 kernel: rtc_cmos 00:02: registered as rtc0
Mar 7 01:15:11.141999 kernel: rtc_cmos 00:02: setting system clock to 2026-03-07T01:15:10 UTC (1772846110)
Mar 7 01:15:11.144199 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Mar 7 01:15:11.144226 kernel: intel_pstate: CPU model not supported
Mar 7 01:15:11.144242 kernel: efifb: probing for efifb
Mar 7 01:15:11.144258 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 7 01:15:11.144278 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 7 01:15:11.144293 kernel: efifb: scrolling: redraw
Mar 7 01:15:11.144309 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 7 01:15:11.144324 kernel: Console: switching to colour frame buffer device 128x48
Mar 7 01:15:11.144340 kernel: fb0: EFI VGA frame buffer device
Mar 7 01:15:11.144355 kernel: pstore: Using crash dump compression: deflate
Mar 7 01:15:11.144371 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 7 01:15:11.144386 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:15:11.144401 kernel: Segment Routing with IPv6
Mar 7 01:15:11.144419 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:15:11.144435 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:15:11.144450 kernel: Key type dns_resolver registered
Mar 7 01:15:11.144465 kernel: IPI shorthand broadcast: enabled
Mar 7 01:15:11.144481 kernel: sched_clock: Marking stable (915002600, 57650500)->(1199977400, -227324300)
Mar 7 01:15:11.144496 kernel: registered taskstats version 1
Mar 7 01:15:11.144511 kernel: Loading compiled-in X.509 certificates
Mar 7 01:15:11.144527 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:15:11.144542 kernel: Key type .fscrypt registered
Mar 7 01:15:11.144559 kernel: Key type fscrypt-provisioning registered
Mar 7 01:15:11.144575 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:15:11.144590 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:15:11.144605 kernel: ima: No architecture policies found
Mar 7 01:15:11.144620 kernel: clk: Disabling unused clocks
Mar 7 01:15:11.144635 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:15:11.144651 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:15:11.144666 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:15:11.144682 kernel: Run /init as init process
Mar 7 01:15:11.144700 kernel: with arguments:
Mar 7 01:15:11.144715 kernel: /init
Mar 7 01:15:11.144730 kernel: with environment:
Mar 7 01:15:11.144744 kernel: HOME=/
Mar 7 01:15:11.144759 kernel: TERM=linux
Mar 7 01:15:11.144777 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:15:11.144796 systemd[1]: Detected virtualization microsoft.
Mar 7 01:15:11.144813 systemd[1]: Detected architecture x86-64.
Mar 7 01:15:11.144831 systemd[1]: Running in initrd.
Mar 7 01:15:11.144847 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:15:11.144862 systemd[1]: Hostname set to .
Mar 7 01:15:11.144879 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:15:11.144895 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:15:11.144911 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:15:11.144926 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:15:11.144944 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:15:11.144962 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:15:11.144979 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:15:11.144995 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:15:11.145014 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:15:11.145030 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:15:11.145047 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:15:11.145064 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:15:11.145085 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:15:11.145102 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:15:11.145135 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:15:11.145151 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:15:11.145167 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:15:11.145183 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:15:11.145200 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:15:11.145216 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:15:11.145232 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:15:11.145253 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:15:11.145269 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:15:11.145285 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:15:11.145302 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:15:11.145318 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:15:11.145335 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:15:11.145351 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:15:11.145367 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:15:11.145386 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:15:11.145422 systemd-journald[177]: Collecting audit messages is disabled.
Mar 7 01:15:11.145458 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:15:11.145474 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:15:11.145493 systemd-journald[177]: Journal started
Mar 7 01:15:11.145527 systemd-journald[177]: Runtime Journal (/run/log/journal/249b112e0b3140358a828c40b0e1e84b) is 8.0M, max 158.7M, 150.7M free.
Mar 7 01:15:11.154822 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:15:11.146350 systemd-modules-load[178]: Inserted module 'overlay'
Mar 7 01:15:11.157618 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:15:11.158196 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:15:11.168836 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:15:11.190344 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:15:11.213254 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:15:11.213298 kernel: Bridge firewalling registered
Mar 7 01:15:11.197261 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:15:11.218042 systemd-modules-load[178]: Inserted module 'br_netfilter'
Mar 7 01:15:11.226287 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:15:11.235602 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:15:11.244299 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:15:11.252477 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:15:11.258490 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:15:11.272262 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:15:11.281302 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:15:11.289351 dracut-cmdline[204]: dracut-dracut-053
Mar 7 01:15:11.293657 dracut-cmdline[204]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:15:11.314619 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:15:11.332410 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:15:11.342773 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:15:11.354584 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:15:11.394752 systemd-resolved[265]: Positive Trust Anchors: Mar 7 01:15:11.394768 systemd-resolved[265]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:15:11.394821 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:15:11.435791 kernel: SCSI subsystem initialized Mar 7 01:15:11.426165 systemd-resolved[265]: Defaulting to hostname 'linux'. Mar 7 01:15:11.427308 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:15:11.431138 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:15:11.447206 kernel: Loading iSCSI transport class v2.0-870. Mar 7 01:15:11.459134 kernel: iscsi: registered transport (tcp) Mar 7 01:15:11.483475 kernel: iscsi: registered transport (qla4xxx) Mar 7 01:15:11.483533 kernel: QLogic iSCSI HBA Driver Mar 7 01:15:11.520607 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 7 01:15:11.531290 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 01:15:11.562393 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 7 01:15:11.562480 kernel: device-mapper: uevent: version 1.0.3 Mar 7 01:15:11.566243 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 7 01:15:11.606136 kernel: raid6: avx512x4 gen() 18424 MB/s Mar 7 01:15:11.626129 kernel: raid6: avx512x2 gen() 18316 MB/s Mar 7 01:15:11.645123 kernel: raid6: avx512x1 gen() 18243 MB/s Mar 7 01:15:11.664126 kernel: raid6: avx2x4 gen() 18035 MB/s Mar 7 01:15:11.684130 kernel: raid6: avx2x2 gen() 18101 MB/s Mar 7 01:15:11.704728 kernel: raid6: avx2x1 gen() 13826 MB/s Mar 7 01:15:11.704758 kernel: raid6: using algorithm avx512x4 gen() 18424 MB/s Mar 7 01:15:11.726754 kernel: raid6: .... xor() 7413 MB/s, rmw enabled Mar 7 01:15:11.726780 kernel: raid6: using avx512x2 recovery algorithm Mar 7 01:15:11.750136 kernel: xor: automatically using best checksumming function avx Mar 7 01:15:11.898149 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 01:15:11.907921 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:15:11.919288 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:15:11.938463 systemd-udevd[397]: Using default interface naming scheme 'v255'. Mar 7 01:15:11.943054 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:15:11.959306 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 01:15:11.972386 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Mar 7 01:15:11.998618 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:15:12.014319 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:15:12.059684 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:15:12.072286 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 7 01:15:12.091374 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 01:15:12.099381 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:15:12.103040 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:15:12.113643 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:15:12.124709 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 01:15:12.150262 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:15:12.162215 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 01:15:12.179188 kernel: AVX2 version of gcm_enc/dec engaged. Mar 7 01:15:12.185130 kernel: AES CTR mode by8 optimization enabled Mar 7 01:15:12.186850 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:15:12.187011 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:15:12.191124 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:15:12.194752 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:15:12.194925 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:15:12.198796 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:15:12.220458 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:15:12.241241 kernel: hv_vmbus: Vmbus version:5.2 Mar 7 01:15:12.232680 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:15:12.232808 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:15:12.254305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:15:12.282409 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 7 01:15:12.282455 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 7 01:15:12.291135 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 7 01:15:12.296131 kernel: hv_vmbus: registering driver hid_hyperv Mar 7 01:15:12.303982 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Mar 7 01:15:12.304016 kernel: hv_vmbus: registering driver hv_storvsc Mar 7 01:15:12.313257 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 7 01:15:12.313428 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 7 01:15:12.313443 kernel: scsi host0: storvsc_host_t Mar 7 01:15:12.317670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:15:12.331072 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Mar 7 01:15:12.341139 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 7 01:15:12.341202 kernel: scsi host1: storvsc_host_t Mar 7 01:15:12.341232 kernel: PTP clock support registered Mar 7 01:15:12.345229 kernel: hv_vmbus: registering driver hv_netvsc Mar 7 01:15:12.347219 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:15:12.358042 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 7 01:15:12.385831 kernel: hv_utils: Registering HyperV Utility Driver Mar 7 01:15:12.385885 kernel: hv_vmbus: registering driver hv_utils Mar 7 01:15:12.386930 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:15:12.400661 kernel: hv_utils: Shutdown IC version 3.2 Mar 7 01:15:12.400696 kernel: hv_utils: Heartbeat IC version 3.0 Mar 7 01:15:12.400711 kernel: hv_utils: TimeSync IC version 4.0 Mar 7 01:15:12.548649 systemd-resolved[265]: Clock change detected. Flushing caches.
Mar 7 01:15:12.559776 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 7 01:15:12.560018 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 01:15:12.562171 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 7 01:15:12.578624 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 7 01:15:12.578941 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 7 01:15:12.583299 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 7 01:15:12.585851 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 7 01:15:12.592173 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 7 01:15:12.592736 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 7 01:15:12.599175 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:15:12.603061 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 7 01:15:12.620228 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#74 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 7 01:15:12.670314 kernel: hv_netvsc 7c1e5220-df85-7c1e-5220-df857c1e5220 eth0: VF slot 1 added Mar 7 01:15:12.678168 kernel: hv_vmbus: registering driver hv_pci Mar 7 01:15:12.683917 kernel: hv_pci fb9eb66e-f475-4ab4-931f-01759501c793: PCI VMBus probing: Using version 0x10004 Mar 7 01:15:12.684127 kernel: hv_pci fb9eb66e-f475-4ab4-931f-01759501c793: PCI host bridge to bus f475:00 Mar 7 01:15:12.690730 kernel: pci_bus f475:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Mar 7 01:15:12.697175 kernel: pci_bus f475:00: No busn resource found for root bus, will use [bus 00-ff] Mar 7 01:15:12.709175 kernel: pci f475:00:02.0: [15b3:1016] type 00 class 0x020000 Mar 7 01:15:12.714158 kernel: pci f475:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 7 01:15:12.720291 kernel: pci f475:00:02.0: enabling Extended Tags Mar 7 01:15:12.733253 kernel: pci f475:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f475:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Mar 7 01:15:12.739172 kernel: pci_bus f475:00: busn_res: [bus 00-ff] end is updated to 00 Mar 7 01:15:12.739361 kernel: pci f475:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Mar 7 01:15:12.756182 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (455) Mar 7 01:15:12.770181 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (442) Mar 7 01:15:12.796243 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 7 01:15:12.815299 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 7 01:15:12.832760 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 7 01:15:12.848279 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 7 01:15:12.859288 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 7 01:15:12.879338 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:15:12.900182 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:15:13.080434 kernel: mlx5_core f475:00:02.0: enabling device (0000 -> 0002) Mar 7 01:15:13.087195 kernel: mlx5_core f475:00:02.0: firmware version: 14.30.5026 Mar 7 01:15:13.303633 kernel: hv_netvsc 7c1e5220-df85-7c1e-5220-df857c1e5220 eth0: VF registering: eth1 Mar 7 01:15:13.304013 kernel: mlx5_core f475:00:02.0 eth1: joined to eth0 Mar 7 01:15:13.308339 kernel: mlx5_core f475:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Mar 7 01:15:13.319170 kernel: mlx5_core f475:00:02.0 enP62581s1: renamed from eth1 Mar 7 01:15:13.923175 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 01:15:13.924201 disk-uuid[596]: The operation has completed successfully.
Mar 7 01:15:14.009353 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 01:15:14.009467 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:15:14.036292 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:15:14.045429 sh[719]: Success Mar 7 01:15:14.068058 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 7 01:15:14.156112 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:15:14.177279 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:15:14.178767 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 01:15:14.206503 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 Mar 7 01:15:14.206562 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:15:14.210800 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 01:15:14.214114 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 01:15:14.216932 kernel: BTRFS info (device dm-0): using free space tree Mar 7 01:15:14.279422 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:15:14.285273 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 01:15:14.301545 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:15:14.307544 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 7 01:15:14.329170 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:15:14.329213 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:15:14.334939 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:15:14.350212 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:15:14.360842 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 01:15:14.367231 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:15:14.374391 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 01:15:14.383313 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 01:15:14.420196 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:15:14.432401 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:15:14.459108 systemd-networkd[903]: lo: Link UP Mar 7 01:15:14.459119 systemd-networkd[903]: lo: Gained carrier Mar 7 01:15:14.461310 systemd-networkd[903]: Enumeration completed Mar 7 01:15:14.461591 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:15:14.465243 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:15:14.465247 systemd-networkd[903]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:15:14.465645 systemd[1]: Reached target network.target - Network. 
Mar 7 01:15:14.543204 kernel: mlx5_core f475:00:02.0 enP62581s1: Link up Mar 7 01:15:14.577073 kernel: hv_netvsc 7c1e5220-df85-7c1e-5220-df857c1e5220 eth0: Data path switched to VF: enP62581s1 Mar 7 01:15:14.576032 systemd-networkd[903]: enP62581s1: Link UP Mar 7 01:15:14.576185 systemd-networkd[903]: eth0: Link UP Mar 7 01:15:14.576390 systemd-networkd[903]: eth0: Gained carrier Mar 7 01:15:14.576403 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:15:14.582369 systemd-networkd[903]: enP62581s1: Gained carrier Mar 7 01:15:14.624215 systemd-networkd[903]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16 Mar 7 01:15:14.640645 ignition[854]: Ignition 2.19.0 Mar 7 01:15:14.640658 ignition[854]: Stage: fetch-offline Mar 7 01:15:14.640696 ignition[854]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:15:14.640707 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:15:14.640816 ignition[854]: parsed url from cmdline: "" Mar 7 01:15:14.640822 ignition[854]: no config URL provided Mar 7 01:15:14.640828 ignition[854]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:15:14.640839 ignition[854]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:15:14.652788 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:15:14.640846 ignition[854]: failed to fetch config: resource requires networking Mar 7 01:15:14.641082 ignition[854]: Ignition finished successfully Mar 7 01:15:14.679335 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 7 01:15:14.697006 ignition[912]: Ignition 2.19.0 Mar 7 01:15:14.697019 ignition[912]: Stage: fetch Mar 7 01:15:14.697246 ignition[912]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:15:14.697261 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:15:14.697373 ignition[912]: parsed url from cmdline: "" Mar 7 01:15:14.697376 ignition[912]: no config URL provided Mar 7 01:15:14.697384 ignition[912]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:15:14.697393 ignition[912]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:15:14.697411 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 7 01:15:14.839421 ignition[912]: GET result: OK Mar 7 01:15:14.839524 ignition[912]: config has been read from IMDS userdata Mar 7 01:15:14.839557 ignition[912]: parsing config with SHA512: 8b9d1b49d3ca2472c4c839b54e5137565b04fb6df2201db172f12993a2899c1e918482c1a0f170de161e0155295a330a7466ce8f31cb0289e622eec804deb628 Mar 7 01:15:14.847268 unknown[912]: fetched base config from "system" Mar 7 01:15:14.847283 unknown[912]: fetched base config from "system" Mar 7 01:15:14.847296 unknown[912]: fetched user config from "azure" Mar 7 01:15:14.855446 ignition[912]: fetch: fetch complete Mar 7 01:15:14.855456 ignition[912]: fetch: fetch passed Mar 7 01:15:14.855533 ignition[912]: Ignition finished successfully Mar 7 01:15:14.862328 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 7 01:15:14.871313 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 7 01:15:14.887904 ignition[918]: Ignition 2.19.0 Mar 7 01:15:14.887917 ignition[918]: Stage: kargs Mar 7 01:15:14.888131 ignition[918]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:15:14.888156 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:15:14.889512 ignition[918]: kargs: kargs passed Mar 7 01:15:14.889560 ignition[918]: Ignition finished successfully Mar 7 01:15:14.899457 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 01:15:14.913378 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 01:15:14.928884 ignition[924]: Ignition 2.19.0 Mar 7 01:15:14.928896 ignition[924]: Stage: disks Mar 7 01:15:14.929113 ignition[924]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:15:14.929131 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:15:14.930088 ignition[924]: disks: disks passed Mar 7 01:15:14.936155 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 01:15:14.930131 ignition[924]: Ignition finished successfully Mar 7 01:15:14.947392 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 01:15:14.953342 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:15:14.956924 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:15:14.963318 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:15:14.970884 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:15:14.980310 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 7 01:15:15.007532 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 7 01:15:15.014143 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 01:15:15.029123 systemd[1]: Mounting sysroot.mount - /sysroot... 
Mar 7 01:15:15.129169 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none. Mar 7 01:15:15.129338 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 01:15:15.134694 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 01:15:15.155414 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:15:15.162731 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 01:15:15.170317 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 7 01:15:15.171522 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 01:15:15.171551 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:15:15.179652 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 01:15:15.212524 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943) Mar 7 01:15:15.212556 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:15:15.212569 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:15:15.212590 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:15:15.201444 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 7 01:15:15.224926 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:15:15.226291 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 01:15:15.360980 coreos-metadata[945]: Mar 07 01:15:15.360 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 7 01:15:15.367378 coreos-metadata[945]: Mar 07 01:15:15.367 INFO Fetch successful Mar 7 01:15:15.367378 coreos-metadata[945]: Mar 07 01:15:15.367 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 7 01:15:15.377341 coreos-metadata[945]: Mar 07 01:15:15.377 INFO Fetch successful Mar 7 01:15:15.389167 coreos-metadata[945]: Mar 07 01:15:15.386 INFO wrote hostname ci-4081.3.6-n-baf9cf72b8 to /sysroot/etc/hostname Mar 7 01:15:15.394449 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 01:15:15.397967 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 01:15:15.411515 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory Mar 7 01:15:15.422258 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 01:15:15.429941 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 01:15:15.696291 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 01:15:15.709365 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 01:15:15.715311 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 01:15:15.728349 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 7 01:15:15.735700 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:15:15.757582 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 7 01:15:15.774655 ignition[1064]: INFO : Ignition 2.19.0 Mar 7 01:15:15.774655 ignition[1064]: INFO : Stage: mount Mar 7 01:15:15.779435 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:15:15.779435 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:15:15.779435 ignition[1064]: INFO : mount: mount passed Mar 7 01:15:15.779435 ignition[1064]: INFO : Ignition finished successfully Mar 7 01:15:15.786085 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 01:15:15.792541 systemd-networkd[903]: eth0: Gained IPv6LL Mar 7 01:15:15.799459 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 01:15:15.809989 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:15:15.834224 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072) Mar 7 01:15:15.834303 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:15:15.837985 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:15:15.841559 kernel: BTRFS info (device sda6): using free space tree Mar 7 01:15:15.885164 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 01:15:15.886437 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 01:15:15.913333 ignition[1089]: INFO : Ignition 2.19.0 Mar 7 01:15:15.913333 ignition[1089]: INFO : Stage: files Mar 7 01:15:15.918421 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:15:15.918421 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 7 01:15:15.918421 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping Mar 7 01:15:15.929065 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 01:15:15.929065 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 01:15:15.944480 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 01:15:15.948881 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 01:15:15.953029 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 01:15:15.952345 unknown[1089]: wrote ssh authorized keys file for user: core Mar 7 01:15:15.960029 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:15:15.966407 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 7 01:15:16.000445 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 7 01:15:16.101529 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:15:16.108023 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 7 01:15:16.108023 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 7 01:15:16.108023 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:15:16.123966 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:15:16.123966 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:15:16.134618 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:15:16.139971 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:15:16.145236 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:15:16.150914 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:15:16.156508 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:15:16.156508 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:15:16.156508 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:15:16.156508 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:15:16.156508 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 7 01:15:16.447081 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 7 01:15:16.802168 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:15:16.802168 ignition[1089]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 7 01:15:16.812469 ignition[1089]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:15:16.818173 ignition[1089]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:15:16.822989 ignition[1089]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 7 01:15:16.822989 ignition[1089]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 7 01:15:16.822989 ignition[1089]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 7 01:15:16.822989 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:15:16.822989 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:15:16.822989 ignition[1089]: INFO : files: files passed Mar 7 01:15:16.822989 ignition[1089]: INFO : Ignition finished successfully Mar 7 01:15:16.828130 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 01:15:16.851728 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 01:15:16.870373 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:15:16.878026 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 01:15:16.881039 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 01:15:16.895020 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:15:16.895020 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:15:16.904365 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:15:16.910345 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:15:16.914730 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 01:15:16.931571 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 01:15:16.959003 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 01:15:16.959111 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 01:15:16.965863 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 01:15:16.972486 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 01:15:16.980832 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 01:15:16.992640 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:15:17.008524 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:15:17.021286 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:15:17.034310 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:15:17.035739 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 7 01:15:17.036334 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:15:17.036814 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:15:17.036946 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:15:17.037861 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:15:17.038391 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:15:17.038899 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:15:17.039392 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:15:17.039905 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:15:17.040978 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:15:17.041499 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:15:17.042107 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:15:17.042613 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:15:17.043119 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:15:17.043592 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:15:17.043725 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:15:17.044639 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:15:17.045297 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:15:17.045739 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:15:17.089242 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:15:17.097299 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:15:17.101903 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:15:17.166532 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:15:17.166736 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:15:17.177802 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:15:17.177997 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:15:17.183735 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 7 01:15:17.183896 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 7 01:15:17.198453 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:15:17.215107 ignition[1141]: INFO : Ignition 2.19.0
Mar 7 01:15:17.215107 ignition[1141]: INFO : Stage: umount
Mar 7 01:15:17.222902 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:15:17.222902 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:15:17.222902 ignition[1141]: INFO : umount: umount passed
Mar 7 01:15:17.222902 ignition[1141]: INFO : Ignition finished successfully
Mar 7 01:15:17.215463 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:15:17.222634 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:15:17.222804 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:15:17.234597 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:15:17.234704 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:15:17.238943 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:15:17.240084 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:15:17.240196 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:15:17.249331 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:15:17.249583 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:15:17.270069 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:15:17.270132 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:15:17.277793 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:15:17.277838 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:15:17.278726 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:15:17.278760 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:15:17.279933 systemd[1]: Stopped target network.target - Network.
Mar 7 01:15:17.280442 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:15:17.280483 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:15:17.280964 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:15:17.281456 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:15:17.328236 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:15:17.334123 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:15:17.339236 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:15:17.342096 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:15:17.342172 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:15:17.347738 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:15:17.347776 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:15:17.350685 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:15:17.350744 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:15:17.357668 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:15:17.359919 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:15:17.385499 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:15:17.391041 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:15:17.397201 systemd-networkd[903]: eth0: DHCPv6 lease lost
Mar 7 01:15:17.399790 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:15:17.399917 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:15:17.404729 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:15:17.404822 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:15:17.427610 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:15:17.430747 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:15:17.430819 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:15:17.435068 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:15:17.436633 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:15:17.436884 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:15:17.461679 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:15:17.461743 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:15:17.464766 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:15:17.464820 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:15:17.467400 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:15:17.467447 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:15:17.483468 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:15:17.483604 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:15:17.492855 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:15:17.492938 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:15:17.498197 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:15:17.498228 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:15:17.501293 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:15:17.501345 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:15:17.514958 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:15:17.515009 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:15:17.539313 kernel: hv_netvsc 7c1e5220-df85-7c1e-5220-df857c1e5220 eth0: Data path switched from VF: enP62581s1
Mar 7 01:15:17.537970 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:15:17.538023 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:15:17.553328 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:15:17.560064 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:15:17.560134 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:15:17.566032 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 01:15:17.566095 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:15:17.580335 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:15:17.580374 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:15:17.585126 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:15:17.585179 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:15:17.599138 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:15:17.602135 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:15:17.607642 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:15:17.607760 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:15:17.779605 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:15:17.779738 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:15:17.785834 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:15:17.791632 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:15:17.791696 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:15:17.806375 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:15:18.172016 systemd[1]: Switching root.
Mar 7 01:15:18.237697 systemd-journald[177]: Journal stopped
Mar 7 01:15:11.133082 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:15:11.133127 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:15:11.133147 kernel: BIOS-provided physical RAM map:
Mar 7 01:15:11.133157 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 7 01:15:11.133167 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Mar 7 01:15:11.133175 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable
Mar 7 01:15:11.133188 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved
Mar 7 01:15:11.133199 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable
Mar 7 01:15:11.133215 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20
Mar 7 01:15:11.133227 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved
Mar 7 01:15:11.133239 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Mar 7 01:15:11.133253 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Mar 7 01:15:11.133264 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Mar 7 01:15:11.133276 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Mar 7 01:15:11.133294 kernel: printk: bootconsole [earlyser0] enabled
Mar 7 01:15:11.133307 kernel: NX (Execute Disable) protection: active
Mar 7 01:15:11.133324 kernel: APIC: Static calls initialized
Mar 7 01:15:11.133340 kernel: efi: EFI v2.7 by Microsoft
Mar 7 01:15:11.133352 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f421418
Mar 7 01:15:11.133363 kernel: SMBIOS 3.1.0 present.
Mar 7 01:15:11.133376 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Mar 7 01:15:11.133390 kernel: Hypervisor detected: Microsoft Hyper-V
Mar 7 01:15:11.133403 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Mar 7 01:15:11.133416 kernel: Hyper-V: Host Build 10.0.26102.1212-1-0
Mar 7 01:15:11.133429 kernel: Hyper-V: Nested features: 0x1e0101
Mar 7 01:15:11.133446 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Mar 7 01:15:11.133459 kernel: Hyper-V: Using hypercall for remote TLB flush
Mar 7 01:15:11.133473 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Mar 7 01:15:11.133486 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Mar 7 01:15:11.133501 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Mar 7 01:15:11.133515 kernel: tsc: Detected 2593.906 MHz processor
Mar 7 01:15:11.133528 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:15:11.133542 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:15:11.133555 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Mar 7 01:15:11.133570 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 7 01:15:11.133583 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:15:11.133596 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Mar 7 01:15:11.133610 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Mar 7 01:15:11.133623 kernel: Using GB pages for direct mapping
Mar 7 01:15:11.133636 kernel: Secure boot disabled
Mar 7 01:15:11.133655 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:15:11.133671 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Mar 7 01:15:11.133682 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:15:11.133696 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:15:11.133710 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 7 01:15:11.133722 kernel: ACPI: FACS 0x000000003FFFE000 000040
Mar 7 01:15:11.133737 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:15:11.133750 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:15:11.133767 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:15:11.133780 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:15:11.133794 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:15:11.133807 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 7 01:15:11.133820 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Mar 7 01:15:11.133833 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Mar 7 01:15:11.133847 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Mar 7 01:15:11.133862 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Mar 7 01:15:11.133876 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Mar 7 01:15:11.133893 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Mar 7 01:15:11.133907 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Mar 7 01:15:11.133922 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df]
Mar 7 01:15:11.133937 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Mar 7 01:15:11.133951 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 7 01:15:11.133965 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 7 01:15:11.133979 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Mar 7 01:15:11.133993 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Mar 7 01:15:11.134007 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Mar 7 01:15:11.134025 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Mar 7 01:15:11.134040 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Mar 7 01:15:11.134053 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Mar 7 01:15:11.134068 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Mar 7 01:15:11.134082 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Mar 7 01:15:11.134096 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Mar 7 01:15:11.134110 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Mar 7 01:15:11.139785 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Mar 7 01:15:11.139800 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Mar 7 01:15:11.139809 kernel: Zone ranges:
Mar 7 01:15:11.139821 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:15:11.139829 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 7 01:15:11.139836 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Mar 7 01:15:11.139848 kernel: Movable zone start for each node
Mar 7 01:15:11.139856 kernel: Early memory node ranges
Mar 7 01:15:11.139863 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 7 01:15:11.139873 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff]
Mar 7 01:15:11.139885 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff]
Mar 7 01:15:11.139892 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Mar 7 01:15:11.139903 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Mar 7 01:15:11.139912 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Mar 7 01:15:11.139919 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:15:11.139927 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 7 01:15:11.139938 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Mar 7 01:15:11.139946 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Mar 7 01:15:11.139953 kernel: ACPI: PM-Timer IO Port: 0x408
Mar 7 01:15:11.139965 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Mar 7 01:15:11.139975 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:15:11.139982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:15:11.139990 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:15:11.140001 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Mar 7 01:15:11.140009 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 7 01:15:11.140016 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Mar 7 01:15:11.140028 kernel: Booting paravirtualized kernel on Hyper-V
Mar 7 01:15:11.140036 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:15:11.140046 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 7 01:15:11.140058 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 7 01:15:11.140066 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 7 01:15:11.140077 kernel: pcpu-alloc: [0] 0 1
Mar 7 01:15:11.140084 kernel: Hyper-V: PV spinlocks enabled
Mar 7 01:15:11.140092 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:15:11.140100 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:15:11.140118 kernel: random: crng init done
Mar 7 01:15:11.140128 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar 7 01:15:11.140139 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:15:11.140148 kernel: Fallback order for Node 0: 0
Mar 7 01:15:11.140155 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321
Mar 7 01:15:11.140165 kernel: Policy zone: Normal
Mar 7 01:15:11.140174 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:15:11.140182 kernel: software IO TLB: area num 2.
Mar 7 01:15:11.140190 kernel: Memory: 8066048K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 316920K reserved, 0K cma-reserved)
Mar 7 01:15:11.140202 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:15:11.140218 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:15:11.140226 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:15:11.140238 kernel: Dynamic Preempt: voluntary
Mar 7 01:15:11.140249 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:15:11.140257 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:15:11.140270 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:15:11.140278 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:15:11.140286 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:15:11.140297 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:15:11.140309 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:15:11.140317 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:15:11.140329 kernel: Using NULL legacy PIC
Mar 7 01:15:11.140337 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Mar 7 01:15:11.140345 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:15:11.140357 kernel: Console: colour dummy device 80x25
Mar 7 01:15:11.140365 kernel: printk: console [tty1] enabled
Mar 7 01:15:11.140373 kernel: printk: console [ttyS0] enabled
Mar 7 01:15:11.140387 kernel: printk: bootconsole [earlyser0] disabled
Mar 7 01:15:11.140395 kernel: ACPI: Core revision 20230628
Mar 7 01:15:11.140403 kernel: Failed to register legacy timer interrupt
Mar 7 01:15:11.140415 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:15:11.140423 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 7 01:15:11.140433 kernel: Hyper-V: Using IPI hypercalls
Mar 7 01:15:11.140443 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Mar 7 01:15:11.140451 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Mar 7 01:15:11.140459 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Mar 7 01:15:11.140473 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Mar 7 01:15:11.140481 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Mar 7 01:15:11.140492 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Mar 7 01:15:11.140501 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Mar 7 01:15:11.140509 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 7 01:15:11.140519 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 7 01:15:11.140529 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:15:11.140537 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:15:11.140547 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:15:11.140557 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 7 01:15:11.140570 kernel: RETBleed: Vulnerable
Mar 7 01:15:11.140580 kernel: Speculative Store Bypass: Vulnerable
Mar 7 01:15:11.140587 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:15:11.140599 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:15:11.140607 kernel: active return thunk: its_return_thunk
Mar 7 01:15:11.140615 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 7 01:15:11.140626 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:15:11.140635 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:15:11.140643 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:15:11.140651 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 7 01:15:11.140665 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 7 01:15:11.140673 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 7 01:15:11.140681 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:15:11.140693 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Mar 7 01:15:11.140701 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Mar 7 01:15:11.140709 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Mar 7 01:15:11.140721 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Mar 7 01:15:11.140729 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:15:11.140737 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:15:11.140749 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:15:11.140757 kernel: landlock: Up and running.
Mar 7 01:15:11.140764 kernel: SELinux: Initializing.
Mar 7 01:15:11.140778 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:15:11.140786 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:15:11.140795 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 7 01:15:11.140806 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:15:11.140814 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:15:11.140822 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:15:11.140835 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 7 01:15:11.140843 kernel: signal: max sigframe size: 3632
Mar 7 01:15:11.140853 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:15:11.140865 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:15:11.140873 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:15:11.140883 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:15:11.140893 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:15:11.140901 kernel: .... node #0, CPUs: #1
Mar 7 01:15:11.140913 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Mar 7 01:15:11.140922 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 7 01:15:11.140930 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:15:11.140941 kernel: smpboot: Max logical packages: 1
Mar 7 01:15:11.140953 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Mar 7 01:15:11.140961 kernel: devtmpfs: initialized
Mar 7 01:15:11.140973 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:15:11.140981 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Mar 7 01:15:11.140989 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:15:11.141001 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:15:11.141009 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:15:11.141017 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:15:11.141029 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:15:11.141040 kernel: audit: type=2000 audit(1772846109.029:1): state=initialized audit_enabled=0 res=1
Mar 7 01:15:11.141052 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:15:11.141060 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:15:11.141068 kernel: cpuidle: using governor menu
Mar 7 01:15:11.141080 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:15:11.141088 kernel: dca service started, version 1.12.1
Mar 7 01:15:11.141096 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff]
Mar 7 01:15:11.141108 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Mar 7 01:15:11.141121 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:15:11.141132 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:15:11.141143 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:15:11.141152 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:15:11.141162 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:15:11.141172 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:15:11.141180 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:15:11.141187 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:15:11.141200 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:15:11.141210 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:15:11.141218 kernel: ACPI: Interpreter enabled
Mar 7 01:15:11.141230 kernel: ACPI: PM: (supports S0 S5)
Mar 7 01:15:11.141238 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:15:11.141246 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:15:11.141258 kernel: PCI: Ignoring E820 reservations for host bridge windows
Mar 7 01:15:11.141266 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Mar 7 01:15:11.141274 kernel: iommu: Default domain type: Translated
Mar 7 01:15:11.141286 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:15:11.141294 kernel: efivars: Registered efivars operations
Mar 7 01:15:11.141305 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:15:11.141317 kernel: PCI: System does not support PCI
Mar 7 01:15:11.141325 kernel: vgaarb: loaded
Mar 7 01:15:11.141333 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Mar 7 01:15:11.141347 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:15:11.141355 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:15:11.141366 kernel: pnp: PnP ACPI init
Mar 7 01:15:11.141375 kernel: pnp: PnP ACPI: found 3 devices
Mar 7 01:15:11.141383 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:15:11.141397 kernel: NET: Registered PF_INET protocol family
Mar 7 01:15:11.141406 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 7 01:15:11.141414 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar 7 01:15:11.141426 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:15:11.141434 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:15:11.141442 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Mar 7 01:15:11.141454 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar 7 01:15:11.141462 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:15:11.141470 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar 7 01:15:11.141485 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:15:11.141493 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:15:11.141501 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:15:11.141513 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 7 01:15:11.141521 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB)
Mar 7 01:15:11.141533 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 7 01:15:11.141541 kernel: Initialise system trusted keyrings
Mar 7 01:15:11.141549 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Mar 7 01:15:11.141563 kernel: Key type asymmetric registered
Mar 7 01:15:11.141571 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:15:11.141582 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:15:11.141591 kernel: io scheduler mq-deadline registered
Mar 7 01:15:11.141599 kernel: io scheduler kyber registered
Mar 7 01:15:11.141607 kernel: io scheduler bfq registered
Mar 7 01:15:11.141619 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:15:11.141627 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:15:11.141637 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:15:11.141647 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Mar 7 01:15:11.141657 kernel: i8042: PNP: No PS/2 controller found.
Mar 7 01:15:11.141836 kernel: rtc_cmos 00:02: registered as rtc0
Mar 7 01:15:11.141999 kernel: rtc_cmos 00:02: setting system clock to 2026-03-07T01:15:10 UTC (1772846110)
Mar 7 01:15:11.144199 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Mar 7 01:15:11.144226 kernel: intel_pstate: CPU model not supported
Mar 7 01:15:11.144242 kernel: efifb: probing for efifb
Mar 7 01:15:11.144258 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 7 01:15:11.144278 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 7 01:15:11.144293 kernel: efifb: scrolling: redraw
Mar 7 01:15:11.144309 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 7 01:15:11.144324 kernel: Console: switching to colour frame buffer device 128x48
Mar 7 01:15:11.144340 kernel: fb0: EFI VGA frame buffer device
Mar 7 01:15:11.144355 kernel: pstore: Using crash dump compression: deflate
Mar 7 01:15:11.144371 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 7 01:15:11.144386 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:15:11.144401 kernel: Segment Routing with IPv6
Mar 7 01:15:11.144419 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:15:11.144435 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:15:11.144450 kernel: Key type dns_resolver registered
Mar 7 01:15:11.144465 kernel: IPI shorthand broadcast: enabled
Mar 7 01:15:11.144481 kernel: sched_clock: Marking stable (915002600, 57650500)->(1199977400, -227324300)
Mar 7 01:15:11.144496 kernel: registered taskstats version 1
Mar 7 01:15:11.144511 kernel: Loading compiled-in X.509 certificates
Mar 7 01:15:11.144527 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:15:11.144542 kernel: Key type .fscrypt registered
Mar 7 01:15:11.144559 kernel: Key type fscrypt-provisioning registered
Mar 7 01:15:11.144575 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:15:11.144590 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:15:11.144605 kernel: ima: No architecture policies found
Mar 7 01:15:11.144620 kernel: clk: Disabling unused clocks
Mar 7 01:15:11.144635 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:15:11.144651 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:15:11.144666 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:15:11.144682 kernel: Run /init as init process
Mar 7 01:15:11.144700 kernel: with arguments:
Mar 7 01:15:11.144715 kernel: /init
Mar 7 01:15:11.144730 kernel: with environment:
Mar 7 01:15:11.144744 kernel: HOME=/
Mar 7 01:15:11.144759 kernel: TERM=linux
Mar 7 01:15:11.144777 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:15:11.144796 systemd[1]: Detected virtualization microsoft.
Mar 7 01:15:11.144813 systemd[1]: Detected architecture x86-64.
Mar 7 01:15:11.144831 systemd[1]: Running in initrd.
Mar 7 01:15:11.144847 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:15:11.144862 systemd[1]: Hostname set to .
Mar 7 01:15:11.144879 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:15:11.144895 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:15:11.144911 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:15:11.144926 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:15:11.144944 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:15:11.144962 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:15:11.144979 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:15:11.144995 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:15:11.145014 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:15:11.145030 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:15:11.145047 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:15:11.145064 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:15:11.145085 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:15:11.145102 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:15:11.145135 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:15:11.145151 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:15:11.145167 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:15:11.145183 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:15:11.145200 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:15:11.145216 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:15:11.145232 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:15:11.145253 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:15:11.145269 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:15:11.145285 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:15:11.145302 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:15:11.145318 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:15:11.145335 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:15:11.145351 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:15:11.145367 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:15:11.145386 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:15:11.145422 systemd-journald[177]: Collecting audit messages is disabled.
Mar 7 01:15:11.145458 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:15:11.145474 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:15:11.145493 systemd-journald[177]: Journal started
Mar 7 01:15:11.145527 systemd-journald[177]: Runtime Journal (/run/log/journal/249b112e0b3140358a828c40b0e1e84b) is 8.0M, max 158.7M, 150.7M free.
Mar 7 01:15:11.154822 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:15:11.146350 systemd-modules-load[178]: Inserted module 'overlay'
Mar 7 01:15:11.157618 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:15:11.158196 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:15:11.168836 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:15:11.190344 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:15:11.213254 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:15:11.213298 kernel: Bridge firewalling registered
Mar 7 01:15:11.197261 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:15:11.218042 systemd-modules-load[178]: Inserted module 'br_netfilter'
Mar 7 01:15:11.226287 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:15:11.235602 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:15:11.244299 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:15:11.252477 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:15:11.258490 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:15:11.272262 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:15:11.281302 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:15:11.289351 dracut-cmdline[204]: dracut-dracut-053
Mar 7 01:15:11.293657 dracut-cmdline[204]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:15:11.314619 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:15:11.332410 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:15:11.342773 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:15:11.354584 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:15:11.394752 systemd-resolved[265]: Positive Trust Anchors:
Mar 7 01:15:11.394768 systemd-resolved[265]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:15:11.394821 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:15:11.435791 kernel: SCSI subsystem initialized
Mar 7 01:15:11.426165 systemd-resolved[265]: Defaulting to hostname 'linux'.
Mar 7 01:15:11.427308 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:15:11.431138 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:15:11.447206 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:15:11.459134 kernel: iscsi: registered transport (tcp)
Mar 7 01:15:11.483475 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:15:11.483533 kernel: QLogic iSCSI HBA Driver
Mar 7 01:15:11.520607 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:15:11.531290 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:15:11.562393 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:15:11.562480 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:15:11.566243 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:15:11.606136 kernel: raid6: avx512x4 gen() 18424 MB/s
Mar 7 01:15:11.626129 kernel: raid6: avx512x2 gen() 18316 MB/s
Mar 7 01:15:11.645123 kernel: raid6: avx512x1 gen() 18243 MB/s
Mar 7 01:15:11.664126 kernel: raid6: avx2x4 gen() 18035 MB/s
Mar 7 01:15:11.684130 kernel: raid6: avx2x2 gen() 18101 MB/s
Mar 7 01:15:11.704728 kernel: raid6: avx2x1 gen() 13826 MB/s
Mar 7 01:15:11.704758 kernel: raid6: using algorithm avx512x4 gen() 18424 MB/s
Mar 7 01:15:11.726754 kernel: raid6: .... xor() 7413 MB/s, rmw enabled
Mar 7 01:15:11.726780 kernel: raid6: using avx512x2 recovery algorithm
Mar 7 01:15:11.750136 kernel: xor: automatically using best checksumming function avx
Mar 7 01:15:11.898149 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:15:11.907921 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:15:11.919288 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:15:11.938463 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Mar 7 01:15:11.943054 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:15:11.959306 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:15:11.972386 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Mar 7 01:15:11.998618 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:15:12.014319 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:15:12.059684 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:15:12.072286 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:15:12.091374 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:15:12.099381 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:15:12.103040 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:15:12.113643 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:15:12.124709 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:15:12.150262 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:15:12.162215 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:15:12.179188 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:15:12.185130 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:15:12.186850 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:15:12.187011 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:15:12.191124 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:15:12.194752 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:15:12.194925 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:15:12.198796 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:15:12.220458 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:15:12.241241 kernel: hv_vmbus: Vmbus version:5.2
Mar 7 01:15:12.232680 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:15:12.232808 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:15:12.254305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:15:12.282409 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 7 01:15:12.282455 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 7 01:15:12.291135 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 7 01:15:12.296131 kernel: hv_vmbus: registering driver hid_hyperv
Mar 7 01:15:12.303982 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Mar 7 01:15:12.304016 kernel: hv_vmbus: registering driver hv_storvsc
Mar 7 01:15:12.313257 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 7 01:15:12.313428 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 7 01:15:12.313443 kernel: scsi host0: storvsc_host_t
Mar 7 01:15:12.317670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:15:12.331072 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Mar 7 01:15:12.341139 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 7 01:15:12.341202 kernel: scsi host1: storvsc_host_t
Mar 7 01:15:12.341232 kernel: PTP clock support registered
Mar 7 01:15:12.345229 kernel: hv_vmbus: registering driver hv_netvsc
Mar 7 01:15:12.347219 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:15:12.358042 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 7 01:15:12.385831 kernel: hv_utils: Registering HyperV Utility Driver
Mar 7 01:15:12.385885 kernel: hv_vmbus: registering driver hv_utils
Mar 7 01:15:12.386930 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:15:12.400661 kernel: hv_utils: Shutdown IC version 3.2
Mar 7 01:15:12.400696 kernel: hv_utils: Heartbeat IC version 3.0
Mar 7 01:15:12.400711 kernel: hv_utils: TimeSync IC version 4.0
Mar 7 01:15:12.548649 systemd-resolved[265]: Clock change detected. Flushing caches.
Mar 7 01:15:12.559776 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 7 01:15:12.560018 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 7 01:15:12.562171 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 7 01:15:12.578624 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 7 01:15:12.578941 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 7 01:15:12.583299 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#3 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 7 01:15:12.585851 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 7 01:15:12.592173 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 7 01:15:12.592736 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 7 01:15:12.599175 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:15:12.603061 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 7 01:15:12.620228 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#74 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 7 01:15:12.670314 kernel: hv_netvsc 7c1e5220-df85-7c1e-5220-df857c1e5220 eth0: VF slot 1 added
Mar 7 01:15:12.678168 kernel: hv_vmbus: registering driver hv_pci
Mar 7 01:15:12.683917 kernel: hv_pci fb9eb66e-f475-4ab4-931f-01759501c793: PCI VMBus probing: Using version 0x10004
Mar 7 01:15:12.684127 kernel: hv_pci fb9eb66e-f475-4ab4-931f-01759501c793: PCI host bridge to bus f475:00
Mar 7 01:15:12.690730 kernel: pci_bus f475:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Mar 7 01:15:12.697175 kernel: pci_bus f475:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 7 01:15:12.709175 kernel: pci f475:00:02.0: [15b3:1016] type 00 class 0x020000
Mar 7 01:15:12.714158 kernel: pci f475:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Mar 7 01:15:12.720291 kernel: pci f475:00:02.0: enabling Extended Tags
Mar 7 01:15:12.733253 kernel: pci f475:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f475:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Mar 7 01:15:12.739172 kernel: pci_bus f475:00: busn_res: [bus 00-ff] end is updated to 00
Mar 7 01:15:12.739361 kernel: pci f475:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Mar 7 01:15:12.756182 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (455)
Mar 7 01:15:12.770181 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (442)
Mar 7 01:15:12.796243 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 7 01:15:12.815299 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 7 01:15:12.832760 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 7 01:15:12.848279 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 7 01:15:12.859288 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 7 01:15:12.879338 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:15:12.900182 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:15:13.080434 kernel: mlx5_core f475:00:02.0: enabling device (0000 -> 0002)
Mar 7 01:15:13.087195 kernel: mlx5_core f475:00:02.0: firmware version: 14.30.5026
Mar 7 01:15:13.303633 kernel: hv_netvsc 7c1e5220-df85-7c1e-5220-df857c1e5220 eth0: VF registering: eth1
Mar 7 01:15:13.304013 kernel: mlx5_core f475:00:02.0 eth1: joined to eth0
Mar 7 01:15:13.308339 kernel: mlx5_core f475:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Mar 7 01:15:13.319170 kernel: mlx5_core f475:00:02.0 enP62581s1: renamed from eth1
Mar 7 01:15:13.923175 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 01:15:13.924201 disk-uuid[596]: The operation has completed successfully.
Mar 7 01:15:14.009353 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:15:14.009467 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:15:14.036292 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:15:14.045429 sh[719]: Success
Mar 7 01:15:14.068058 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 7 01:15:14.156112 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:15:14.177279 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:15:14.178767 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:15:14.206503 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:15:14.206562 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:15:14.210800 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:15:14.214114 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:15:14.216932 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:15:14.279422 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:15:14.285273 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:15:14.301545 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:15:14.307544 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:15:14.329170 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:15:14.329213 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:15:14.334939 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:15:14.350212 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:15:14.360842 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:15:14.367231 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:15:14.374391 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:15:14.383313 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:15:14.420196 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:15:14.432401 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:15:14.459108 systemd-networkd[903]: lo: Link UP
Mar 7 01:15:14.459119 systemd-networkd[903]: lo: Gained carrier
Mar 7 01:15:14.461310 systemd-networkd[903]: Enumeration completed
Mar 7 01:15:14.461591 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:15:14.465243 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:15:14.465247 systemd-networkd[903]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:15:14.465645 systemd[1]: Reached target network.target - Network.
Mar 7 01:15:14.543204 kernel: mlx5_core f475:00:02.0 enP62581s1: Link up
Mar 7 01:15:14.577073 kernel: hv_netvsc 7c1e5220-df85-7c1e-5220-df857c1e5220 eth0: Data path switched to VF: enP62581s1
Mar 7 01:15:14.576032 systemd-networkd[903]: enP62581s1: Link UP
Mar 7 01:15:14.576185 systemd-networkd[903]: eth0: Link UP
Mar 7 01:15:14.576390 systemd-networkd[903]: eth0: Gained carrier
Mar 7 01:15:14.576403 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:15:14.582369 systemd-networkd[903]: enP62581s1: Gained carrier
Mar 7 01:15:14.624215 systemd-networkd[903]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16
Mar 7 01:15:14.640645 ignition[854]: Ignition 2.19.0
Mar 7 01:15:14.640658 ignition[854]: Stage: fetch-offline
Mar 7 01:15:14.640696 ignition[854]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:15:14.640707 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:15:14.640816 ignition[854]: parsed url from cmdline: ""
Mar 7 01:15:14.640822 ignition[854]: no config URL provided
Mar 7 01:15:14.640828 ignition[854]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:15:14.640839 ignition[854]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:15:14.652788 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:15:14.640846 ignition[854]: failed to fetch config: resource requires networking
Mar 7 01:15:14.641082 ignition[854]: Ignition finished successfully
Mar 7 01:15:14.679335 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 01:15:14.697006 ignition[912]: Ignition 2.19.0
Mar 7 01:15:14.697019 ignition[912]: Stage: fetch
Mar 7 01:15:14.697246 ignition[912]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:15:14.697261 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:15:14.697373 ignition[912]: parsed url from cmdline: ""
Mar 7 01:15:14.697376 ignition[912]: no config URL provided
Mar 7 01:15:14.697384 ignition[912]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:15:14.697393 ignition[912]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:15:14.697411 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 7 01:15:14.839421 ignition[912]: GET result: OK
Mar 7 01:15:14.839524 ignition[912]: config has been read from IMDS userdata
Mar 7 01:15:14.839557 ignition[912]: parsing config with SHA512: 8b9d1b49d3ca2472c4c839b54e5137565b04fb6df2201db172f12993a2899c1e918482c1a0f170de161e0155295a330a7466ce8f31cb0289e622eec804deb628
Mar 7 01:15:14.847268 unknown[912]: fetched base config from "system"
Mar 7 01:15:14.847283 unknown[912]: fetched base config from "system"
Mar 7 01:15:14.847296 unknown[912]: fetched user config from "azure"
Mar 7 01:15:14.855446 ignition[912]: fetch: fetch complete
Mar 7 01:15:14.855456 ignition[912]: fetch: fetch passed
Mar 7 01:15:14.855533 ignition[912]: Ignition finished successfully
Mar 7 01:15:14.862328 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:15:14.871313 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:15:14.887904 ignition[918]: Ignition 2.19.0
Mar 7 01:15:14.887917 ignition[918]: Stage: kargs
Mar 7 01:15:14.888131 ignition[918]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:15:14.888156 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:15:14.889512 ignition[918]: kargs: kargs passed
Mar 7 01:15:14.889560 ignition[918]: Ignition finished successfully
Mar 7 01:15:14.899457 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:15:14.913378 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:15:14.928884 ignition[924]: Ignition 2.19.0
Mar 7 01:15:14.928896 ignition[924]: Stage: disks
Mar 7 01:15:14.929113 ignition[924]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:15:14.929131 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:15:14.930088 ignition[924]: disks: disks passed
Mar 7 01:15:14.936155 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:15:14.930131 ignition[924]: Ignition finished successfully
Mar 7 01:15:14.947392 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:15:14.953342 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:15:14.956924 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:15:14.963318 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:15:14.970884 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:15:14.980310 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:15:15.007532 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 7 01:15:15.014143 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:15:15.029123 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:15:15.129169 kernel: EXT4-fs (sda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:15:15.129338 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:15:15.134694 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:15:15.155414 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:15:15.162731 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:15:15.170317 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 7 01:15:15.171522 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:15:15.171551 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:15:15.179652 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:15:15.201444 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:15:15.212524 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943)
Mar 7 01:15:15.212556 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:15:15.212569 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:15:15.212590 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:15:15.224926 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:15:15.226291 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:15:15.360980 coreos-metadata[945]: Mar 07 01:15:15.360 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 7 01:15:15.367378 coreos-metadata[945]: Mar 07 01:15:15.367 INFO Fetch successful
Mar 7 01:15:15.367378 coreos-metadata[945]: Mar 07 01:15:15.367 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 7 01:15:15.377341 coreos-metadata[945]: Mar 07 01:15:15.377 INFO Fetch successful
Mar 7 01:15:15.389167 coreos-metadata[945]: Mar 07 01:15:15.386 INFO wrote hostname ci-4081.3.6-n-baf9cf72b8 to /sysroot/etc/hostname
Mar 7 01:15:15.394449 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 7 01:15:15.397967 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:15:15.411515 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:15:15.422258 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:15:15.429941 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:15:15.696291 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:15:15.709365 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:15:15.715311 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:15:15.728349 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:15:15.735700 kernel: BTRFS info (device sda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:15:15.757582 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:15:15.774655 ignition[1064]: INFO : Ignition 2.19.0
Mar 7 01:15:15.774655 ignition[1064]: INFO : Stage: mount
Mar 7 01:15:15.779435 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:15:15.779435 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:15:15.779435 ignition[1064]: INFO : mount: mount passed
Mar 7 01:15:15.779435 ignition[1064]: INFO : Ignition finished successfully
Mar 7 01:15:15.786085 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:15:15.792541 systemd-networkd[903]: eth0: Gained IPv6LL
Mar 7 01:15:15.799459 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:15:15.809989 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:15:15.834224 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072)
Mar 7 01:15:15.834303 kernel: BTRFS info (device sda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:15:15.837985 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:15:15.841559 kernel: BTRFS info (device sda6): using free space tree
Mar 7 01:15:15.885164 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 01:15:15.886437 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:15:15.913333 ignition[1089]: INFO : Ignition 2.19.0
Mar 7 01:15:15.913333 ignition[1089]: INFO : Stage: files
Mar 7 01:15:15.918421 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:15:15.918421 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:15:15.918421 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:15:15.929065 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:15:15.929065 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:15:15.944480 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:15:15.948881 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:15:15.953029 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:15:15.952345 unknown[1089]: wrote ssh authorized keys file for user: core
Mar 7 01:15:15.960029 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:15:15.966407 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:15:16.000445 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:15:16.101529 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:15:16.108023 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:15:16.108023 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:15:16.108023 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:15:16.123966 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:15:16.123966 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:15:16.134618 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:15:16.139971 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:15:16.145236 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:15:16.150914 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:15:16.156508 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:15:16.156508 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:15:16.156508 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:15:16.156508 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:15:16.156508 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 7 01:15:16.447081 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 01:15:16.802168 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:15:16.802168 ignition[1089]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 01:15:16.812469 ignition[1089]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:15:16.818173 ignition[1089]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:15:16.822989 ignition[1089]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 01:15:16.822989 ignition[1089]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:15:16.822989 ignition[1089]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:15:16.822989 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:15:16.822989 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:15:16.822989 ignition[1089]: INFO : files: files passed
Mar 7 01:15:16.822989 ignition[1089]: INFO : Ignition finished successfully
Mar 7 01:15:16.828130 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:15:16.851728 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:15:16.870373 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:15:16.878026 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:15:16.881039 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:15:16.895020 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:15:16.895020 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:15:16.904365 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:15:16.910345 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:15:16.914730 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:15:16.931571 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:15:16.959003 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:15:16.959111 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:15:16.965863 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:15:16.972486 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:15:16.980832 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:15:16.992640 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:15:17.008524 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:15:17.021286 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:15:17.034310 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:15:17.035739 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:15:17.036334 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:15:17.036814 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:15:17.036946 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:15:17.037861 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:15:17.038391 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:15:17.038899 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:15:17.039392 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:15:17.039905 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:15:17.040978 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:15:17.041499 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:15:17.042107 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:15:17.042613 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:15:17.043119 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:15:17.043592 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:15:17.043725 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:15:17.044639 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:15:17.045297 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:15:17.045739 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:15:17.089242 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:15:17.097299 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:15:17.101903 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:15:17.166532 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:15:17.166736 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:15:17.177802 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:15:17.177997 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:15:17.183735 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 7 01:15:17.183896 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 7 01:15:17.198453 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:15:17.215107 ignition[1141]: INFO : Ignition 2.19.0
Mar 7 01:15:17.215107 ignition[1141]: INFO : Stage: umount
Mar 7 01:15:17.222902 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:15:17.222902 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 7 01:15:17.222902 ignition[1141]: INFO : umount: umount passed
Mar 7 01:15:17.222902 ignition[1141]: INFO : Ignition finished successfully
Mar 7 01:15:17.215463 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:15:17.222634 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:15:17.222804 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:15:17.234597 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:15:17.234704 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:15:17.238943 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:15:17.240084 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:15:17.240196 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:15:17.249331 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:15:17.249583 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:15:17.270069 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:15:17.270132 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:15:17.277793 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:15:17.277838 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:15:17.278726 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:15:17.278760 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:15:17.279933 systemd[1]: Stopped target network.target - Network.
Mar 7 01:15:17.280442 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:15:17.280483 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:15:17.280964 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:15:17.281456 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:15:17.328236 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:15:17.334123 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:15:17.339236 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:15:17.342096 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:15:17.342172 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:15:17.347738 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:15:17.347776 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:15:17.350685 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:15:17.350744 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:15:17.357668 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:15:17.359919 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:15:17.385499 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:15:17.391041 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:15:17.397201 systemd-networkd[903]: eth0: DHCPv6 lease lost
Mar 7 01:15:17.399790 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:15:17.399917 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:15:17.404729 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:15:17.404822 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:15:17.427610 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:15:17.430747 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:15:17.430819 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:15:17.435068 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:15:17.436633 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:15:17.436884 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:15:17.461679 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:15:17.461743 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:15:17.464766 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:15:17.464820 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:15:17.467400 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:15:17.467447 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:15:17.483468 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:15:17.483604 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:15:17.492855 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:15:17.492938 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:15:17.498197 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:15:17.498228 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:15:17.501293 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:15:17.501345 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:15:17.514958 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:15:17.515009 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:15:17.539313 kernel: hv_netvsc 7c1e5220-df85-7c1e-5220-df857c1e5220 eth0: Data path switched from VF: enP62581s1
Mar 7 01:15:17.537970 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:15:17.538023 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:15:17.553328 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:15:17.560064 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:15:17.560134 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:15:17.566032 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 01:15:17.566095 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:15:17.580335 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:15:17.580374 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:15:17.585126 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:15:17.585179 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:15:17.599138 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:15:17.602135 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:15:17.607642 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:15:17.607760 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:15:17.779605 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:15:17.779738 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:15:17.785834 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:15:17.791632 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:15:17.791696 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:15:17.806375 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:15:18.172016 systemd[1]: Switching root.
Mar 7 01:15:18.237697 systemd-journald[177]: Journal stopped
Mar 7 01:15:20.672983 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:15:20.673023 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:15:20.673045 kernel: SELinux: policy capability open_perms=1
Mar 7 01:15:20.673059 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:15:20.673073 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:15:20.673087 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:15:20.673102 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:15:20.673117 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:15:20.673134 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:15:20.676210 kernel: audit: type=1403 audit(1772846118.856:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:15:20.676239 systemd[1]: Successfully loaded SELinux policy in 69.428ms.
Mar 7 01:15:20.676257 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.605ms.
Mar 7 01:15:20.676275 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:15:20.676291 systemd[1]: Detected virtualization microsoft.
Mar 7 01:15:20.676313 systemd[1]: Detected architecture x86-64.
Mar 7 01:15:20.676329 systemd[1]: Detected first boot.
Mar 7 01:15:20.676345 systemd[1]: Hostname set to .
Mar 7 01:15:20.676361 systemd[1]: Initializing machine ID from random generator.
Mar 7 01:15:20.676377 zram_generator::config[1183]: No configuration found.
Mar 7 01:15:20.676398 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:15:20.676415 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 01:15:20.676430 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 01:15:20.676447 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:15:20.676465 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:15:20.676482 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:15:20.676499 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:15:20.676520 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:15:20.676537 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:15:20.676554 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:15:20.676571 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:15:20.676587 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:15:20.676605 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:15:20.676623 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:15:20.676640 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:15:20.676660 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:15:20.676678 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:15:20.676695 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:15:20.676713 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:15:20.676729 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:15:20.676748 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 01:15:20.676770 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 01:15:20.676788 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:15:20.676806 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:15:20.676827 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:15:20.676849 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:15:20.676867 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:15:20.676885 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:15:20.676902 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:15:20.676921 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:15:20.676939 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:15:20.676960 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:15:20.676979 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:15:20.676997 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:15:20.677018 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:15:20.677036 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:15:20.677057 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:15:20.677077 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:15:20.677094 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:15:20.677113 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:15:20.677131 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:15:20.677161 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:15:20.677188 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:15:20.677207 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:15:20.677228 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:15:20.677248 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:15:20.677266 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:15:20.677285 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:15:20.677304 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:15:20.677323 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:15:20.677341 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:15:20.677359 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:15:20.677379 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:15:20.677397 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 01:15:20.677414 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 01:15:20.677424 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 01:15:20.677435 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 01:15:20.677453 kernel: ACPI: bus type drm_connector registered
Mar 7 01:15:20.677470 kernel: loop: module loaded
Mar 7 01:15:20.677485 kernel: fuse: init (API version 7.39)
Mar 7 01:15:20.677503 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:15:20.677525 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:15:20.677546 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:15:20.677565 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:15:20.677609 systemd-journald[1282]: Collecting audit messages is disabled.
Mar 7 01:15:20.677655 systemd-journald[1282]: Journal started
Mar 7 01:15:20.677691 systemd-journald[1282]: Runtime Journal (/run/log/journal/30a63c49e0cf4e71b13f7ea7822a7128) is 8.0M, max 158.7M, 150.7M free.
Mar 7 01:15:20.686217 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:15:20.041027 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:15:20.079141 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 7 01:15:20.079533 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 01:15:20.693163 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 01:15:20.693200 systemd[1]: Stopped verity-setup.service.
Mar 7 01:15:20.707181 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:15:20.714517 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:15:20.715114 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:15:20.718255 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:15:20.721746 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:15:20.725333 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:15:20.728834 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:15:20.732558 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:15:20.735766 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:15:20.740240 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:15:20.744888 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:15:20.745333 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:15:20.749595 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:15:20.749922 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:15:20.754242 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:15:20.754486 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:15:20.758437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:15:20.758716 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:15:20.763348 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:15:20.763541 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:15:20.767486 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:15:20.767779 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:15:20.771699 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:15:20.775740 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:15:20.780401 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:15:20.801328 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:15:20.813305 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:15:20.818076 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:15:20.821435 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:15:20.821476 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:15:20.826497 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:15:20.831465 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:15:20.836286 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:15:20.839918 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:15:20.841798 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:15:20.848387 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:15:20.853253 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:15:20.856033 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:15:20.859641 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:15:20.868329 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:15:20.874328 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:15:20.881593 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:15:20.891861 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:15:20.896814 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:15:20.903309 systemd-journald[1282]: Time spent on flushing to /var/log/journal/30a63c49e0cf4e71b13f7ea7822a7128 is 32.487ms for 954 entries.
Mar 7 01:15:20.903309 systemd-journald[1282]: System Journal (/var/log/journal/30a63c49e0cf4e71b13f7ea7822a7128) is 8.0M, max 2.6G, 2.6G free.
Mar 7 01:15:21.027386 systemd-journald[1282]: Received client request to flush runtime journal.
Mar 7 01:15:21.027458 kernel: loop0: detected capacity change from 0 to 142488
Mar 7 01:15:20.906781 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:15:20.914583 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:15:20.919486 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:15:20.934675 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:15:20.949349 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 01:15:20.960309 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:15:20.964517 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:15:20.996292 udevadm[1330]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 7 01:15:21.031595 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:15:21.044663 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Mar 7 01:15:21.044689 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Mar 7 01:15:21.053259 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:15:21.074808 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:15:21.079583 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:15:21.086418 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 01:15:21.173619 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:15:21.182168 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:15:21.186614 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:15:21.208547 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
Mar 7 01:15:21.208572 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
Mar 7 01:15:21.212934 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:15:21.226177 kernel: loop1: detected capacity change from 0 to 31056
Mar 7 01:15:21.347136 kernel: loop2: detected capacity change from 0 to 228704
Mar 7 01:15:21.386457 kernel: loop3: detected capacity change from 0 to 140768
Mar 7 01:15:21.548177 kernel: loop4: detected capacity change from 0 to 142488
Mar 7 01:15:21.573318 kernel: loop5: detected capacity change from 0 to 31056
Mar 7 01:15:21.600636 kernel: loop6: detected capacity change from 0 to 228704
Mar 7 01:15:21.623769 kernel: loop7: detected capacity change from 0 to 140768
Mar 7 01:15:21.653083 (sd-merge)[1347]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 7 01:15:21.653824 (sd-merge)[1347]: Merged extensions into '/usr'.
Mar 7 01:15:21.660712 systemd[1]: Reloading requested from client PID 1319 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:15:21.660728 systemd[1]: Reloading...
Mar 7 01:15:21.731180 zram_generator::config[1372]: No configuration found.
Mar 7 01:15:21.933315 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:15:22.001203 systemd[1]: Reloading finished in 339 ms.
Mar 7 01:15:22.030652 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:15:22.035040 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:15:22.048336 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:15:22.054278 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:15:22.059420 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:15:22.099069 systemd[1]: Reloading requested from client PID 1432 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:15:22.099089 systemd[1]: Reloading...
Mar 7 01:15:22.101564 systemd-tmpfiles[1433]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:15:22.102499 systemd-tmpfiles[1433]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:15:22.105562 systemd-tmpfiles[1433]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:15:22.108020 systemd-tmpfiles[1433]: ACLs are not supported, ignoring.
Mar 7 01:15:22.109587 systemd-tmpfiles[1433]: ACLs are not supported, ignoring.
Mar 7 01:15:22.114928 systemd-udevd[1434]: Using default interface naming scheme 'v255'.
Mar 7 01:15:22.119704 systemd-tmpfiles[1433]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:15:22.119716 systemd-tmpfiles[1433]: Skipping /boot
Mar 7 01:15:22.134626 systemd-tmpfiles[1433]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:15:22.134749 systemd-tmpfiles[1433]: Skipping /boot
Mar 7 01:15:22.221904 zram_generator::config[1459]: No configuration found.
Mar 7 01:15:22.418342 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:15:22.479819 kernel: hv_vmbus: registering driver hv_balloon
Mar 7 01:15:22.479923 kernel: hv_vmbus: registering driver hyperv_fb
Mar 7 01:15:22.487167 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 7 01:15:22.487243 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 7 01:15:22.497261 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 7 01:15:22.511793 kernel: Console: switching to colour dummy device 80x25
Mar 7 01:15:22.522208 kernel: Console: switching to colour frame buffer device 128x48
Mar 7 01:15:22.609218 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#138 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 7 01:15:22.670465 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:15:22.736701 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 01:15:22.736873 systemd[1]: Reloading finished in 637 ms.
Mar 7 01:15:22.755592 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:15:22.767722 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:15:22.798809 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:15:22.805858 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:15:22.808372 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:15:22.813961 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:15:22.817888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:15:22.820360 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:15:22.827714 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:15:22.834392 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:15:22.849345 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:15:22.855083 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:15:22.862892 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:15:22.881335 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:15:22.904387 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:15:22.910242 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:15:22.925368 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:15:22.940362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:15:22.945705 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:15:22.946767 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:15:22.947481 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:15:22.952699 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:15:22.953876 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:15:22.962946 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:15:22.963136 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:15:22.982858 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:15:22.982931 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:15:22.991837 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 01:15:23.001028 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:15:23.001641 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:15:23.196537 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:15:23.203622 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1470)
Mar 7 01:15:23.283304 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 7 01:15:23.296347 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 01:15:23.303464 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 01:15:23.348166 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Mar 7 01:15:23.368111 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:15:23.388497 augenrules[1639]: No rules
Mar 7 01:15:23.390615 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:15:23.786547 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:15:23.792529 systemd-resolved[1585]: Positive Trust Anchors:
Mar 7 01:15:23.792561 systemd-resolved[1585]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:15:23.792614 systemd-resolved[1585]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:15:23.797318 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 01:15:23.829761 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 01:15:23.836572 systemd-networkd[1579]: lo: Link UP
Mar 7 01:15:23.836583 systemd-networkd[1579]: lo: Gained carrier
Mar 7 01:15:23.839683 systemd-networkd[1579]: Enumeration completed
Mar 7 01:15:23.840065 systemd-networkd[1579]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:15:23.840076 systemd-networkd[1579]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:15:23.840261 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:15:23.849317 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 01:15:23.874575 systemd-resolved[1585]: Using system hostname 'ci-4081.3.6-n-baf9cf72b8'.
Mar 7 01:15:23.897163 kernel: mlx5_core f475:00:02.0 enP62581s1: Link up
Mar 7 01:15:23.917166 kernel: hv_netvsc 7c1e5220-df85-7c1e-5220-df857c1e5220 eth0: Data path switched to VF: enP62581s1
Mar 7 01:15:23.919113 systemd-networkd[1579]: enP62581s1: Link UP
Mar 7 01:15:23.919312 systemd-networkd[1579]: eth0: Link UP
Mar 7 01:15:23.919318 systemd-networkd[1579]: eth0: Gained carrier
Mar 7 01:15:23.919342 systemd-networkd[1579]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:15:23.920349 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:15:23.921733 systemd[1]: Reached target network.target - Network.
Mar 7 01:15:23.922140 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:15:23.931761 systemd-networkd[1579]: enP62581s1: Gained carrier
Mar 7 01:15:23.938794 lvm[1649]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:15:23.968211 systemd-networkd[1579]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16
Mar 7 01:15:23.969596 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 01:15:23.971845 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:15:23.977609 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 01:15:23.987411 lvm[1653]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:15:24.041259 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:15:24.582323 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:15:24.584024 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:15:24.894773 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:15:25.069420 systemd-networkd[1579]: eth0: Gained IPv6LL
Mar 7 01:15:25.072457 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 01:15:25.077042 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 01:15:25.438817 ldconfig[1314]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:15:25.733476 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:15:25.742331 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:15:25.753114 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:15:25.756971 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:15:25.760518 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:15:25.764349 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:15:25.768372 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:15:25.771765 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:15:25.776092 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:15:25.780164 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:15:25.780212 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:15:25.782997 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:15:25.786385 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:15:25.791356 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:15:25.804053 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:15:25.807945 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:15:25.811379 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:15:25.814311 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:15:25.817141 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:15:25.817193 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:15:25.824240 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 7 01:15:25.830487 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:15:25.841362 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 7 01:15:25.849418 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:15:25.858403 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:15:25.864767 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:15:25.868088 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:15:25.868138 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Mar 7 01:15:25.877375 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 7 01:15:25.881561 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 7 01:15:25.884228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:15:25.893686 KVP[1673]: KVP starting; pid is:1673
Mar 7 01:15:25.895975 (chronyd)[1665]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Mar 7 01:15:25.897346 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:15:25.905525 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 7 01:15:25.910313 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:15:25.919333 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:15:25.925392 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:15:25.940019 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:15:25.945111 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 01:15:25.945767 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:15:25.949239 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:15:25.959248 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:15:25.968706 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:15:25.969084 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:15:25.983741 jq[1671]: false
Mar 7 01:15:25.983423 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:15:25.984594 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:15:25.990304 jq[1685]: true
Mar 7 01:15:26.006251 jq[1696]: true
Mar 7 01:15:26.052343 KVP[1673]: KVP LIC Version: 3.1
Mar 7 01:15:26.054165 kernel: hv_utils: KVP IC version 4.0
Mar 7 01:15:26.056119 (ntainerd)[1713]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:15:26.184715 chronyd[1721]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Mar 7 01:15:26.191044 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:15:26.191342 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:15:26.290195 chronyd[1721]: Timezone right/UTC failed leap second check, ignoring
Mar 7 01:15:26.290456 chronyd[1721]: Loaded seccomp filter (level 2)
Mar 7 01:15:26.296035 extend-filesystems[1672]: Found loop4
Mar 7 01:15:26.296035 extend-filesystems[1672]: Found loop5
Mar 7 01:15:26.296035 extend-filesystems[1672]: Found loop6
Mar 7 01:15:26.296035 extend-filesystems[1672]: Found loop7
Mar 7 01:15:26.296035 extend-filesystems[1672]: Found sda
Mar 7 01:15:26.296035 extend-filesystems[1672]: Found sda1
Mar 7 01:15:26.296035 extend-filesystems[1672]: Found sda2
Mar 7 01:15:26.296035 extend-filesystems[1672]: Found sda3
Mar 7 01:15:26.296035 extend-filesystems[1672]: Found usr
Mar 7 01:15:26.296035 extend-filesystems[1672]: Found sda4
Mar 7 01:15:26.296035 extend-filesystems[1672]: Found sda6
Mar 7 01:15:26.296035 extend-filesystems[1672]: Found sda7
Mar 7 01:15:26.296035 extend-filesystems[1672]: Found sda9
Mar 7 01:15:26.296035 extend-filesystems[1672]: Checking size of /dev/sda9
Mar 7 01:15:26.291990 systemd[1]: Started chronyd.service - NTP client/server.
Mar 7 01:15:26.333285 tar[1689]: linux-amd64/LICENSE
Mar 7 01:15:26.333836 tar[1689]: linux-amd64/helm
Mar 7 01:15:26.336722 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 7 01:15:26.407459 update_engine[1684]: I20260307 01:15:26.407004 1684 main.cc:92] Flatcar Update Engine starting
Mar 7 01:15:26.696536 systemd-logind[1683]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Mar 7 01:15:26.697172 systemd-logind[1683]: New seat seat0.
Mar 7 01:15:26.699558 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:15:26.792668 extend-filesystems[1672]: Old size kept for /dev/sda9
Mar 7 01:15:26.792668 extend-filesystems[1672]: Found sr0
Mar 7 01:15:26.795812 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 01:15:26.796020 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 01:15:26.849168 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1473)
Mar 7 01:15:26.876202 dbus-daemon[1668]: [system] SELinux support is enabled
Mar 7 01:15:26.876426 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:15:26.890550 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:15:26.890601 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:15:26.909257 dbus-daemon[1668]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 7 01:15:26.896976 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:15:26.897002 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:15:26.920562 update_engine[1684]: I20260307 01:15:26.919123 1684 update_check_scheduler.cc:74] Next update check in 4m59s
Mar 7 01:15:26.925343 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:15:26.951443 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:15:27.105332 tar[1689]: linux-amd64/README.md
Mar 7 01:15:27.120463 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 7 01:15:27.248041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:15:27.252881 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:15:27.284180 sshd_keygen[1733]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 7 01:15:27.310996 coreos-metadata[1667]: Mar 07 01:15:27.310 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 7 01:15:27.314073 coreos-metadata[1667]: Mar 07 01:15:27.313 INFO Fetch successful
Mar 7 01:15:27.314312 coreos-metadata[1667]: Mar 07 01:15:27.314 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 7 01:15:27.319229 coreos-metadata[1667]: Mar 07 01:15:27.319 INFO Fetch successful
Mar 7 01:15:27.320106 coreos-metadata[1667]: Mar 07 01:15:27.319 INFO Fetching http://168.63.129.16/machine/02b089b2-98df-4efc-9381-f5e26dcabd08/c32dd79b%2Dee6e%2D43bf%2Da70c%2Da8787aeb26b4.%5Fci%2D4081.3.6%2Dn%2Dbaf9cf72b8?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 7 01:15:27.321995 coreos-metadata[1667]: Mar 07 01:15:27.321 INFO Fetch successful
Mar 7 01:15:27.322143 coreos-metadata[1667]: Mar 07 01:15:27.322 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 7 01:15:27.323962 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 7 01:15:27.331296 coreos-metadata[1667]: Mar 07 01:15:27.330 INFO Fetch successful
Mar 7 01:15:27.335092 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 7 01:15:27.348401 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Mar 7 01:15:27.366796 systemd[1]: issuegen.service: Deactivated successfully.
Mar 7 01:15:27.367020 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 7 01:15:27.379480 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 7 01:15:27.383447 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 7 01:15:27.387190 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 7 01:15:27.551323 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Mar 7 01:15:27.582815 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 7 01:15:27.596864 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 7 01:15:27.607558 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 7 01:15:27.610789 systemd[1]: Reached target getty.target - Login Prompts.
Mar 7 01:15:28.073455 kubelet[1775]: E0307 01:15:27.911829 1775 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:15:27.914222 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:15:27.914444 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:15:28.192866 locksmithd[1766]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:15:28.233888 bash[1723]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:15:28.235454 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:15:28.240600 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 7 01:15:29.319652 containerd[1713]: time="2026-03-07T01:15:29.319566000Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 01:15:29.342232 containerd[1713]: time="2026-03-07T01:15:29.342161800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:15:29.343721 containerd[1713]: time="2026-03-07T01:15:29.343684900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:15:29.343721 containerd[1713]: time="2026-03-07T01:15:29.343716700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 01:15:29.343859 containerd[1713]: time="2026-03-07T01:15:29.343735700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 01:15:29.343945 containerd[1713]: time="2026-03-07T01:15:29.343922300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 01:15:29.344006 containerd[1713]: time="2026-03-07T01:15:29.343949100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 01:15:29.344072 containerd[1713]: time="2026-03-07T01:15:29.344050900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:15:29.344128 containerd[1713]: time="2026-03-07T01:15:29.344070200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:15:29.344334 containerd[1713]: time="2026-03-07T01:15:29.344310300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:15:29.344411 containerd[1713]: time="2026-03-07T01:15:29.344332700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 7 01:15:29.344411 containerd[1713]: time="2026-03-07T01:15:29.344367700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:15:29.344411 containerd[1713]: time="2026-03-07T01:15:29.344380900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 7 01:15:29.344512 containerd[1713]: time="2026-03-07T01:15:29.344495200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:15:29.344727 containerd[1713]: time="2026-03-07T01:15:29.344697800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:15:29.344870 containerd[1713]: time="2026-03-07T01:15:29.344845000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:15:29.344870 containerd[1713]: time="2026-03-07T01:15:29.344866600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 7 01:15:29.344984 containerd[1713]: time="2026-03-07T01:15:29.344963800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 7 01:15:29.345046 containerd[1713]: time="2026-03-07T01:15:29.345026200Z" level=info msg="metadata content store policy set" policy=shared
Mar 7 01:15:29.743191 containerd[1713]: time="2026-03-07T01:15:29.741235200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 7 01:15:29.743191 containerd[1713]: time="2026-03-07T01:15:29.741316400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 7 01:15:29.743191 containerd[1713]: time="2026-03-07T01:15:29.741346100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 7 01:15:29.743191 containerd[1713]: time="2026-03-07T01:15:29.741367100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 7 01:15:29.743191 containerd[1713]: time="2026-03-07T01:15:29.741388900Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 7 01:15:29.743191 containerd[1713]: time="2026-03-07T01:15:29.741590200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 01:15:29.743191 containerd[1713]: time="2026-03-07T01:15:29.741926600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 7 01:15:29.743191 containerd[1713]: time="2026-03-07T01:15:29.742087400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 7 01:15:29.743191 containerd[1713]: time="2026-03-07T01:15:29.742114300Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 7 01:15:29.743191 containerd[1713]: time="2026-03-07T01:15:29.742134800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 7 01:15:29.743191 containerd[1713]: time="2026-03-07T01:15:29.742173100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 7 01:15:29.743191 containerd[1713]: time="2026-03-07T01:15:29.742191700Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 7 01:15:29.743191 containerd[1713]: time="2026-03-07T01:15:29.742209600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 7 01:15:29.743191 containerd[1713]: time="2026-03-07T01:15:29.742227500Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 7 01:15:29.743785 containerd[1713]: time="2026-03-07T01:15:29.742248600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 7 01:15:29.743785 containerd[1713]: time="2026-03-07T01:15:29.742266500Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 01:15:29.743785 containerd[1713]: time="2026-03-07T01:15:29.742286700Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 7 01:15:29.743785 containerd[1713]: time="2026-03-07T01:15:29.742305800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 7 01:15:29.743785 containerd[1713]: time="2026-03-07T01:15:29.742334000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 7 01:15:29.743785 containerd[1713]: time="2026-03-07T01:15:29.742356400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 7 01:15:29.743785 containerd[1713]: time="2026-03-07T01:15:29.742375000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 7 01:15:29.743785 containerd[1713]: time="2026-03-07T01:15:29.742393500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 7 01:15:29.743785 containerd[1713]: time="2026-03-07T01:15:29.742417200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 7 01:15:29.743785 containerd[1713]: time="2026-03-07T01:15:29.742436000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 7 01:15:29.743785 containerd[1713]: time="2026-03-07T01:15:29.742453800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 7 01:15:29.743785 containerd[1713]: time="2026-03-07T01:15:29.742471700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 7 01:15:29.743785 containerd[1713]: time="2026-03-07T01:15:29.742488900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 7 01:15:29.743785 containerd[1713]: time="2026-03-07T01:15:29.742519700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 7 01:15:29.744283 containerd[1713]: time="2026-03-07T01:15:29.742543300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 7 01:15:29.744283 containerd[1713]: time="2026-03-07T01:15:29.742564200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 7 01:15:29.744283 containerd[1713]: time="2026-03-07T01:15:29.742589800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 7 01:15:29.744283 containerd[1713]: time="2026-03-07T01:15:29.742621300Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 7 01:15:29.744283 containerd[1713]: time="2026-03-07T01:15:29.742650700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 7 01:15:29.744283 containerd[1713]: time="2026-03-07T01:15:29.742668400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 7 01:15:29.744283 containerd[1713]: time="2026-03-07T01:15:29.742684000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 7 01:15:29.744283 containerd[1713]: time="2026-03-07T01:15:29.742739500Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 7 01:15:29.744283 containerd[1713]: time="2026-03-07T01:15:29.742766300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 7 01:15:29.744283 containerd[1713]: time="2026-03-07T01:15:29.742783600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 7 01:15:29.744283 containerd[1713]: time="2026-03-07T01:15:29.742802200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 7 01:15:29.744283 containerd[1713]: time="2026-03-07T01:15:29.742818000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 7 01:15:29.744283 containerd[1713]: time="2026-03-07T01:15:29.742840100Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 7 01:15:29.744283 containerd[1713]: time="2026-03-07T01:15:29.742853700Z" level=info msg="NRI interface is disabled by configuration."
Mar 7 01:15:29.744857 containerd[1713]: time="2026-03-07T01:15:29.742867900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 7 01:15:29.744903 containerd[1713]: time="2026-03-07T01:15:29.743627500Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 7 01:15:29.744903 containerd[1713]: time="2026-03-07T01:15:29.743793300Z" level=info msg="Connect containerd service"
Mar 7 01:15:29.744903 containerd[1713]: time="2026-03-07T01:15:29.743847800Z" level=info msg="using legacy CRI server"
Mar 7 01:15:29.744903 containerd[1713]: time="2026-03-07T01:15:29.743872300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 7 01:15:29.744903 containerd[1713]: time="2026-03-07T01:15:29.744043900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 7 01:15:29.745380 containerd[1713]: time="2026-03-07T01:15:29.745336600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 01:15:29.745953 containerd[1713]: time="2026-03-07T01:15:29.745592000Z" level=info msg="Start subscribing containerd event"
Mar 7 01:15:29.745953 containerd[1713]: time="2026-03-07T01:15:29.745660600Z" level=info msg="Start recovering state"
Mar 7 01:15:29.745953 containerd[1713]: time="2026-03-07T01:15:29.745730600Z" level=info msg="Start event monitor"
Mar 7 01:15:29.745953 containerd[1713]: time="2026-03-07T01:15:29.745768800Z" level=info msg="Start snapshots syncer"
Mar 7 01:15:29.745953 containerd[1713]: time="2026-03-07T01:15:29.745781700Z" level=info msg="Start cni network conf syncer for default"
Mar 7 01:15:29.745953 containerd[1713]: time="2026-03-07T01:15:29.745791100Z" level=info msg="Start streaming server"
Mar 7 01:15:29.745953 containerd[1713]: time="2026-03-07T01:15:29.745926800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 7 01:15:29.746249 containerd[1713]: time="2026-03-07T01:15:29.746013500Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 7 01:15:29.746249 containerd[1713]: time="2026-03-07T01:15:29.746126700Z" level=info msg="containerd successfully booted in 0.427321s"
Mar 7 01:15:29.746257 systemd[1]: Started containerd.service - containerd container runtime.
Mar 7 01:15:29.751737 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 7 01:15:29.756778 systemd[1]: Startup finished in 1.062s (kernel) + 7.928s (initrd) + 10.967s (userspace) = 19.958s.
Mar 7 01:15:30.380841 login[1808]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 7 01:15:30.383679 login[1809]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 7 01:15:30.393368 systemd-logind[1683]: New session 1 of user core.
Mar 7 01:15:30.394904 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 7 01:15:30.400427 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 7 01:15:30.403888 systemd-logind[1683]: New session 2 of user core.
Mar 7 01:15:30.648549 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 7 01:15:30.655706 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 7 01:15:30.659912 (systemd)[1829]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 7 01:15:30.797591 systemd[1829]: Queued start job for default target default.target.
Mar 7 01:15:30.808313 systemd[1829]: Created slice app.slice - User Application Slice.
Mar 7 01:15:30.808356 systemd[1829]: Reached target paths.target - Paths.
Mar 7 01:15:30.808374 systemd[1829]: Reached target timers.target - Timers.
Mar 7 01:15:30.809593 systemd[1829]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 7 01:15:30.834079 systemd[1829]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 7 01:15:30.834338 systemd[1829]: Reached target sockets.target - Sockets.
Mar 7 01:15:30.834360 systemd[1829]: Reached target basic.target - Basic System.
Mar 7 01:15:30.834413 systemd[1829]: Reached target default.target - Main User Target.
Mar 7 01:15:30.834459 systemd[1829]: Startup finished in 168ms.
Mar 7 01:15:30.834952 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 7 01:15:30.841309 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 7 01:15:30.843785 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 7 01:15:33.106260 waagent[1806]: 2026-03-07T01:15:33.106136Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Mar 7 01:15:33.109802 waagent[1806]: 2026-03-07T01:15:33.109734Z INFO Daemon Daemon OS: flatcar 4081.3.6
Mar 7 01:15:33.112609 waagent[1806]: 2026-03-07T01:15:33.112552Z INFO Daemon Daemon Python: 3.11.9
Mar 7 01:15:33.115247 waagent[1806]: 2026-03-07T01:15:33.115192Z INFO Daemon Daemon Run daemon
Mar 7 01:15:33.117560 waagent[1806]: 2026-03-07T01:15:33.117508Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6'
Mar 7 01:15:33.122575 waagent[1806]: 2026-03-07T01:15:33.122523Z INFO Daemon Daemon Using waagent for provisioning
Mar 7 01:15:33.125555 waagent[1806]: 2026-03-07T01:15:33.125507Z INFO Daemon Daemon Activate resource disk
Mar 7 01:15:33.128177 waagent[1806]: 2026-03-07T01:15:33.128115Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Mar 7 01:15:33.135851 waagent[1806]: 2026-03-07T01:15:33.135794Z INFO Daemon Daemon Found device: None
Mar 7 01:15:33.138507 waagent[1806]: 2026-03-07T01:15:33.138455Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Mar 7 01:15:33.142855 waagent[1806]: 2026-03-07T01:15:33.142807Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Mar 7 01:15:33.150291 waagent[1806]: 2026-03-07T01:15:33.150235Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Mar 7 01:15:33.153827 waagent[1806]: 2026-03-07T01:15:33.153775Z INFO Daemon Daemon Running default provisioning handler
Mar 7 01:15:33.163933 waagent[1806]: 2026-03-07T01:15:33.163710Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Mar 7 01:15:33.171874 waagent[1806]: 2026-03-07T01:15:33.171825Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Mar 7 01:15:33.181932 waagent[1806]: 2026-03-07T01:15:33.173011Z INFO Daemon Daemon cloud-init is enabled: False
Mar 7 01:15:33.181932 waagent[1806]: 2026-03-07T01:15:33.173942Z INFO Daemon Daemon Copying ovf-env.xml
Mar 7 01:15:34.442664 waagent[1806]: 2026-03-07T01:15:34.442502Z INFO Daemon Daemon Successfully mounted dvd
Mar 7 01:15:34.529559 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Mar 7 01:15:34.535134 waagent[1806]: 2026-03-07T01:15:34.531495Z INFO Daemon Daemon Detect protocol endpoint
Mar 7 01:15:34.535657 waagent[1806]: 2026-03-07T01:15:34.535589Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Mar 7 01:15:34.551704 waagent[1806]: 2026-03-07T01:15:34.538465Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Mar 7 01:15:34.551704 waagent[1806]: 2026-03-07T01:15:34.539387Z INFO Daemon Daemon Test for route to 168.63.129.16
Mar 7 01:15:34.551704 waagent[1806]: 2026-03-07T01:15:34.540029Z INFO Daemon Daemon Route to 168.63.129.16 exists
Mar 7 01:15:34.551704 waagent[1806]: 2026-03-07T01:15:34.540948Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Mar 7 01:15:34.688100 waagent[1806]: 2026-03-07T01:15:34.688034Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Mar 7 01:15:34.691972 waagent[1806]: 2026-03-07T01:15:34.691932Z INFO Daemon Daemon Wire protocol version:2012-11-30
Mar 7 01:15:34.694925 waagent[1806]: 2026-03-07T01:15:34.694831Z INFO Daemon Daemon Server preferred version:2015-04-05
Mar 7 01:15:35.575476 waagent[1806]: 2026-03-07T01:15:35.575375Z INFO Daemon Daemon Initializing goal state during protocol detection
Mar 7 01:15:35.587057 waagent[1806]: 2026-03-07T01:15:35.576931Z INFO Daemon Daemon Forcing an update of the goal state.
Mar 7 01:15:35.587057 waagent[1806]: 2026-03-07T01:15:35.581073Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Mar 7 01:15:35.597985 waagent[1806]: 2026-03-07T01:15:35.597935Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179
Mar 7 01:15:35.616715 waagent[1806]: 2026-03-07T01:15:35.599605Z INFO Daemon
Mar 7 01:15:35.616715 waagent[1806]: 2026-03-07T01:15:35.600221Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 3a60b853-29d8-49df-ad9c-ded25b566fb7 eTag: 10273990629518266809 source: Fabric]
Mar 7 01:15:35.616715 waagent[1806]: 2026-03-07T01:15:35.601726Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Mar 7 01:15:35.616715 waagent[1806]: 2026-03-07T01:15:35.603059Z INFO Daemon
Mar 7 01:15:35.616715 waagent[1806]: 2026-03-07T01:15:35.604334Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Mar 7 01:15:35.619854 waagent[1806]: 2026-03-07T01:15:35.619812Z INFO Daemon Daemon Downloading artifacts profile blob
Mar 7 01:15:35.755709 waagent[1806]: 2026-03-07T01:15:35.755623Z INFO Daemon Downloaded certificate {'thumbprint': '69E3BF0B1C2393C47B711ACFADFCF2CC1A8F2359', 'hasPrivateKey': True}
Mar 7 01:15:35.761482 waagent[1806]: 2026-03-07T01:15:35.761416Z INFO Daemon Fetch goal state completed
Mar 7 01:15:35.808881 waagent[1806]: 2026-03-07T01:15:35.808805Z INFO Daemon Daemon Starting provisioning
Mar 7 01:15:35.812109 waagent[1806]: 2026-03-07T01:15:35.812042Z INFO Daemon Daemon Handle ovf-env.xml.
Mar 7 01:15:35.817627 waagent[1806]: 2026-03-07T01:15:35.813229Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-baf9cf72b8]
Mar 7 01:15:35.923834 waagent[1806]: 2026-03-07T01:15:35.923754Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-baf9cf72b8]
Mar 7 01:15:35.929118 waagent[1806]: 2026-03-07T01:15:35.929045Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Mar 7 01:15:35.933779 waagent[1806]: 2026-03-07T01:15:35.933399Z INFO Daemon Daemon Primary interface is [eth0]
Mar 7 01:15:35.951023 systemd-networkd[1579]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:15:35.951035 systemd-networkd[1579]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:15:35.951088 systemd-networkd[1579]: eth0: DHCP lease lost
Mar 7 01:15:35.952289 waagent[1806]: 2026-03-07T01:15:35.952216Z INFO Daemon Daemon Create user account if not exists
Mar 7 01:15:35.956000 waagent[1806]: 2026-03-07T01:15:35.955945Z INFO Daemon Daemon User core already exists, skip useradd
Mar 7 01:15:35.971431 waagent[1806]: 2026-03-07T01:15:35.957028Z INFO Daemon Daemon Configure sudoer
Mar 7 01:15:35.971431 waagent[1806]: 2026-03-07T01:15:35.958329Z INFO Daemon Daemon Configure sshd
Mar 7 01:15:35.971431 waagent[1806]: 2026-03-07T01:15:35.958879Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Mar 7 01:15:35.971431 waagent[1806]: 2026-03-07T01:15:35.959714Z INFO Daemon Daemon Deploy ssh public key.
Mar 7 01:15:35.973174 systemd-networkd[1579]: eth0: DHCPv6 lease lost
Mar 7 01:15:36.004199 systemd-networkd[1579]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16
Mar 7 01:15:38.151198 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:15:38.157650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:15:38.888038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:15:38.892640 (kubelet)[1886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:15:39.258367 kubelet[1886]: E0307 01:15:39.258310    1886 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:15:39.262113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:15:39.262328 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:15:49.400973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 7 01:15:49.406582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:15:49.512435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:15:49.526459 (kubelet)[1902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:15:50.083257 chronyd[1721]: Selected source PHC0
Mar 7 01:15:50.174674 kubelet[1902]: E0307 01:15:50.174584    1902 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:15:50.177102 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:15:50.177350 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:16:00.400969 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 7 01:16:00.406361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:16:00.511037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:16:00.515405 (kubelet)[1917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:16:01.243298 kubelet[1917]: E0307 01:16:01.243222    1917 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:16:01.245934 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:16:01.246140 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:16:06.017558 waagent[1806]: 2026-03-07T01:16:06.017493Z INFO Daemon Daemon Provisioning complete
Mar 7 01:16:06.028827 waagent[1806]: 2026-03-07T01:16:06.028767Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Mar 7 01:16:06.037069 waagent[1806]: 2026-03-07T01:16:06.030067Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Mar 7 01:16:06.037069 waagent[1806]: 2026-03-07T01:16:06.031184Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Mar 7 01:16:06.155510 waagent[1924]: 2026-03-07T01:16:06.155414Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Mar 7 01:16:06.155916 waagent[1924]: 2026-03-07T01:16:06.155570Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6
Mar 7 01:16:06.155916 waagent[1924]: 2026-03-07T01:16:06.155653Z INFO ExtHandler ExtHandler Python: 3.11.9
Mar 7 01:16:06.172854 waagent[1924]: 2026-03-07T01:16:06.172789Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Mar 7 01:16:06.173050 waagent[1924]: 2026-03-07T01:16:06.173006Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 7 01:16:06.173145 waagent[1924]: 2026-03-07T01:16:06.173102Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 7 01:16:06.179946 waagent[1924]: 2026-03-07T01:16:06.179881Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Mar 7 01:16:06.188684 waagent[1924]: 2026-03-07T01:16:06.188632Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179
Mar 7 01:16:06.189155 waagent[1924]: 2026-03-07T01:16:06.189085Z INFO ExtHandler
Mar 7 01:16:06.189251 waagent[1924]: 2026-03-07T01:16:06.189209Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 509fb3b3-4fb3-4bf8-afab-b6561d56a508 eTag: 10273990629518266809 source: Fabric]
Mar 7 01:16:06.189579 waagent[1924]: 2026-03-07T01:16:06.189527Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Mar 7 01:16:06.190160 waagent[1924]: 2026-03-07T01:16:06.190096Z INFO ExtHandler
Mar 7 01:16:06.190240 waagent[1924]: 2026-03-07T01:16:06.190201Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Mar 7 01:16:06.193502 waagent[1924]: 2026-03-07T01:16:06.193459Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Mar 7 01:16:06.248875 waagent[1924]: 2026-03-07T01:16:06.248796Z INFO ExtHandler Downloaded certificate {'thumbprint': '69E3BF0B1C2393C47B711ACFADFCF2CC1A8F2359', 'hasPrivateKey': True}
Mar 7 01:16:06.249415 waagent[1924]: 2026-03-07T01:16:06.249358Z INFO ExtHandler Fetch goal state completed
Mar 7 01:16:06.261261 waagent[1924]: 2026-03-07T01:16:06.261202Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1924
Mar 7 01:16:06.261422 waagent[1924]: 2026-03-07T01:16:06.261375Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Mar 7 01:16:06.263015 waagent[1924]: 2026-03-07T01:16:06.262961Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk']
Mar 7 01:16:06.263408 waagent[1924]: 2026-03-07T01:16:06.263359Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Mar 7 01:16:06.272983 waagent[1924]: 2026-03-07T01:16:06.272905Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Mar 7 01:16:06.273136 waagent[1924]: 2026-03-07T01:16:06.273093Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Mar 7 01:16:06.279911 waagent[1924]: 2026-03-07T01:16:06.279535Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Mar 7 01:16:06.286206 systemd[1]: Reloading requested from client PID 1937 ('systemctl') (unit waagent.service)...
Mar 7 01:16:06.286223 systemd[1]: Reloading...
Mar 7 01:16:06.378188 zram_generator::config[1974]: No configuration found.
Mar 7 01:16:06.490912 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:16:06.572492 systemd[1]: Reloading finished in 285 ms.
Mar 7 01:16:06.598170 waagent[1924]: 2026-03-07T01:16:06.596778Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Mar 7 01:16:06.605914 systemd[1]: Reloading requested from client PID 2028 ('systemctl') (unit waagent.service)...
Mar 7 01:16:06.605931 systemd[1]: Reloading...
Mar 7 01:16:06.692243 zram_generator::config[2062]: No configuration found.
Mar 7 01:16:06.808075 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:16:06.890364 systemd[1]: Reloading finished in 284 ms.
Mar 7 01:16:06.915333 waagent[1924]: 2026-03-07T01:16:06.914314Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Mar 7 01:16:06.915333 waagent[1924]: 2026-03-07T01:16:06.914502Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Mar 7 01:16:07.026085 waagent[1924]: 2026-03-07T01:16:07.025994Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Mar 7 01:16:07.026694 waagent[1924]: 2026-03-07T01:16:07.026622Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Mar 7 01:16:07.027486 waagent[1924]: 2026-03-07T01:16:07.027408Z INFO ExtHandler ExtHandler Starting env monitor service.
Mar 7 01:16:07.027875 waagent[1924]: 2026-03-07T01:16:07.027809Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Mar 7 01:16:07.028078 waagent[1924]: 2026-03-07T01:16:07.028024Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 7 01:16:07.028605 waagent[1924]: 2026-03-07T01:16:07.028507Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 7 01:16:07.028679 waagent[1924]: 2026-03-07T01:16:07.028628Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 7 01:16:07.028876 waagent[1924]: 2026-03-07T01:16:07.028720Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 7 01:16:07.028876 waagent[1924]: 2026-03-07T01:16:07.028810Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Mar 7 01:16:07.028999 waagent[1924]: 2026-03-07T01:16:07.028918Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Mar 7 01:16:07.029619 waagent[1924]: 2026-03-07T01:16:07.029535Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Mar 7 01:16:07.029911 waagent[1924]: 2026-03-07T01:16:07.029844Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Mar 7 01:16:07.030071 waagent[1924]: 2026-03-07T01:16:07.029959Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Mar 7 01:16:07.030624 waagent[1924]: 2026-03-07T01:16:07.030543Z INFO EnvHandler ExtHandler Configure routes Mar 7 01:16:07.030703 waagent[1924]: 2026-03-07T01:16:07.030624Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 7 01:16:07.030703 waagent[1924]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 7 01:16:07.030703 waagent[1924]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Mar 7 01:16:07.030703 waagent[1924]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 7 01:16:07.030703 waagent[1924]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 7 01:16:07.030703 waagent[1924]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 7 01:16:07.030703 waagent[1924]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 7 01:16:07.031109 waagent[1924]: 2026-03-07T01:16:07.031028Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 7 01:16:07.033769 waagent[1924]: 2026-03-07T01:16:07.033228Z INFO EnvHandler ExtHandler Gateway:None Mar 7 01:16:07.034355 waagent[1924]: 2026-03-07T01:16:07.034313Z INFO EnvHandler ExtHandler Routes:None Mar 7 01:16:07.039912 waagent[1924]: 2026-03-07T01:16:07.039868Z INFO ExtHandler ExtHandler Mar 7 01:16:07.040000 waagent[1924]: 2026-03-07T01:16:07.039961Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: de2dfe4d-3e53-4f19-9ee1-8d061b554874 correlation 44d237a6-1653-4207-9ec5-23280158c9c8 created: 2026-03-07T01:14:53.754959Z] Mar 7 01:16:07.040685 waagent[1924]: 2026-03-07T01:16:07.040627Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 7 01:16:07.042655 waagent[1924]: 2026-03-07T01:16:07.041498Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Mar 7 01:16:07.051178 waagent[1924]: 2026-03-07T01:16:07.050956Z INFO MonitorHandler ExtHandler Network interfaces: Mar 7 01:16:07.051178 waagent[1924]: Executing ['ip', '-a', '-o', 'link']: Mar 7 01:16:07.051178 waagent[1924]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 7 01:16:07.051178 waagent[1924]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:20:df:85 brd ff:ff:ff:ff:ff:ff Mar 7 01:16:07.051178 waagent[1924]: 3: enP62581s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:20:df:85 brd ff:ff:ff:ff:ff:ff\ altname enP62581p0s2 Mar 7 01:16:07.051178 waagent[1924]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 7 01:16:07.051178 waagent[1924]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 7 01:16:07.051178 waagent[1924]: 2: eth0 inet 10.200.8.14/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 7 01:16:07.051178 waagent[1924]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 7 01:16:07.051178 waagent[1924]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 7 01:16:07.051178 waagent[1924]: 2: eth0 inet6 fe80::7e1e:52ff:fe20:df85/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 7 01:16:07.075244 waagent[1924]: 2026-03-07T01:16:07.074378Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 1DD37149-1DD7-4FF1-8061-F0EFEC7F5C1B;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 7 01:16:07.099800 waagent[1924]: 2026-03-07T01:16:07.099742Z INFO EnvHandler ExtHandler Successfully added Azure 
fabric firewall rules. Current Firewall rules: Mar 7 01:16:07.099800 waagent[1924]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:16:07.099800 waagent[1924]: pkts bytes target prot opt in out source destination Mar 7 01:16:07.099800 waagent[1924]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:16:07.099800 waagent[1924]: pkts bytes target prot opt in out source destination Mar 7 01:16:07.099800 waagent[1924]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:16:07.099800 waagent[1924]: pkts bytes target prot opt in out source destination Mar 7 01:16:07.099800 waagent[1924]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 7 01:16:07.099800 waagent[1924]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 7 01:16:07.099800 waagent[1924]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 7 01:16:07.103347 waagent[1924]: 2026-03-07T01:16:07.103297Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 7 01:16:07.103347 waagent[1924]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:16:07.103347 waagent[1924]: pkts bytes target prot opt in out source destination Mar 7 01:16:07.103347 waagent[1924]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:16:07.103347 waagent[1924]: pkts bytes target prot opt in out source destination Mar 7 01:16:07.103347 waagent[1924]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 7 01:16:07.103347 waagent[1924]: pkts bytes target prot opt in out source destination Mar 7 01:16:07.103347 waagent[1924]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 7 01:16:07.103347 waagent[1924]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 7 01:16:07.103347 waagent[1924]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 7 01:16:07.103835 waagent[1924]: 2026-03-07T01:16:07.103805Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 7 01:16:09.941602 systemd[1]: Created slice 
system-sshd.slice - Slice /system/sshd. Mar 7 01:16:09.946474 systemd[1]: Started sshd@0-10.200.8.14:22-10.200.16.10:32846.service - OpenSSH per-connection server daemon (10.200.16.10:32846). Mar 7 01:16:10.581459 sshd[2154]: Accepted publickey for core from 10.200.16.10 port 32846 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:16:10.583350 sshd[2154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:10.587491 systemd-logind[1683]: New session 3 of user core. Mar 7 01:16:10.594303 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:16:10.621244 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Mar 7 01:16:11.133103 systemd[1]: Started sshd@1-10.200.8.14:22-10.200.16.10:32860.service - OpenSSH per-connection server daemon (10.200.16.10:32860). Mar 7 01:16:11.401040 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 7 01:16:11.407631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:16:11.525382 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:16:11.530309 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:16:11.566009 kubelet[2168]: E0307 01:16:11.565960 2168 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:16:11.568427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:16:11.568656 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 7 01:16:11.760839 sshd[2159]: Accepted publickey for core from 10.200.16.10 port 32860 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:16:11.762598 sshd[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:11.767471 systemd-logind[1683]: New session 4 of user core. Mar 7 01:16:11.778500 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:16:11.799240 update_engine[1684]: I20260307 01:16:11.799185 1684 update_attempter.cc:509] Updating boot flags... Mar 7 01:16:12.205696 sshd[2159]: pam_unix(sshd:session): session closed for user core Mar 7 01:16:12.208703 systemd[1]: sshd@1-10.200.8.14:22-10.200.16.10:32860.service: Deactivated successfully. Mar 7 01:16:12.210703 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:16:12.212372 systemd-logind[1683]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:16:12.213307 systemd-logind[1683]: Removed session 4. Mar 7 01:16:12.262176 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2193) Mar 7 01:16:12.347423 systemd[1]: Started sshd@2-10.200.8.14:22-10.200.16.10:32866.service - OpenSSH per-connection server daemon (10.200.16.10:32866). Mar 7 01:16:12.974185 sshd[2221]: Accepted publickey for core from 10.200.16.10 port 32866 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:16:12.975580 sshd[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:12.980216 systemd-logind[1683]: New session 5 of user core. Mar 7 01:16:12.986291 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:16:13.413278 sshd[2221]: pam_unix(sshd:session): session closed for user core Mar 7 01:16:13.417573 systemd-logind[1683]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:16:13.418340 systemd[1]: sshd@2-10.200.8.14:22-10.200.16.10:32866.service: Deactivated successfully. 
Mar 7 01:16:13.420076 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:16:13.421008 systemd-logind[1683]: Removed session 5. Mar 7 01:16:13.524166 systemd[1]: Started sshd@3-10.200.8.14:22-10.200.16.10:32868.service - OpenSSH per-connection server daemon (10.200.16.10:32868). Mar 7 01:16:14.156040 sshd[2228]: Accepted publickey for core from 10.200.16.10 port 32868 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:16:14.157556 sshd[2228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:14.162213 systemd-logind[1683]: New session 6 of user core. Mar 7 01:16:14.168300 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 01:16:14.599970 sshd[2228]: pam_unix(sshd:session): session closed for user core Mar 7 01:16:14.604111 systemd[1]: sshd@3-10.200.8.14:22-10.200.16.10:32868.service: Deactivated successfully. Mar 7 01:16:14.606029 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:16:14.606715 systemd-logind[1683]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:16:14.607648 systemd-logind[1683]: Removed session 6. Mar 7 01:16:14.714308 systemd[1]: Started sshd@4-10.200.8.14:22-10.200.16.10:32884.service - OpenSSH per-connection server daemon (10.200.16.10:32884). Mar 7 01:16:15.344017 sshd[2235]: Accepted publickey for core from 10.200.16.10 port 32884 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:16:15.345485 sshd[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:15.350548 systemd-logind[1683]: New session 7 of user core. Mar 7 01:16:15.359338 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 7 01:16:15.714764 sudo[2238]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:16:15.715136 sudo[2238]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:16:15.734658 sudo[2238]: pam_unix(sudo:session): session closed for user root Mar 7 01:16:15.835283 sshd[2235]: pam_unix(sshd:session): session closed for user core Mar 7 01:16:15.838405 systemd[1]: sshd@4-10.200.8.14:22-10.200.16.10:32884.service: Deactivated successfully. Mar 7 01:16:15.840483 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:16:15.841904 systemd-logind[1683]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:16:15.842922 systemd-logind[1683]: Removed session 7. Mar 7 01:16:15.950027 systemd[1]: Started sshd@5-10.200.8.14:22-10.200.16.10:32896.service - OpenSSH per-connection server daemon (10.200.16.10:32896). Mar 7 01:16:16.575866 sshd[2243]: Accepted publickey for core from 10.200.16.10 port 32896 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:16:16.577434 sshd[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:16.582419 systemd-logind[1683]: New session 8 of user core. Mar 7 01:16:16.591324 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 01:16:16.919474 sudo[2247]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:16:16.919840 sudo[2247]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:16:16.923129 sudo[2247]: pam_unix(sudo:session): session closed for user root Mar 7 01:16:16.928265 sudo[2246]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:16:16.928608 sudo[2246]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:16:16.940448 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Mar 7 01:16:16.943332 auditctl[2250]: No rules Mar 7 01:16:16.944442 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:16:16.944684 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 01:16:16.950875 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:16:16.972949 augenrules[2268]: No rules Mar 7 01:16:16.974555 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:16:16.975779 sudo[2246]: pam_unix(sudo:session): session closed for user root Mar 7 01:16:17.076442 sshd[2243]: pam_unix(sshd:session): session closed for user core Mar 7 01:16:17.079764 systemd[1]: sshd@5-10.200.8.14:22-10.200.16.10:32896.service: Deactivated successfully. Mar 7 01:16:17.081805 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 01:16:17.083302 systemd-logind[1683]: Session 8 logged out. Waiting for processes to exit. Mar 7 01:16:17.084292 systemd-logind[1683]: Removed session 8. Mar 7 01:16:17.187445 systemd[1]: Started sshd@6-10.200.8.14:22-10.200.16.10:32906.service - OpenSSH per-connection server daemon (10.200.16.10:32906). Mar 7 01:16:17.818970 sshd[2276]: Accepted publickey for core from 10.200.16.10 port 32906 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:16:17.820481 sshd[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:17.825487 systemd-logind[1683]: New session 9 of user core. Mar 7 01:16:17.834311 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 01:16:18.163681 sudo[2279]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:16:18.164049 sudo[2279]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:16:18.700441 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 7 01:16:18.701721 (dockerd)[2295]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:16:19.263090 dockerd[2295]: time="2026-03-07T01:16:19.262878984Z" level=info msg="Starting up" Mar 7 01:16:19.507898 dockerd[2295]: time="2026-03-07T01:16:19.507851312Z" level=info msg="Loading containers: start." Mar 7 01:16:19.625170 kernel: Initializing XFRM netlink socket Mar 7 01:16:19.734985 systemd-networkd[1579]: docker0: Link UP Mar 7 01:16:19.758926 dockerd[2295]: time="2026-03-07T01:16:19.758882964Z" level=info msg="Loading containers: done." Mar 7 01:16:19.773134 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2131864054-merged.mount: Deactivated successfully. Mar 7 01:16:19.779604 dockerd[2295]: time="2026-03-07T01:16:19.779569489Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:16:19.779769 dockerd[2295]: time="2026-03-07T01:16:19.779689591Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:16:19.779846 dockerd[2295]: time="2026-03-07T01:16:19.779802993Z" level=info msg="Daemon has completed initialization" Mar 7 01:16:19.846274 dockerd[2295]: time="2026-03-07T01:16:19.846223656Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:16:19.846880 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:16:20.371406 containerd[1713]: time="2026-03-07T01:16:20.371364034Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 7 01:16:21.217498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2068711003.mount: Deactivated successfully. 
Mar 7 01:16:21.651791 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 7 01:16:21.659488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:16:21.797302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:16:21.803223 (kubelet)[2472]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:16:21.842060 kubelet[2472]: E0307 01:16:21.842021 2472 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:16:21.844516 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:16:21.844801 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 7 01:16:23.526043 containerd[1713]: time="2026-03-07T01:16:23.525986242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:23.529261 containerd[1713]: time="2026-03-07T01:16:23.529201696Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116194" Mar 7 01:16:23.532713 containerd[1713]: time="2026-03-07T01:16:23.532637453Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:23.537892 containerd[1713]: time="2026-03-07T01:16:23.537559734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:23.539070 containerd[1713]: time="2026-03-07T01:16:23.538577351Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 3.167169116s" Mar 7 01:16:23.539070 containerd[1713]: time="2026-03-07T01:16:23.538618952Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 7 01:16:23.539619 containerd[1713]: time="2026-03-07T01:16:23.539596068Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 7 01:16:25.279801 containerd[1713]: time="2026-03-07T01:16:25.279739889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:25.283026 containerd[1713]: time="2026-03-07T01:16:25.282829440Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021818" Mar 7 01:16:25.285761 containerd[1713]: time="2026-03-07T01:16:25.285611486Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:25.291275 containerd[1713]: time="2026-03-07T01:16:25.291210279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:25.292453 containerd[1713]: time="2026-03-07T01:16:25.292268196Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.752640828s" Mar 7 01:16:25.292453 containerd[1713]: time="2026-03-07T01:16:25.292309797Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 7 01:16:25.293189 containerd[1713]: time="2026-03-07T01:16:25.292975308Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 7 01:16:26.729740 containerd[1713]: time="2026-03-07T01:16:26.729686104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:26.732727 containerd[1713]: time="2026-03-07T01:16:26.732663753Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162754" Mar 7 01:16:26.736621 containerd[1713]: time="2026-03-07T01:16:26.736483516Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:26.741962 containerd[1713]: time="2026-03-07T01:16:26.741911006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:26.743091 containerd[1713]: time="2026-03-07T01:16:26.742952523Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.449942514s" Mar 7 01:16:26.743091 containerd[1713]: time="2026-03-07T01:16:26.742992724Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 7 01:16:26.744088 containerd[1713]: time="2026-03-07T01:16:26.744058642Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 7 01:16:27.932641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3785102568.mount: Deactivated successfully. 
Mar 7 01:16:28.468195 containerd[1713]: time="2026-03-07T01:16:28.468126497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:28.470678 containerd[1713]: time="2026-03-07T01:16:28.470531437Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828655" Mar 7 01:16:28.475023 containerd[1713]: time="2026-03-07T01:16:28.474770907Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:28.480264 containerd[1713]: time="2026-03-07T01:16:28.480210897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:28.481181 containerd[1713]: time="2026-03-07T01:16:28.480812007Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.736715865s" Mar 7 01:16:28.481181 containerd[1713]: time="2026-03-07T01:16:28.480857908Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 7 01:16:28.481729 containerd[1713]: time="2026-03-07T01:16:28.481701522Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 7 01:16:29.160294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount306713829.mount: Deactivated successfully. 
Mar 7 01:16:30.502216 containerd[1713]: time="2026-03-07T01:16:30.502137750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:30.504229 containerd[1713]: time="2026-03-07T01:16:30.504166906Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Mar 7 01:16:30.507717 containerd[1713]: time="2026-03-07T01:16:30.507658902Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:30.512022 containerd[1713]: time="2026-03-07T01:16:30.511965320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:30.513347 containerd[1713]: time="2026-03-07T01:16:30.513164753Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.031306627s" Mar 7 01:16:30.513347 containerd[1713]: time="2026-03-07T01:16:30.513206754Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 7 01:16:30.513998 containerd[1713]: time="2026-03-07T01:16:30.513846571Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 7 01:16:31.100803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2346244283.mount: Deactivated successfully. 
Mar 7 01:16:31.117131 containerd[1713]: time="2026-03-07T01:16:31.117088215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:31.119715 containerd[1713]: time="2026-03-07T01:16:31.119645285Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Mar 7 01:16:31.122542 containerd[1713]: time="2026-03-07T01:16:31.122472762Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:31.127171 containerd[1713]: time="2026-03-07T01:16:31.126804681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:31.127691 containerd[1713]: time="2026-03-07T01:16:31.127522901Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 613.402021ms" Mar 7 01:16:31.127691 containerd[1713]: time="2026-03-07T01:16:31.127559402Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 7 01:16:31.128417 containerd[1713]: time="2026-03-07T01:16:31.128387824Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 7 01:16:31.706657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount619035738.mount: Deactivated successfully. Mar 7 01:16:31.900884 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Mar 7 01:16:31.907615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:16:32.085425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:16:32.096637 (kubelet)[2596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:16:32.750965 kubelet[2596]: E0307 01:16:32.750899 2596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:16:32.753420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:16:32.754136 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:16:33.890859 containerd[1713]: time="2026-03-07T01:16:33.890787180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:33.902784 containerd[1713]: time="2026-03-07T01:16:33.902708507Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718848" Mar 7 01:16:33.914561 containerd[1713]: time="2026-03-07T01:16:33.914505430Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:33.919637 containerd[1713]: time="2026-03-07T01:16:33.919589569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:33.921748 containerd[1713]: time="2026-03-07T01:16:33.920637398Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" 
with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.792213873s" Mar 7 01:16:33.921748 containerd[1713]: time="2026-03-07T01:16:33.920676699Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 7 01:16:37.054839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:16:37.062721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:16:37.102489 systemd[1]: Reloading requested from client PID 2682 ('systemctl') (unit session-9.scope)... Mar 7 01:16:37.102507 systemd[1]: Reloading... Mar 7 01:16:37.242222 zram_generator::config[2722]: No configuration found. Mar 7 01:16:37.358168 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:16:37.438562 systemd[1]: Reloading finished in 335 ms. Mar 7 01:16:37.488289 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 7 01:16:37.488384 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 7 01:16:37.488653 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:16:37.493527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:16:37.848202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
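The kubelet crash loop above (exit 1 because /var/lib/kubelet/config.yaml does not exist yet, followed by systemd scheduling restart number 6) can be tracked from the "Scheduled restart job" lines. A small sketch, with a hypothetical `restart_info` helper:

```python
import re

# Hypothetical helper: extract the unit name and restart counter from
# systemd "Scheduled restart job" messages like the kubelet one above.
RESTART = re.compile(r'(\S+\.service): Scheduled restart job, restart counter is at (\d+)\.')

def restart_info(line):
    m = RESTART.search(line)
    return (m.group(1), int(m.group(2))) if m else None

print(restart_info(
    'systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.'
))
```

A climbing counter for kubelet.service before kubeadm has written its config file is expected during first boot, not a fault by itself.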
Mar 7 01:16:37.860478 (kubelet)[2792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:16:37.894386 kubelet[2792]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:16:37.894386 kubelet[2792]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:16:37.894386 kubelet[2792]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:16:38.579975 kubelet[2792]: I0307 01:16:38.579306 2792 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:16:39.504064 kubelet[2792]: I0307 01:16:39.504015 2792 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:16:39.504064 kubelet[2792]: I0307 01:16:39.504046 2792 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:16:39.504564 kubelet[2792]: I0307 01:16:39.504403 2792 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:16:39.538177 kubelet[2792]: E0307 01:16:39.538107 2792 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:16:39.539283 kubelet[2792]: I0307 01:16:39.539252 2792 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:16:39.546253 kubelet[2792]: E0307 01:16:39.546218 2792 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:16:39.546253 kubelet[2792]: I0307 01:16:39.546251 2792 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 01:16:39.549670 kubelet[2792]: I0307 01:16:39.549637 2792 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 7 01:16:39.550532 kubelet[2792]: I0307 01:16:39.550499 2792 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:16:39.550733 kubelet[2792]: I0307 01:16:39.550530 2792 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.6-n-baf9cf72b8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:16:39.550872 kubelet[2792]: I0307 01:16:39.550738 2792 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:16:39.550872 kubelet[2792]: I0307 01:16:39.550753 2792 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 01:16:39.550950 kubelet[2792]: I0307 01:16:39.550892 2792 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:16:39.555257 kubelet[2792]: I0307 01:16:39.555234 2792 kubelet.go:480] "Attempting to sync node 
with API server" Mar 7 01:16:39.555257 kubelet[2792]: I0307 01:16:39.555260 2792 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:16:39.555387 kubelet[2792]: I0307 01:16:39.555289 2792 kubelet.go:386] "Adding apiserver pod source" Mar 7 01:16:39.557136 kubelet[2792]: I0307 01:16:39.556863 2792 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:16:39.562216 kubelet[2792]: E0307 01:16:39.561959 2792 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-baf9cf72b8&limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:16:39.562583 kubelet[2792]: E0307 01:16:39.562559 2792 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:16:39.562736 kubelet[2792]: I0307 01:16:39.562725 2792 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:16:39.563365 kubelet[2792]: I0307 01:16:39.563341 2792 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:16:39.564410 kubelet[2792]: W0307 01:16:39.564383 2792 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
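The reflector and lease failures above all dial the same endpoint (10.200.8.14:6443) and fail with "connection refused" because the API server pod is not up yet. A sketch that tallies refused dials per target, useful for confirming the errors share one cause (the helper name `refused_counts` is an assumption, not kubelet code):

```python
import re
from collections import Counter

# Hypothetical helper: count "connection refused" dial errors per
# host:port across journal lines like the reflector failures above.
DIAL = re.compile(r'dial tcp (\d+\.\d+\.\d+\.\d+:\d+): connect: connection refused')

def refused_counts(lines):
    c = Counter()
    for line in lines:
        for addr in DIAL.findall(line):
            c[addr] += 1
    return c

logs = [
    'err="failed to list *v1.Node: ... dial tcp 10.200.8.14:6443: connect: connection refused"',
    'err="failed to list *v1.Service: ... dial tcp 10.200.8.14:6443: connect: connection refused"',
]
print(refused_counts(logs))
```

When every count collapses onto one address, the fix is to wait for (or debug) that single endpoint rather than chase each reflector error individually.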
Mar 7 01:16:39.569569 kubelet[2792]: I0307 01:16:39.569545 2792 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 01:16:39.569719 kubelet[2792]: I0307 01:16:39.569706 2792 server.go:1289] "Started kubelet" Mar 7 01:16:39.570124 kubelet[2792]: I0307 01:16:39.570039 2792 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:16:39.574259 kubelet[2792]: I0307 01:16:39.574211 2792 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:16:39.574810 kubelet[2792]: I0307 01:16:39.574639 2792 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:16:39.576392 kubelet[2792]: I0307 01:16:39.574930 2792 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:16:39.582341 kubelet[2792]: E0307 01:16:39.580760 2792 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-baf9cf72b8.189a6a39bebeb11c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-baf9cf72b8,UID:ci-4081.3.6-n-baf9cf72b8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-baf9cf72b8,},FirstTimestamp:2026-03-07 01:16:39.569559836 +0000 UTC m=+1.705207144,LastTimestamp:2026-03-07 01:16:39.569559836 +0000 UTC m=+1.705207144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-baf9cf72b8,}" Mar 7 01:16:39.585509 kubelet[2792]: I0307 01:16:39.583629 2792 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:16:39.585509 kubelet[2792]: I0307 01:16:39.584992 2792 dynamic_serving_content.go:135] "Starting 
controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:16:39.586927 kubelet[2792]: I0307 01:16:39.586887 2792 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:16:39.587115 kubelet[2792]: E0307 01:16:39.587089 2792 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-baf9cf72b8\" not found" Mar 7 01:16:39.587227 kubelet[2792]: I0307 01:16:39.587144 2792 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 01:16:39.587227 kubelet[2792]: I0307 01:16:39.587220 2792 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:16:39.588741 kubelet[2792]: E0307 01:16:39.588639 2792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-baf9cf72b8?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="200ms" Mar 7 01:16:39.590029 kubelet[2792]: I0307 01:16:39.589544 2792 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:16:39.590029 kubelet[2792]: I0307 01:16:39.589646 2792 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:16:39.591809 kubelet[2792]: E0307 01:16:39.591766 2792 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:16:39.592306 kubelet[2792]: I0307 01:16:39.592290 2792 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:16:39.611212 kubelet[2792]: E0307 01:16:39.608790 
2792 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:16:39.620320 kubelet[2792]: I0307 01:16:39.620288 2792 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:16:39.622103 kubelet[2792]: I0307 01:16:39.622077 2792 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 7 01:16:39.622197 kubelet[2792]: I0307 01:16:39.622125 2792 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:16:39.622197 kubelet[2792]: I0307 01:16:39.622173 2792 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:16:39.622197 kubelet[2792]: I0307 01:16:39.622184 2792 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 01:16:39.622340 kubelet[2792]: E0307 01:16:39.622225 2792 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:16:39.625249 kubelet[2792]: E0307 01:16:39.625225 2792 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:16:39.626781 kubelet[2792]: I0307 01:16:39.626761 2792 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:16:39.626781 kubelet[2792]: I0307 01:16:39.626778 2792 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:16:39.626886 kubelet[2792]: I0307 01:16:39.626794 2792 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:16:39.632336 kubelet[2792]: I0307 01:16:39.632291 2792 policy_none.go:49] "None policy: Start" Mar 7 01:16:39.632553 kubelet[2792]: 
I0307 01:16:39.632347 2792 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:16:39.632553 kubelet[2792]: I0307 01:16:39.632387 2792 state_mem.go:35] "Initializing new in-memory state store" Mar 7 01:16:39.642312 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 7 01:16:39.657106 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 01:16:39.660558 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 7 01:16:39.668471 kubelet[2792]: E0307 01:16:39.668444 2792 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:16:39.668668 kubelet[2792]: I0307 01:16:39.668648 2792 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:16:39.668734 kubelet[2792]: I0307 01:16:39.668671 2792 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:16:39.669143 kubelet[2792]: I0307 01:16:39.669095 2792 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:16:39.670764 kubelet[2792]: E0307 01:16:39.670737 2792 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:16:39.670839 kubelet[2792]: E0307 01:16:39.670810 2792 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-baf9cf72b8\" not found" Mar 7 01:16:39.736364 systemd[1]: Created slice kubepods-burstable-podcfe75f25367b872254ffd04f43979bc8.slice - libcontainer container kubepods-burstable-podcfe75f25367b872254ffd04f43979bc8.slice. 
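The HardEvictionThresholds in the nodeConfig dump above mix absolute quantities (memory.available < 100Mi) and percentages (nodefs.available < 10%). A minimal sketch of how such a threshold is evaluated, under the stated assumption that a percentage is taken against total capacity; `breaches` is a hypothetical illustration, not kubelet's eviction code:

```python
# Hypothetical illustration of a hard-eviction check: a signal breaches when
# the observed available value drops below either an absolute quantity
# (e.g. 100Mi) or a percentage of capacity (e.g. 0.1 of the node filesystem).
def breaches(available, capacity, quantity=None, percentage=None):
    threshold = quantity if quantity is not None else capacity * percentage
    return available < threshold

MI = 1024 * 1024
# memory.available at 80Mi against the 100Mi threshold from the config above:
print(breaches(80 * MI, capacity=4096 * MI, quantity=100 * MI))
# nodefs.available at 20% of capacity against the 0.1 (10%) threshold:
print(breaches(20, capacity=100, percentage=0.1))
```

The first case would trigger eviction; the second would not.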
Mar 7 01:16:39.747603 kubelet[2792]: E0307 01:16:39.747404 2792 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-baf9cf72b8\" not found" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:39.752612 systemd[1]: Created slice kubepods-burstable-pod6763cc85ccbbed1c792b8f2a7cb9f574.slice - libcontainer container kubepods-burstable-pod6763cc85ccbbed1c792b8f2a7cb9f574.slice. Mar 7 01:16:39.758329 kubelet[2792]: E0307 01:16:39.758249 2792 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-baf9cf72b8\" not found" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:39.761331 systemd[1]: Created slice kubepods-burstable-pod68ef14e0cb2b42d73557138b4c9a485e.slice - libcontainer container kubepods-burstable-pod68ef14e0cb2b42d73557138b4c9a485e.slice. Mar 7 01:16:39.763291 kubelet[2792]: E0307 01:16:39.763270 2792 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-baf9cf72b8\" not found" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:39.771039 kubelet[2792]: I0307 01:16:39.771020 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:39.771381 kubelet[2792]: E0307 01:16:39.771358 2792 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:39.789577 kubelet[2792]: E0307 01:16:39.789536 2792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-baf9cf72b8?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="400ms" Mar 7 01:16:39.889196 kubelet[2792]: I0307 01:16:39.889000 2792 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6763cc85ccbbed1c792b8f2a7cb9f574-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-baf9cf72b8\" (UID: \"6763cc85ccbbed1c792b8f2a7cb9f574\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:39.889196 kubelet[2792]: I0307 01:16:39.889062 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6763cc85ccbbed1c792b8f2a7cb9f574-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-baf9cf72b8\" (UID: \"6763cc85ccbbed1c792b8f2a7cb9f574\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:39.889196 kubelet[2792]: I0307 01:16:39.889087 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68ef14e0cb2b42d73557138b4c9a485e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-baf9cf72b8\" (UID: \"68ef14e0cb2b42d73557138b4c9a485e\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:39.889196 kubelet[2792]: I0307 01:16:39.889109 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfe75f25367b872254ffd04f43979bc8-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-baf9cf72b8\" (UID: \"cfe75f25367b872254ffd04f43979bc8\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:39.889196 kubelet[2792]: I0307 01:16:39.889133 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfe75f25367b872254ffd04f43979bc8-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-baf9cf72b8\" (UID: \"cfe75f25367b872254ffd04f43979bc8\") " 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:39.889509 kubelet[2792]: I0307 01:16:39.889172 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfe75f25367b872254ffd04f43979bc8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-baf9cf72b8\" (UID: \"cfe75f25367b872254ffd04f43979bc8\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:39.889509 kubelet[2792]: I0307 01:16:39.889193 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6763cc85ccbbed1c792b8f2a7cb9f574-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-baf9cf72b8\" (UID: \"6763cc85ccbbed1c792b8f2a7cb9f574\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:39.889509 kubelet[2792]: I0307 01:16:39.889236 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6763cc85ccbbed1c792b8f2a7cb9f574-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-baf9cf72b8\" (UID: \"6763cc85ccbbed1c792b8f2a7cb9f574\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:39.889509 kubelet[2792]: I0307 01:16:39.889285 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6763cc85ccbbed1c792b8f2a7cb9f574-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-baf9cf72b8\" (UID: \"6763cc85ccbbed1c792b8f2a7cb9f574\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:39.973734 kubelet[2792]: I0307 01:16:39.973700 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:39.974046 kubelet[2792]: E0307 
01:16:39.974010 2792 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:40.049270 containerd[1713]: time="2026-03-07T01:16:40.049135048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-baf9cf72b8,Uid:cfe75f25367b872254ffd04f43979bc8,Namespace:kube-system,Attempt:0,}" Mar 7 01:16:40.061091 containerd[1713]: time="2026-03-07T01:16:40.061051635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-baf9cf72b8,Uid:6763cc85ccbbed1c792b8f2a7cb9f574,Namespace:kube-system,Attempt:0,}" Mar 7 01:16:40.063995 containerd[1713]: time="2026-03-07T01:16:40.063960080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-baf9cf72b8,Uid:68ef14e0cb2b42d73557138b4c9a485e,Namespace:kube-system,Attempt:0,}" Mar 7 01:16:40.190772 kubelet[2792]: E0307 01:16:40.190728 2792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-baf9cf72b8?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="800ms" Mar 7 01:16:40.377246 kubelet[2792]: I0307 01:16:40.376879 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:40.377410 kubelet[2792]: E0307 01:16:40.377251 2792 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:40.504284 kubelet[2792]: E0307 01:16:40.504237 2792 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.200.8.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:16:40.528576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1422370166.mount: Deactivated successfully. Mar 7 01:16:40.558611 containerd[1713]: time="2026-03-07T01:16:40.558565528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:16:40.561325 containerd[1713]: time="2026-03-07T01:16:40.561131968Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 7 01:16:40.564561 containerd[1713]: time="2026-03-07T01:16:40.564527121Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:16:40.567969 containerd[1713]: time="2026-03-07T01:16:40.567925874Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:16:40.571164 containerd[1713]: time="2026-03-07T01:16:40.571117824Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:16:40.575056 containerd[1713]: time="2026-03-07T01:16:40.575020186Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:16:40.577962 containerd[1713]: time="2026-03-07T01:16:40.577880730Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active 
requests=0, bytes read=0" Mar 7 01:16:40.583088 containerd[1713]: time="2026-03-07T01:16:40.582999911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:16:40.584182 containerd[1713]: time="2026-03-07T01:16:40.583773223Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 519.746441ms" Mar 7 01:16:40.584885 containerd[1713]: time="2026-03-07T01:16:40.584848740Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 535.614189ms" Mar 7 01:16:40.591363 containerd[1713]: time="2026-03-07T01:16:40.591330141Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 530.207905ms" Mar 7 01:16:40.712168 kubelet[2792]: E0307 01:16:40.712095 2792 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 
01:16:40.738197 kubelet[2792]: E0307 01:16:40.738139 2792 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:16:40.885626 containerd[1713]: time="2026-03-07T01:16:40.884220029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:40.885626 containerd[1713]: time="2026-03-07T01:16:40.884289330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:40.885626 containerd[1713]: time="2026-03-07T01:16:40.884322531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:40.887242 containerd[1713]: time="2026-03-07T01:16:40.885977056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:40.892751 containerd[1713]: time="2026-03-07T01:16:40.892374257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:40.892751 containerd[1713]: time="2026-03-07T01:16:40.892432358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:40.892751 containerd[1713]: time="2026-03-07T01:16:40.892492058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:40.892751 containerd[1713]: time="2026-03-07T01:16:40.892639261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:40.905273 containerd[1713]: time="2026-03-07T01:16:40.904710650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:40.905273 containerd[1713]: time="2026-03-07T01:16:40.904803851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:40.905273 containerd[1713]: time="2026-03-07T01:16:40.904828152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:40.905273 containerd[1713]: time="2026-03-07T01:16:40.904924953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:40.924882 systemd[1]: Started cri-containerd-f40251a3a82f541ef37579ea961ce9eda5f09b9cadbb765cd6194e0d7ff53936.scope - libcontainer container f40251a3a82f541ef37579ea961ce9eda5f09b9cadbb765cd6194e0d7ff53936. Mar 7 01:16:40.932333 systemd[1]: Started cri-containerd-b4357a8bdeac277e77b4d6df3d30e95f487e98dc5c58793be277cad94e8e6cff.scope - libcontainer container b4357a8bdeac277e77b4d6df3d30e95f487e98dc5c58793be277cad94e8e6cff. Mar 7 01:16:40.944582 systemd[1]: Started cri-containerd-53749af49e49bb4d0837c2c11f0d538e45c0bef9b16f113fd24b7f9790664943.scope - libcontainer container 53749af49e49bb4d0837c2c11f0d538e45c0bef9b16f113fd24b7f9790664943. 
Mar 7 01:16:40.991866 kubelet[2792]: E0307 01:16:40.991728 2792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-baf9cf72b8?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="1.6s" Mar 7 01:16:41.023981 containerd[1713]: time="2026-03-07T01:16:41.023913517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-baf9cf72b8,Uid:cfe75f25367b872254ffd04f43979bc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4357a8bdeac277e77b4d6df3d30e95f487e98dc5c58793be277cad94e8e6cff\"" Mar 7 01:16:41.030776 containerd[1713]: time="2026-03-07T01:16:41.030734324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-baf9cf72b8,Uid:6763cc85ccbbed1c792b8f2a7cb9f574,Namespace:kube-system,Attempt:0,} returns sandbox id \"f40251a3a82f541ef37579ea961ce9eda5f09b9cadbb765cd6194e0d7ff53936\"" Mar 7 01:16:41.035541 containerd[1713]: time="2026-03-07T01:16:41.034817388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-baf9cf72b8,Uid:68ef14e0cb2b42d73557138b4c9a485e,Namespace:kube-system,Attempt:0,} returns sandbox id \"53749af49e49bb4d0837c2c11f0d538e45c0bef9b16f113fd24b7f9790664943\"" Mar 7 01:16:41.041348 containerd[1713]: time="2026-03-07T01:16:41.041272889Z" level=info msg="CreateContainer within sandbox \"b4357a8bdeac277e77b4d6df3d30e95f487e98dc5c58793be277cad94e8e6cff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:16:41.047410 containerd[1713]: time="2026-03-07T01:16:41.047378185Z" level=info msg="CreateContainer within sandbox \"53749af49e49bb4d0837c2c11f0d538e45c0bef9b16f113fd24b7f9790664943\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:16:41.052741 containerd[1713]: time="2026-03-07T01:16:41.052720768Z" level=info msg="CreateContainer within 
sandbox \"f40251a3a82f541ef37579ea961ce9eda5f09b9cadbb765cd6194e0d7ff53936\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:16:41.114438 containerd[1713]: time="2026-03-07T01:16:41.114400034Z" level=info msg="CreateContainer within sandbox \"b4357a8bdeac277e77b4d6df3d30e95f487e98dc5c58793be277cad94e8e6cff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7def2a1235dcef4499932d1cc345ad202cea7485f1061487a2217b7c7e65b51f\"" Mar 7 01:16:41.115105 containerd[1713]: time="2026-03-07T01:16:41.115072445Z" level=info msg="StartContainer for \"7def2a1235dcef4499932d1cc345ad202cea7485f1061487a2217b7c7e65b51f\"" Mar 7 01:16:41.134063 containerd[1713]: time="2026-03-07T01:16:41.133866939Z" level=info msg="CreateContainer within sandbox \"53749af49e49bb4d0837c2c11f0d538e45c0bef9b16f113fd24b7f9790664943\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8b39fb659f7c875c6e52efebe583d0d08a71497b9dc8e79c79a89bdc6d6f1c27\"" Mar 7 01:16:41.135052 containerd[1713]: time="2026-03-07T01:16:41.134419448Z" level=info msg="StartContainer for \"8b39fb659f7c875c6e52efebe583d0d08a71497b9dc8e79c79a89bdc6d6f1c27\"" Mar 7 01:16:41.142454 containerd[1713]: time="2026-03-07T01:16:41.142416173Z" level=info msg="CreateContainer within sandbox \"f40251a3a82f541ef37579ea961ce9eda5f09b9cadbb765cd6194e0d7ff53936\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0182b6af5f2ba75968f592189c69ebaedff45344bff945124635f2dff331e65e\"" Mar 7 01:16:41.142923 containerd[1713]: time="2026-03-07T01:16:41.142900281Z" level=info msg="StartContainer for \"0182b6af5f2ba75968f592189c69ebaedff45344bff945124635f2dff331e65e\"" Mar 7 01:16:41.144641 systemd[1]: Started cri-containerd-7def2a1235dcef4499932d1cc345ad202cea7485f1061487a2217b7c7e65b51f.scope - libcontainer container 7def2a1235dcef4499932d1cc345ad202cea7485f1061487a2217b7c7e65b51f. 
Mar 7 01:16:41.158915 kubelet[2792]: E0307 01:16:41.158851 2792 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-baf9cf72b8&limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:16:41.180750 kubelet[2792]: I0307 01:16:41.180721 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:41.181104 kubelet[2792]: E0307 01:16:41.181066 2792 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:41.185345 systemd[1]: Started cri-containerd-8b39fb659f7c875c6e52efebe583d0d08a71497b9dc8e79c79a89bdc6d6f1c27.scope - libcontainer container 8b39fb659f7c875c6e52efebe583d0d08a71497b9dc8e79c79a89bdc6d6f1c27. Mar 7 01:16:41.205270 systemd[1]: Started cri-containerd-0182b6af5f2ba75968f592189c69ebaedff45344bff945124635f2dff331e65e.scope - libcontainer container 0182b6af5f2ba75968f592189c69ebaedff45344bff945124635f2dff331e65e. 
Mar 7 01:16:41.226578 containerd[1713]: time="2026-03-07T01:16:41.226537691Z" level=info msg="StartContainer for \"7def2a1235dcef4499932d1cc345ad202cea7485f1061487a2217b7c7e65b51f\" returns successfully" Mar 7 01:16:41.279136 containerd[1713]: time="2026-03-07T01:16:41.279016813Z" level=info msg="StartContainer for \"0182b6af5f2ba75968f592189c69ebaedff45344bff945124635f2dff331e65e\" returns successfully" Mar 7 01:16:41.319170 containerd[1713]: time="2026-03-07T01:16:41.318371729Z" level=info msg="StartContainer for \"8b39fb659f7c875c6e52efebe583d0d08a71497b9dc8e79c79a89bdc6d6f1c27\" returns successfully" Mar 7 01:16:41.645889 kubelet[2792]: E0307 01:16:41.644810 2792 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-baf9cf72b8\" not found" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:41.651435 kubelet[2792]: E0307 01:16:41.651237 2792 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-baf9cf72b8\" not found" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:41.652372 kubelet[2792]: E0307 01:16:41.652353 2792 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-baf9cf72b8\" not found" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:42.657305 kubelet[2792]: E0307 01:16:42.657268 2792 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-baf9cf72b8\" not found" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:42.657742 kubelet[2792]: E0307 01:16:42.657712 2792 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-baf9cf72b8\" not found" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:42.658256 kubelet[2792]: E0307 01:16:42.658229 2792 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"ci-4081.3.6-n-baf9cf72b8\" not found" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:42.784005 kubelet[2792]: I0307 01:16:42.783974 2792 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:43.614387 kubelet[2792]: E0307 01:16:43.614334 2792 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-baf9cf72b8\" not found" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:43.658423 kubelet[2792]: E0307 01:16:43.658235 2792 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-baf9cf72b8\" not found" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:44.887307 kubelet[2792]: I0307 01:16:44.887266 2792 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:44.887307 kubelet[2792]: E0307 01:16:44.887309 2792 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-baf9cf72b8\": node \"ci-4081.3.6-n-baf9cf72b8\" not found" Mar 7 01:16:44.988235 kubelet[2792]: I0307 01:16:44.988186 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:45.002889 kubelet[2792]: I0307 01:16:45.002855 2792 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:16:45.003066 kubelet[2792]: I0307 01:16:45.003041 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:45.013430 kubelet[2792]: I0307 01:16:45.013384 2792 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:16:45.013581 kubelet[2792]: I0307 01:16:45.013562 2792 
kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:45.021608 kubelet[2792]: I0307 01:16:45.021506 2792 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:16:45.883765 kubelet[2792]: I0307 01:16:45.883724 2792 apiserver.go:52] "Watching apiserver" Mar 7 01:16:45.887887 kubelet[2792]: I0307 01:16:45.887852 2792 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:16:46.041226 systemd[1]: Reloading requested from client PID 3076 ('systemctl') (unit session-9.scope)... Mar 7 01:16:46.041245 systemd[1]: Reloading... Mar 7 01:16:46.164199 zram_generator::config[3125]: No configuration found. Mar 7 01:16:46.273784 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:16:46.366109 systemd[1]: Reloading finished in 324 ms. Mar 7 01:16:46.404933 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:16:46.418602 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:16:46.418859 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:16:46.418915 systemd[1]: kubelet.service: Consumed 1.385s CPU time, 131.4M memory peak, 0B memory swap peak. Mar 7 01:16:46.425751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:16:46.676632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:16:46.683478 (kubelet)[3183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:16:46.724578 kubelet[3183]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:16:46.724578 kubelet[3183]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:16:46.724578 kubelet[3183]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:16:46.725572 kubelet[3183]: I0307 01:16:46.724619 3183 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:16:46.730444 kubelet[3183]: I0307 01:16:46.730408 3183 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:16:46.730444 kubelet[3183]: I0307 01:16:46.730432 3183 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:16:46.730693 kubelet[3183]: I0307 01:16:46.730650 3183 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:16:46.731729 kubelet[3183]: I0307 01:16:46.731703 3183 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:16:46.734221 kubelet[3183]: I0307 01:16:46.733627 3183 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:16:46.739573 kubelet[3183]: E0307 01:16:46.739536 3183 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:16:46.739573 kubelet[3183]: I0307 01:16:46.739564 3183 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 01:16:46.742730 kubelet[3183]: I0307 01:16:46.742694 3183 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 7 01:16:46.742958 kubelet[3183]: I0307 01:16:46.742931 3183 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:16:46.743134 kubelet[3183]: I0307 01:16:46.742957 3183 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-baf9cf72b8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUM
anagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:16:46.743289 kubelet[3183]: I0307 01:16:46.743141 3183 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:16:46.743289 kubelet[3183]: I0307 01:16:46.743168 3183 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 01:16:46.743289 kubelet[3183]: I0307 01:16:46.743223 3183 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:16:46.743411 kubelet[3183]: I0307 01:16:46.743396 3183 kubelet.go:480] "Attempting to sync node with API server" Mar 7 01:16:46.743411 kubelet[3183]: I0307 01:16:46.743410 3183 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:16:46.743488 kubelet[3183]: I0307 01:16:46.743440 3183 kubelet.go:386] "Adding apiserver pod source" Mar 7 01:16:46.743488 kubelet[3183]: I0307 01:16:46.743457 3183 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:16:46.748041 kubelet[3183]: I0307 01:16:46.747302 3183 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:16:46.748041 kubelet[3183]: I0307 01:16:46.747913 3183 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:16:46.753976 kubelet[3183]: I0307 01:16:46.753724 3183 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 01:16:46.754216 kubelet[3183]: I0307 01:16:46.754203 3183 server.go:1289] "Started kubelet" Mar 7 01:16:46.754992 kubelet[3183]: I0307 01:16:46.754772 3183 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Mar 7 01:16:46.755067 kubelet[3183]: I0307 01:16:46.755053 3183 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:16:46.756283 kubelet[3183]: I0307 01:16:46.755110 3183 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:16:46.756283 kubelet[3183]: I0307 01:16:46.756077 3183 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:16:46.758693 kubelet[3183]: I0307 01:16:46.757538 3183 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:16:46.767279 kubelet[3183]: I0307 01:16:46.767258 3183 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:16:46.769111 kubelet[3183]: I0307 01:16:46.769081 3183 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:16:46.769401 kubelet[3183]: E0307 01:16:46.769381 3183 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-baf9cf72b8\" not found" Mar 7 01:16:46.770061 kubelet[3183]: I0307 01:16:46.770035 3183 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 01:16:46.770213 kubelet[3183]: I0307 01:16:46.770199 3183 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:16:46.778524 kubelet[3183]: I0307 01:16:46.778308 3183 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:16:46.778524 kubelet[3183]: I0307 01:16:46.778528 3183 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:16:46.779527 kubelet[3183]: I0307 01:16:46.778782 3183 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:16:46.783458 kubelet[3183]: E0307 01:16:46.781806 3183 
kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:16:46.793067 kubelet[3183]: I0307 01:16:46.793040 3183 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:16:46.794478 kubelet[3183]: I0307 01:16:46.794459 3183 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 7 01:16:46.794597 kubelet[3183]: I0307 01:16:46.794584 3183 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:16:46.794684 kubelet[3183]: I0307 01:16:46.794673 3183 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:16:46.794746 kubelet[3183]: I0307 01:16:46.794738 3183 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 01:16:46.794854 kubelet[3183]: E0307 01:16:46.794836 3183 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:16:46.825829 kubelet[3183]: I0307 01:16:46.825803 3183 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:16:46.825829 kubelet[3183]: I0307 01:16:46.825820 3183 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:16:46.826022 kubelet[3183]: I0307 01:16:46.825844 3183 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:16:46.826022 kubelet[3183]: I0307 01:16:46.826001 3183 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 01:16:46.826097 kubelet[3183]: I0307 01:16:46.826014 3183 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 01:16:46.826097 kubelet[3183]: I0307 01:16:46.826033 3183 policy_none.go:49] "None policy: Start" Mar 7 01:16:46.826097 kubelet[3183]: I0307 01:16:46.826046 3183 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:16:46.826097 kubelet[3183]: I0307 01:16:46.826059 3183 
state_mem.go:35] "Initializing new in-memory state store" Mar 7 01:16:46.826360 kubelet[3183]: I0307 01:16:46.826189 3183 state_mem.go:75] "Updated machine memory state" Mar 7 01:16:46.829823 kubelet[3183]: E0307 01:16:46.829794 3183 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:16:46.829984 kubelet[3183]: I0307 01:16:46.829965 3183 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:16:46.830060 kubelet[3183]: I0307 01:16:46.829984 3183 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:16:46.830688 kubelet[3183]: I0307 01:16:46.830597 3183 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:16:46.831731 kubelet[3183]: E0307 01:16:46.831610 3183 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:16:46.895825 kubelet[3183]: I0307 01:16:46.895684 3183 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:46.898173 kubelet[3183]: I0307 01:16:46.896187 3183 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:46.898173 kubelet[3183]: I0307 01:16:46.895684 3183 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:46.916905 kubelet[3183]: I0307 01:16:46.916868 3183 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:16:46.917049 kubelet[3183]: E0307 01:16:46.916939 3183 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-baf9cf72b8\" already exists" 
pod="kube-system/kube-scheduler-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:46.917814 kubelet[3183]: I0307 01:16:46.917786 3183 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:16:46.917914 kubelet[3183]: E0307 01:16:46.917845 3183 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-baf9cf72b8\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:46.917914 kubelet[3183]: I0307 01:16:46.917786 3183 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:16:46.918012 kubelet[3183]: E0307 01:16:46.917914 3183 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-baf9cf72b8\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:46.933399 kubelet[3183]: I0307 01:16:46.933311 3183 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:46.946212 kubelet[3183]: I0307 01:16:46.945956 3183 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:46.946212 kubelet[3183]: I0307 01:16:46.946029 3183 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:47.071815 kubelet[3183]: I0307 01:16:47.071776 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6763cc85ccbbed1c792b8f2a7cb9f574-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-baf9cf72b8\" (UID: \"6763cc85ccbbed1c792b8f2a7cb9f574\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:47.071953 kubelet[3183]: I0307 01:16:47.071828 3183 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6763cc85ccbbed1c792b8f2a7cb9f574-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-baf9cf72b8\" (UID: \"6763cc85ccbbed1c792b8f2a7cb9f574\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:47.071953 kubelet[3183]: I0307 01:16:47.071854 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6763cc85ccbbed1c792b8f2a7cb9f574-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-baf9cf72b8\" (UID: \"6763cc85ccbbed1c792b8f2a7cb9f574\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:47.071953 kubelet[3183]: I0307 01:16:47.071876 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68ef14e0cb2b42d73557138b4c9a485e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-baf9cf72b8\" (UID: \"68ef14e0cb2b42d73557138b4c9a485e\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:47.071953 kubelet[3183]: I0307 01:16:47.071894 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfe75f25367b872254ffd04f43979bc8-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-baf9cf72b8\" (UID: \"cfe75f25367b872254ffd04f43979bc8\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:47.071953 kubelet[3183]: I0307 01:16:47.071914 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfe75f25367b872254ffd04f43979bc8-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-baf9cf72b8\" (UID: \"cfe75f25367b872254ffd04f43979bc8\") " 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:47.072192 kubelet[3183]: I0307 01:16:47.071939 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfe75f25367b872254ffd04f43979bc8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-baf9cf72b8\" (UID: \"cfe75f25367b872254ffd04f43979bc8\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:47.072192 kubelet[3183]: I0307 01:16:47.071971 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6763cc85ccbbed1c792b8f2a7cb9f574-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-baf9cf72b8\" (UID: \"6763cc85ccbbed1c792b8f2a7cb9f574\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:47.072192 kubelet[3183]: I0307 01:16:47.071992 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6763cc85ccbbed1c792b8f2a7cb9f574-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-baf9cf72b8\" (UID: \"6763cc85ccbbed1c792b8f2a7cb9f574\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:47.750248 kubelet[3183]: I0307 01:16:47.749593 3183 apiserver.go:52] "Watching apiserver" Mar 7 01:16:47.770802 kubelet[3183]: I0307 01:16:47.770766 3183 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:16:47.814854 kubelet[3183]: I0307 01:16:47.814816 3183 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:47.816276 kubelet[3183]: I0307 01:16:47.816248 3183 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:47.832723 kubelet[3183]: I0307 01:16:47.832692 3183 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:16:47.832837 kubelet[3183]: E0307 01:16:47.832753 3183 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-baf9cf72b8\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:47.835171 kubelet[3183]: I0307 01:16:47.832975 3183 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 7 01:16:47.835171 kubelet[3183]: E0307 01:16:47.833032 3183 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-baf9cf72b8\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-baf9cf72b8" Mar 7 01:16:47.876172 kubelet[3183]: I0307 01:16:47.875011 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-baf9cf72b8" podStartSLOduration=3.874893127 podStartE2EDuration="3.874893127s" podCreationTimestamp="2026-03-07 01:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:16:47.859170779 +0000 UTC m=+1.170988378" watchObservedRunningTime="2026-03-07 01:16:47.874893127 +0000 UTC m=+1.186710626" Mar 7 01:16:47.893198 kubelet[3183]: I0307 01:16:47.893116 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-baf9cf72b8" podStartSLOduration=2.893102215 podStartE2EDuration="2.893102215s" podCreationTimestamp="2026-03-07 01:16:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:16:47.875540237 +0000 UTC m=+1.187357836" watchObservedRunningTime="2026-03-07 01:16:47.893102215 +0000 UTC m=+1.204919814" Mar 7 01:16:52.133193 kubelet[3183]: I0307 01:16:52.133044 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-baf9cf72b8" podStartSLOduration=7.133023217 podStartE2EDuration="7.133023217s" podCreationTimestamp="2026-03-07 01:16:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:16:47.893885027 +0000 UTC m=+1.205702626" watchObservedRunningTime="2026-03-07 01:16:52.133023217 +0000 UTC m=+5.444840716" Mar 7 01:16:52.636874 kubelet[3183]: I0307 01:16:52.636840 3183 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:16:52.637289 containerd[1713]: time="2026-03-07T01:16:52.637249373Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 7 01:16:52.637814 kubelet[3183]: I0307 01:16:52.637791 3183 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:16:53.715450 kubelet[3183]: I0307 01:16:53.715030 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/297989ff-6a65-47d3-b77c-f3cadd3611b2-xtables-lock\") pod \"kube-proxy-6z4sj\" (UID: \"297989ff-6a65-47d3-b77c-f3cadd3611b2\") " pod="kube-system/kube-proxy-6z4sj" Mar 7 01:16:53.715450 kubelet[3183]: I0307 01:16:53.715073 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/297989ff-6a65-47d3-b77c-f3cadd3611b2-kube-proxy\") pod \"kube-proxy-6z4sj\" (UID: \"297989ff-6a65-47d3-b77c-f3cadd3611b2\") " pod="kube-system/kube-proxy-6z4sj" Mar 7 01:16:53.715450 kubelet[3183]: I0307 01:16:53.715095 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/297989ff-6a65-47d3-b77c-f3cadd3611b2-lib-modules\") pod \"kube-proxy-6z4sj\" (UID: \"297989ff-6a65-47d3-b77c-f3cadd3611b2\") " pod="kube-system/kube-proxy-6z4sj" Mar 7 01:16:53.715450 kubelet[3183]: I0307 01:16:53.715117 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g9ww\" (UniqueName: \"kubernetes.io/projected/297989ff-6a65-47d3-b77c-f3cadd3611b2-kube-api-access-8g9ww\") pod \"kube-proxy-6z4sj\" (UID: \"297989ff-6a65-47d3-b77c-f3cadd3611b2\") " pod="kube-system/kube-proxy-6z4sj" Mar 7 01:16:53.716538 systemd[1]: Created slice kubepods-besteffort-pod297989ff_6a65_47d3_b77c_f3cadd3611b2.slice - libcontainer container kubepods-besteffort-pod297989ff_6a65_47d3_b77c_f3cadd3611b2.slice. 
Mar 7 01:16:53.790579 systemd[1]: Created slice kubepods-besteffort-pod63d31e53_8dcc_456c_b005_213b3c302220.slice - libcontainer container kubepods-besteffort-pod63d31e53_8dcc_456c_b005_213b3c302220.slice. Mar 7 01:16:53.815855 kubelet[3183]: I0307 01:16:53.815779 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/63d31e53-8dcc-456c-b005-213b3c302220-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-qwh75\" (UID: \"63d31e53-8dcc-456c-b005-213b3c302220\") " pod="tigera-operator/tigera-operator-6bf85f8dd-qwh75" Mar 7 01:16:53.817193 kubelet[3183]: I0307 01:16:53.816069 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhht2\" (UniqueName: \"kubernetes.io/projected/63d31e53-8dcc-456c-b005-213b3c302220-kube-api-access-xhht2\") pod \"tigera-operator-6bf85f8dd-qwh75\" (UID: \"63d31e53-8dcc-456c-b005-213b3c302220\") " pod="tigera-operator/tigera-operator-6bf85f8dd-qwh75" Mar 7 01:16:54.025939 containerd[1713]: time="2026-03-07T01:16:54.025310776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6z4sj,Uid:297989ff-6a65-47d3-b77c-f3cadd3611b2,Namespace:kube-system,Attempt:0,}" Mar 7 01:16:54.073060 containerd[1713]: time="2026-03-07T01:16:54.071397103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:54.073060 containerd[1713]: time="2026-03-07T01:16:54.071502805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:54.073060 containerd[1713]: time="2026-03-07T01:16:54.071544705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:54.073060 containerd[1713]: time="2026-03-07T01:16:54.071670807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:54.095739 containerd[1713]: time="2026-03-07T01:16:54.095688286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-qwh75,Uid:63d31e53-8dcc-456c-b005-213b3c302220,Namespace:tigera-operator,Attempt:0,}" Mar 7 01:16:54.099328 systemd[1]: Started cri-containerd-d5cdd811ca549aabf6817ad79c1330f5e850d82ddb3ff9e4c977bac668aacc58.scope - libcontainer container d5cdd811ca549aabf6817ad79c1330f5e850d82ddb3ff9e4c977bac668aacc58. Mar 7 01:16:54.120830 containerd[1713]: time="2026-03-07T01:16:54.120725781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6z4sj,Uid:297989ff-6a65-47d3-b77c-f3cadd3611b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5cdd811ca549aabf6817ad79c1330f5e850d82ddb3ff9e4c977bac668aacc58\"" Mar 7 01:16:54.130181 containerd[1713]: time="2026-03-07T01:16:54.130114130Z" level=info msg="CreateContainer within sandbox \"d5cdd811ca549aabf6817ad79c1330f5e850d82ddb3ff9e4c977bac668aacc58\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:16:54.146970 containerd[1713]: time="2026-03-07T01:16:54.146767792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:54.146970 containerd[1713]: time="2026-03-07T01:16:54.146805993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:54.146970 containerd[1713]: time="2026-03-07T01:16:54.146820993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:54.146970 containerd[1713]: time="2026-03-07T01:16:54.146878894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:54.164332 systemd[1]: Started cri-containerd-1b92204d229d1f051c8c9584e8b9a13cba894f67022e596ca46ddfcac3582edb.scope - libcontainer container 1b92204d229d1f051c8c9584e8b9a13cba894f67022e596ca46ddfcac3582edb. Mar 7 01:16:54.172019 containerd[1713]: time="2026-03-07T01:16:54.171900589Z" level=info msg="CreateContainer within sandbox \"d5cdd811ca549aabf6817ad79c1330f5e850d82ddb3ff9e4c977bac668aacc58\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4ad5ffb079cf523bce6168c67fee1a53b9e92459e6437342846f6e290642de15\"" Mar 7 01:16:54.175947 containerd[1713]: time="2026-03-07T01:16:54.175201341Z" level=info msg="StartContainer for \"4ad5ffb079cf523bce6168c67fee1a53b9e92459e6437342846f6e290642de15\"" Mar 7 01:16:54.224423 systemd[1]: Started cri-containerd-4ad5ffb079cf523bce6168c67fee1a53b9e92459e6437342846f6e290642de15.scope - libcontainer container 4ad5ffb079cf523bce6168c67fee1a53b9e92459e6437342846f6e290642de15. 
Mar 7 01:16:54.243129 containerd[1713]: time="2026-03-07T01:16:54.242996609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-qwh75,Uid:63d31e53-8dcc-456c-b005-213b3c302220,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1b92204d229d1f051c8c9584e8b9a13cba894f67022e596ca46ddfcac3582edb\"" Mar 7 01:16:54.246427 containerd[1713]: time="2026-03-07T01:16:54.246274261Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 01:16:54.269170 containerd[1713]: time="2026-03-07T01:16:54.269125320Z" level=info msg="StartContainer for \"4ad5ffb079cf523bce6168c67fee1a53b9e92459e6437342846f6e290642de15\" returns successfully" Mar 7 01:16:55.262378 kubelet[3183]: I0307 01:16:55.262083 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6z4sj" podStartSLOduration=2.262064258 podStartE2EDuration="2.262064258s" podCreationTimestamp="2026-03-07 01:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:16:54.850456876 +0000 UTC m=+8.162274375" watchObservedRunningTime="2026-03-07 01:16:55.262064258 +0000 UTC m=+8.573881757" Mar 7 01:16:55.693205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2363350070.mount: Deactivated successfully. 
Mar 7 01:16:56.985682 containerd[1713]: time="2026-03-07T01:16:56.985631202Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:56.988970 containerd[1713]: time="2026-03-07T01:16:56.988907354Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 7 01:16:56.992110 containerd[1713]: time="2026-03-07T01:16:56.992053703Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:56.998411 containerd[1713]: time="2026-03-07T01:16:56.998096799Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:16:57.000141 containerd[1713]: time="2026-03-07T01:16:56.999822626Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.753497165s" Mar 7 01:16:57.000141 containerd[1713]: time="2026-03-07T01:16:56.999857626Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 7 01:16:57.007496 containerd[1713]: time="2026-03-07T01:16:57.007468646Z" level=info msg="CreateContainer within sandbox \"1b92204d229d1f051c8c9584e8b9a13cba894f67022e596ca46ddfcac3582edb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 01:16:57.042328 containerd[1713]: time="2026-03-07T01:16:57.042237294Z" level=info msg="CreateContainer within sandbox 
\"1b92204d229d1f051c8c9584e8b9a13cba894f67022e596ca46ddfcac3582edb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c6d15a2a4b4a00b3fed5d1224ee15ce4ddf6223cd562f7c1302a4c0cb9cda054\"" Mar 7 01:16:57.042899 containerd[1713]: time="2026-03-07T01:16:57.042834303Z" level=info msg="StartContainer for \"c6d15a2a4b4a00b3fed5d1224ee15ce4ddf6223cd562f7c1302a4c0cb9cda054\"" Mar 7 01:16:57.075313 systemd[1]: Started cri-containerd-c6d15a2a4b4a00b3fed5d1224ee15ce4ddf6223cd562f7c1302a4c0cb9cda054.scope - libcontainer container c6d15a2a4b4a00b3fed5d1224ee15ce4ddf6223cd562f7c1302a4c0cb9cda054. Mar 7 01:16:57.103434 containerd[1713]: time="2026-03-07T01:16:57.103387157Z" level=info msg="StartContainer for \"c6d15a2a4b4a00b3fed5d1224ee15ce4ddf6223cd562f7c1302a4c0cb9cda054\" returns successfully" Mar 7 01:17:03.509993 sudo[2279]: pam_unix(sudo:session): session closed for user root Mar 7 01:17:03.624720 sshd[2276]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:03.631872 systemd[1]: sshd@6-10.200.8.14:22-10.200.16.10:32906.service: Deactivated successfully. Mar 7 01:17:03.636745 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 01:17:03.637104 systemd[1]: session-9.scope: Consumed 5.341s CPU time, 158.4M memory peak, 0B memory swap peak. Mar 7 01:17:03.640345 systemd-logind[1683]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:17:03.643423 systemd-logind[1683]: Removed session 9. 
Mar 7 01:17:07.094933 kubelet[3183]: I0307 01:17:07.094099 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-qwh75" podStartSLOduration=11.337924749999999 podStartE2EDuration="14.094076056s" podCreationTimestamp="2026-03-07 01:16:53 +0000 UTC" firstStartedPulling="2026-03-07 01:16:54.244638735 +0000 UTC m=+7.556456334" lastFinishedPulling="2026-03-07 01:16:57.000790141 +0000 UTC m=+10.312607640" observedRunningTime="2026-03-07 01:16:57.851490138 +0000 UTC m=+11.163307637" watchObservedRunningTime="2026-03-07 01:17:07.094076056 +0000 UTC m=+20.405893655" Mar 7 01:17:07.108453 kubelet[3183]: I0307 01:17:07.108378 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2z5m\" (UniqueName: \"kubernetes.io/projected/baaac6d9-8daf-47ff-8839-badf1443d736-kube-api-access-v2z5m\") pod \"calico-typha-5848c864d8-pkblm\" (UID: \"baaac6d9-8daf-47ff-8839-badf1443d736\") " pod="calico-system/calico-typha-5848c864d8-pkblm" Mar 7 01:17:07.108453 kubelet[3183]: I0307 01:17:07.108423 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/baaac6d9-8daf-47ff-8839-badf1443d736-tigera-ca-bundle\") pod \"calico-typha-5848c864d8-pkblm\" (UID: \"baaac6d9-8daf-47ff-8839-badf1443d736\") " pod="calico-system/calico-typha-5848c864d8-pkblm" Mar 7 01:17:07.108453 kubelet[3183]: I0307 01:17:07.108450 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/baaac6d9-8daf-47ff-8839-badf1443d736-typha-certs\") pod \"calico-typha-5848c864d8-pkblm\" (UID: \"baaac6d9-8daf-47ff-8839-badf1443d736\") " pod="calico-system/calico-typha-5848c864d8-pkblm" Mar 7 01:17:07.117199 systemd[1]: Created slice kubepods-besteffort-podbaaac6d9_8daf_47ff_8839_badf1443d736.slice - libcontainer 
container kubepods-besteffort-podbaaac6d9_8daf_47ff_8839_badf1443d736.slice. Mar 7 01:17:07.246344 systemd[1]: Created slice kubepods-besteffort-pod970456b1_6e96_49c4_a729_644ccc2dd819.slice - libcontainer container kubepods-besteffort-pod970456b1_6e96_49c4_a729_644ccc2dd819.slice. Mar 7 01:17:07.309038 kubelet[3183]: I0307 01:17:07.308996 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/970456b1-6e96-49c4-a729-644ccc2dd819-lib-modules\") pod \"calico-node-rh4vc\" (UID: \"970456b1-6e96-49c4-a729-644ccc2dd819\") " pod="calico-system/calico-node-rh4vc" Mar 7 01:17:07.309038 kubelet[3183]: I0307 01:17:07.309040 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/970456b1-6e96-49c4-a729-644ccc2dd819-sys-fs\") pod \"calico-node-rh4vc\" (UID: \"970456b1-6e96-49c4-a729-644ccc2dd819\") " pod="calico-system/calico-node-rh4vc" Mar 7 01:17:07.309038 kubelet[3183]: I0307 01:17:07.309063 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/970456b1-6e96-49c4-a729-644ccc2dd819-bpffs\") pod \"calico-node-rh4vc\" (UID: \"970456b1-6e96-49c4-a729-644ccc2dd819\") " pod="calico-system/calico-node-rh4vc" Mar 7 01:17:07.309038 kubelet[3183]: I0307 01:17:07.309083 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/970456b1-6e96-49c4-a729-644ccc2dd819-node-certs\") pod \"calico-node-rh4vc\" (UID: \"970456b1-6e96-49c4-a729-644ccc2dd819\") " pod="calico-system/calico-node-rh4vc" Mar 7 01:17:07.309433 kubelet[3183]: I0307 01:17:07.309102 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/970456b1-6e96-49c4-a729-644ccc2dd819-policysync\") pod \"calico-node-rh4vc\" (UID: \"970456b1-6e96-49c4-a729-644ccc2dd819\") " pod="calico-system/calico-node-rh4vc" Mar 7 01:17:07.309433 kubelet[3183]: I0307 01:17:07.309122 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/970456b1-6e96-49c4-a729-644ccc2dd819-cni-net-dir\") pod \"calico-node-rh4vc\" (UID: \"970456b1-6e96-49c4-a729-644ccc2dd819\") " pod="calico-system/calico-node-rh4vc" Mar 7 01:17:07.309433 kubelet[3183]: I0307 01:17:07.309140 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/970456b1-6e96-49c4-a729-644ccc2dd819-var-lib-calico\") pod \"calico-node-rh4vc\" (UID: \"970456b1-6e96-49c4-a729-644ccc2dd819\") " pod="calico-system/calico-node-rh4vc" Mar 7 01:17:07.309433 kubelet[3183]: I0307 01:17:07.309194 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/970456b1-6e96-49c4-a729-644ccc2dd819-xtables-lock\") pod \"calico-node-rh4vc\" (UID: \"970456b1-6e96-49c4-a729-644ccc2dd819\") " pod="calico-system/calico-node-rh4vc" Mar 7 01:17:07.309433 kubelet[3183]: I0307 01:17:07.309214 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/970456b1-6e96-49c4-a729-644ccc2dd819-nodeproc\") pod \"calico-node-rh4vc\" (UID: \"970456b1-6e96-49c4-a729-644ccc2dd819\") " pod="calico-system/calico-node-rh4vc" Mar 7 01:17:07.309715 kubelet[3183]: I0307 01:17:07.309237 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/970456b1-6e96-49c4-a729-644ccc2dd819-var-run-calico\") pod 
\"calico-node-rh4vc\" (UID: \"970456b1-6e96-49c4-a729-644ccc2dd819\") " pod="calico-system/calico-node-rh4vc" Mar 7 01:17:07.309715 kubelet[3183]: I0307 01:17:07.309257 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/970456b1-6e96-49c4-a729-644ccc2dd819-cni-log-dir\") pod \"calico-node-rh4vc\" (UID: \"970456b1-6e96-49c4-a729-644ccc2dd819\") " pod="calico-system/calico-node-rh4vc" Mar 7 01:17:07.309715 kubelet[3183]: I0307 01:17:07.309278 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/970456b1-6e96-49c4-a729-644ccc2dd819-tigera-ca-bundle\") pod \"calico-node-rh4vc\" (UID: \"970456b1-6e96-49c4-a729-644ccc2dd819\") " pod="calico-system/calico-node-rh4vc" Mar 7 01:17:07.309715 kubelet[3183]: I0307 01:17:07.309308 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/970456b1-6e96-49c4-a729-644ccc2dd819-cni-bin-dir\") pod \"calico-node-rh4vc\" (UID: \"970456b1-6e96-49c4-a729-644ccc2dd819\") " pod="calico-system/calico-node-rh4vc" Mar 7 01:17:07.309715 kubelet[3183]: I0307 01:17:07.309330 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/970456b1-6e96-49c4-a729-644ccc2dd819-flexvol-driver-host\") pod \"calico-node-rh4vc\" (UID: \"970456b1-6e96-49c4-a729-644ccc2dd819\") " pod="calico-system/calico-node-rh4vc" Mar 7 01:17:07.309921 kubelet[3183]: I0307 01:17:07.309355 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78p2p\" (UniqueName: \"kubernetes.io/projected/970456b1-6e96-49c4-a729-644ccc2dd819-kube-api-access-78p2p\") pod \"calico-node-rh4vc\" (UID: 
\"970456b1-6e96-49c4-a729-644ccc2dd819\") " pod="calico-system/calico-node-rh4vc" Mar 7 01:17:07.319242 kubelet[3183]: E0307 01:17:07.318865 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms76b" podUID="422fccbd-8b44-46d1-b23a-b122dabbbb7c" Mar 7 01:17:07.411029 kubelet[3183]: I0307 01:17:07.410530 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89thc\" (UniqueName: \"kubernetes.io/projected/422fccbd-8b44-46d1-b23a-b122dabbbb7c-kube-api-access-89thc\") pod \"csi-node-driver-ms76b\" (UID: \"422fccbd-8b44-46d1-b23a-b122dabbbb7c\") " pod="calico-system/csi-node-driver-ms76b" Mar 7 01:17:07.411029 kubelet[3183]: I0307 01:17:07.410657 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/422fccbd-8b44-46d1-b23a-b122dabbbb7c-socket-dir\") pod \"csi-node-driver-ms76b\" (UID: \"422fccbd-8b44-46d1-b23a-b122dabbbb7c\") " pod="calico-system/csi-node-driver-ms76b" Mar 7 01:17:07.411029 kubelet[3183]: I0307 01:17:07.410746 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/422fccbd-8b44-46d1-b23a-b122dabbbb7c-kubelet-dir\") pod \"csi-node-driver-ms76b\" (UID: \"422fccbd-8b44-46d1-b23a-b122dabbbb7c\") " pod="calico-system/csi-node-driver-ms76b" Mar 7 01:17:07.411029 kubelet[3183]: I0307 01:17:07.410766 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/422fccbd-8b44-46d1-b23a-b122dabbbb7c-varrun\") pod \"csi-node-driver-ms76b\" (UID: \"422fccbd-8b44-46d1-b23a-b122dabbbb7c\") " 
pod="calico-system/csi-node-driver-ms76b" Mar 7 01:17:07.411029 kubelet[3183]: I0307 01:17:07.410806 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/422fccbd-8b44-46d1-b23a-b122dabbbb7c-registration-dir\") pod \"csi-node-driver-ms76b\" (UID: \"422fccbd-8b44-46d1-b23a-b122dabbbb7c\") " pod="calico-system/csi-node-driver-ms76b" Mar 7 01:17:07.419820 kubelet[3183]: E0307 01:17:07.419746 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:07.419820 kubelet[3183]: W0307 01:17:07.419777 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:07.419820 kubelet[3183]: E0307 01:17:07.419801 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:07.429747 containerd[1713]: time="2026-03-07T01:17:07.429606514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5848c864d8-pkblm,Uid:baaac6d9-8daf-47ff-8839-badf1443d736,Namespace:calico-system,Attempt:0,}" Mar 7 01:17:07.435267 kubelet[3183]: E0307 01:17:07.435238 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:07.435437 kubelet[3183]: W0307 01:17:07.435420 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:07.435572 kubelet[3183]: E0307 01:17:07.435559 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:07.477288 containerd[1713]: time="2026-03-07T01:17:07.477091944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:07.477469 containerd[1713]: time="2026-03-07T01:17:07.477354648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:07.477469 containerd[1713]: time="2026-03-07T01:17:07.477401849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:07.477654 containerd[1713]: time="2026-03-07T01:17:07.477580852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:07.500330 systemd[1]: Started cri-containerd-227a99e6aeb5c3b239e0d35a0bfb73ba3539efd2d8b81861e72da9f4a1c85c3f.scope - libcontainer container 227a99e6aeb5c3b239e0d35a0bfb73ba3539efd2d8b81861e72da9f4a1c85c3f. Mar 7 01:17:07.513275 kubelet[3183]: E0307 01:17:07.513236 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:07.513275 kubelet[3183]: W0307 01:17:07.513272 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:07.513457 kubelet[3183]: E0307 01:17:07.513295 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:07.513620 kubelet[3183]: E0307 01:17:07.513604 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:07.513620 kubelet[3183]: W0307 01:17:07.513620 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:07.513740 kubelet[3183]: E0307 01:17:07.513648 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:07.514169 kubelet[3183]: E0307 01:17:07.513997 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:07.514169 kubelet[3183]: W0307 01:17:07.514012 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:07.514169 kubelet[3183]: E0307 01:17:07.514026 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:07.514726 kubelet[3183]: E0307 01:17:07.514509 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:07.514726 kubelet[3183]: W0307 01:17:07.514533 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:07.514726 kubelet[3183]: E0307 01:17:07.514548 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:07.515366 kubelet[3183]: E0307 01:17:07.515044 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:07.515366 kubelet[3183]: W0307 01:17:07.515058 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:07.515366 kubelet[3183]: E0307 01:17:07.515081 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:07.515866 kubelet[3183]: E0307 01:17:07.515604 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:07.515866 kubelet[3183]: W0307 01:17:07.515634 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:07.515866 kubelet[3183]: E0307 01:17:07.515650 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:07.516363 kubelet[3183]: E0307 01:17:07.516171 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:07.516363 kubelet[3183]: W0307 01:17:07.516185 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:07.516363 kubelet[3183]: E0307 01:17:07.516199 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 7 01:17:07.516844 kubelet[3183]: E0307 01:17:07.516698 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.516844 kubelet[3183]: W0307 01:17:07.516711 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.516844 kubelet[3183]: E0307 01:17:07.516725 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.517733 kubelet[3183]: E0307 01:17:07.517718 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.518012 kubelet[3183]: W0307 01:17:07.517914 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.518012 kubelet[3183]: E0307 01:17:07.517938 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.518731 kubelet[3183]: E0307 01:17:07.518710 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.518731 kubelet[3183]: W0307 01:17:07.518730 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.518918 kubelet[3183]: E0307 01:17:07.518766 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.519316 kubelet[3183]: E0307 01:17:07.519297 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.519416 kubelet[3183]: W0307 01:17:07.519318 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.519494 kubelet[3183]: E0307 01:17:07.519425 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.520432 kubelet[3183]: E0307 01:17:07.520386 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.520432 kubelet[3183]: W0307 01:17:07.520403 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.520432 kubelet[3183]: E0307 01:17:07.520424 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.521240 kubelet[3183]: E0307 01:17:07.521223 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.521240 kubelet[3183]: W0307 01:17:07.521239 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.521364 kubelet[3183]: E0307 01:17:07.521259 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.521855 kubelet[3183]: E0307 01:17:07.521834 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.521855 kubelet[3183]: W0307 01:17:07.521854 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.521980 kubelet[3183]: E0307 01:17:07.521868 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.523170 kubelet[3183]: E0307 01:17:07.522168 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.523170 kubelet[3183]: W0307 01:17:07.522182 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.523170 kubelet[3183]: E0307 01:17:07.522196 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.523170 kubelet[3183]: E0307 01:17:07.522463 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.523170 kubelet[3183]: W0307 01:17:07.522473 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.523170 kubelet[3183]: E0307 01:17:07.522485 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.523170 kubelet[3183]: E0307 01:17:07.522689 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.523170 kubelet[3183]: W0307 01:17:07.522698 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.523170 kubelet[3183]: E0307 01:17:07.522709 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.523170 kubelet[3183]: E0307 01:17:07.522983 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.523638 kubelet[3183]: W0307 01:17:07.522992 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.523638 kubelet[3183]: E0307 01:17:07.523004 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.523638 kubelet[3183]: E0307 01:17:07.523253 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.523638 kubelet[3183]: W0307 01:17:07.523263 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.523638 kubelet[3183]: E0307 01:17:07.523277 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.523638 kubelet[3183]: E0307 01:17:07.523556 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.523638 kubelet[3183]: W0307 01:17:07.523567 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.523638 kubelet[3183]: E0307 01:17:07.523579 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.524019 kubelet[3183]: E0307 01:17:07.523808 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.524019 kubelet[3183]: W0307 01:17:07.523818 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.524019 kubelet[3183]: E0307 01:17:07.523829 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.524139 kubelet[3183]: E0307 01:17:07.524067 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.524139 kubelet[3183]: W0307 01:17:07.524077 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.524139 kubelet[3183]: E0307 01:17:07.524089 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.524359 kubelet[3183]: E0307 01:17:07.524340 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.524429 kubelet[3183]: W0307 01:17:07.524359 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.524429 kubelet[3183]: E0307 01:17:07.524373 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.525400 kubelet[3183]: E0307 01:17:07.525370 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.525400 kubelet[3183]: W0307 01:17:07.525391 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.525522 kubelet[3183]: E0307 01:17:07.525406 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.526418 kubelet[3183]: E0307 01:17:07.526396 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.526418 kubelet[3183]: W0307 01:17:07.526417 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.526530 kubelet[3183]: E0307 01:17:07.526431 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.536515 kubelet[3183]: E0307 01:17:07.536462 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:17:07.536515 kubelet[3183]: W0307 01:17:07.536477 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:17:07.536515 kubelet[3183]: E0307 01:17:07.536490 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:17:07.549955 containerd[1713]: time="2026-03-07T01:17:07.549528358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rh4vc,Uid:970456b1-6e96-49c4-a729-644ccc2dd819,Namespace:calico-system,Attempt:0,}"
Mar 7 01:17:07.552716 containerd[1713]: time="2026-03-07T01:17:07.552681806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5848c864d8-pkblm,Uid:baaac6d9-8daf-47ff-8839-badf1443d736,Namespace:calico-system,Attempt:0,} returns sandbox id \"227a99e6aeb5c3b239e0d35a0bfb73ba3539efd2d8b81861e72da9f4a1c85c3f\""
Mar 7 01:17:07.554949 containerd[1713]: time="2026-03-07T01:17:07.554919941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 7 01:17:07.615193 containerd[1713]: time="2026-03-07T01:17:07.614708960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:17:07.615193 containerd[1713]: time="2026-03-07T01:17:07.614924563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:17:07.615193 containerd[1713]: time="2026-03-07T01:17:07.614975664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:17:07.615193 containerd[1713]: time="2026-03-07T01:17:07.615095166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:17:07.635482 systemd[1]: Started cri-containerd-7e02029808b215277ecffd66c85a02cd9631e3cd9fdb04a16a214f02883f4414.scope - libcontainer container 7e02029808b215277ecffd66c85a02cd9631e3cd9fdb04a16a214f02883f4414.
Mar 7 01:17:07.656699 containerd[1713]: time="2026-03-07T01:17:07.656601204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rh4vc,Uid:970456b1-6e96-49c4-a729-644ccc2dd819,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e02029808b215277ecffd66c85a02cd9631e3cd9fdb04a16a214f02883f4414\""
Mar 7 01:17:08.768673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1280312775.mount: Deactivated successfully.
Mar 7 01:17:08.797864 kubelet[3183]: E0307 01:17:08.796079 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms76b" podUID="422fccbd-8b44-46d1-b23a-b122dabbbb7c"
Mar 7 01:17:09.635063 containerd[1713]: time="2026-03-07T01:17:09.635000318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:09.637826 containerd[1713]: time="2026-03-07T01:17:09.637762060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Mar 7 01:17:09.640799 containerd[1713]: time="2026-03-07T01:17:09.640736606Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:09.644633 containerd[1713]: time="2026-03-07T01:17:09.644528064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:17:09.645665 containerd[1713]: time="2026-03-07T01:17:09.645234475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.090158832s"
Mar 7 01:17:09.645665 containerd[1713]: time="2026-03-07T01:17:09.645271876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Mar 7 01:17:09.647584 containerd[1713]: time="2026-03-07T01:17:09.647016702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 7 01:17:09.670540 containerd[1713]: time="2026-03-07T01:17:09.670499463Z" level=info msg="CreateContainer within sandbox \"227a99e6aeb5c3b239e0d35a0bfb73ba3539efd2d8b81861e72da9f4a1c85c3f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 7 01:17:09.700191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524350763.mount: Deactivated successfully.
Mar 7 01:17:09.713794 containerd[1713]: time="2026-03-07T01:17:09.713750428Z" level=info msg="CreateContainer within sandbox \"227a99e6aeb5c3b239e0d35a0bfb73ba3539efd2d8b81861e72da9f4a1c85c3f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b884aae4303b8ba1c4f020bc5c0d94058befcf8d8a446c3aaedc9a355a23a448\"" Mar 7 01:17:09.715657 containerd[1713]: time="2026-03-07T01:17:09.714373138Z" level=info msg="StartContainer for \"b884aae4303b8ba1c4f020bc5c0d94058befcf8d8a446c3aaedc9a355a23a448\"" Mar 7 01:17:09.747370 systemd[1]: Started cri-containerd-b884aae4303b8ba1c4f020bc5c0d94058befcf8d8a446c3aaedc9a355a23a448.scope - libcontainer container b884aae4303b8ba1c4f020bc5c0d94058befcf8d8a446c3aaedc9a355a23a448. Mar 7 01:17:09.799431 containerd[1713]: time="2026-03-07T01:17:09.799375745Z" level=info msg="StartContainer for \"b884aae4303b8ba1c4f020bc5c0d94058befcf8d8a446c3aaedc9a355a23a448\" returns successfully" Mar 7 01:17:09.885526 kubelet[3183]: I0307 01:17:09.885361 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5848c864d8-pkblm" podStartSLOduration=0.793341006 podStartE2EDuration="2.885344766s" podCreationTimestamp="2026-03-07 01:17:07 +0000 UTC" firstStartedPulling="2026-03-07 01:17:07.554257331 +0000 UTC m=+20.866074830" lastFinishedPulling="2026-03-07 01:17:09.646261091 +0000 UTC m=+22.958078590" observedRunningTime="2026-03-07 01:17:09.884501753 +0000 UTC m=+23.196319252" watchObservedRunningTime="2026-03-07 01:17:09.885344766 +0000 UTC m=+23.197162265" Mar 7 01:17:09.919128 kubelet[3183]: E0307 01:17:09.919061 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.919128 kubelet[3183]: W0307 01:17:09.919099 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, 
output: "" Mar 7 01:17:09.919426 kubelet[3183]: E0307 01:17:09.919141 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:09.919880 kubelet[3183]: E0307 01:17:09.919725 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.919880 kubelet[3183]: W0307 01:17:09.919743 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.919880 kubelet[3183]: E0307 01:17:09.919766 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:09.920105 kubelet[3183]: E0307 01:17:09.920088 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.920179 kubelet[3183]: W0307 01:17:09.920105 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.920179 kubelet[3183]: E0307 01:17:09.920120 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:09.921031 kubelet[3183]: E0307 01:17:09.921006 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.921031 kubelet[3183]: W0307 01:17:09.921027 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.921170 kubelet[3183]: E0307 01:17:09.921041 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:09.921609 kubelet[3183]: E0307 01:17:09.921356 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.921609 kubelet[3183]: W0307 01:17:09.921389 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.921609 kubelet[3183]: E0307 01:17:09.921404 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:09.921906 kubelet[3183]: E0307 01:17:09.921706 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.921906 kubelet[3183]: W0307 01:17:09.921718 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.921906 kubelet[3183]: E0307 01:17:09.921731 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:09.922041 kubelet[3183]: E0307 01:17:09.921989 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.922041 kubelet[3183]: W0307 01:17:09.922002 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.922041 kubelet[3183]: E0307 01:17:09.922015 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:09.923237 kubelet[3183]: E0307 01:17:09.922274 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.923237 kubelet[3183]: W0307 01:17:09.922286 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.923237 kubelet[3183]: E0307 01:17:09.922298 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:09.923237 kubelet[3183]: E0307 01:17:09.922577 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.923237 kubelet[3183]: W0307 01:17:09.922589 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.923237 kubelet[3183]: E0307 01:17:09.922601 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:09.923237 kubelet[3183]: E0307 01:17:09.922840 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.923237 kubelet[3183]: W0307 01:17:09.922851 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.923237 kubelet[3183]: E0307 01:17:09.922863 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:09.923237 kubelet[3183]: E0307 01:17:09.923127 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.923652 kubelet[3183]: W0307 01:17:09.923139 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.923652 kubelet[3183]: E0307 01:17:09.923259 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:09.923652 kubelet[3183]: E0307 01:17:09.923488 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.923652 kubelet[3183]: W0307 01:17:09.923498 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.923652 kubelet[3183]: E0307 01:17:09.923510 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:09.923875 kubelet[3183]: E0307 01:17:09.923740 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.923875 kubelet[3183]: W0307 01:17:09.923750 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.926449 kubelet[3183]: E0307 01:17:09.923762 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:09.926449 kubelet[3183]: E0307 01:17:09.925004 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.926449 kubelet[3183]: W0307 01:17:09.925041 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.926449 kubelet[3183]: E0307 01:17:09.925056 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:09.926449 kubelet[3183]: E0307 01:17:09.925302 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.926449 kubelet[3183]: W0307 01:17:09.925314 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.926449 kubelet[3183]: E0307 01:17:09.925326 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:09.934768 kubelet[3183]: E0307 01:17:09.934743 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.934768 kubelet[3183]: W0307 01:17:09.934765 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.934908 kubelet[3183]: E0307 01:17:09.934782 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:09.936169 kubelet[3183]: E0307 01:17:09.936036 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.936169 kubelet[3183]: W0307 01:17:09.936055 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.936169 kubelet[3183]: E0307 01:17:09.936070 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:09.937411 kubelet[3183]: E0307 01:17:09.936420 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.937411 kubelet[3183]: W0307 01:17:09.936432 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.937411 kubelet[3183]: E0307 01:17:09.936446 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:09.938934 kubelet[3183]: E0307 01:17:09.938396 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.938934 kubelet[3183]: W0307 01:17:09.938412 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.938934 kubelet[3183]: E0307 01:17:09.938425 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:09.938934 kubelet[3183]: E0307 01:17:09.938745 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.938934 kubelet[3183]: W0307 01:17:09.938757 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.938934 kubelet[3183]: E0307 01:17:09.938770 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:09.939261 kubelet[3183]: E0307 01:17:09.939032 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:09.939261 kubelet[3183]: W0307 01:17:09.939047 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:09.939261 kubelet[3183]: E0307 01:17:09.939060 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:10.795994 kubelet[3183]: E0307 01:17:10.795482 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms76b" podUID="422fccbd-8b44-46d1-b23a-b122dabbbb7c" Mar 7 01:17:10.866265 kubelet[3183]: I0307 01:17:10.866230 3183 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:17:10.933193 kubelet[3183]: E0307 01:17:10.933051 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:10.933193 kubelet[3183]: W0307 01:17:10.933079 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:10.933193 kubelet[3183]: E0307 01:17:10.933104 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:10.934390 kubelet[3183]: E0307 01:17:10.933831 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:10.934390 kubelet[3183]: W0307 01:17:10.933851 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:10.934390 kubelet[3183]: E0307 01:17:10.933872 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:17:10.956300 kubelet[3183]: E0307 01:17:10.956275 3183 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:17:10.956300 kubelet[3183]: W0307 01:17:10.956293 3183 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:17:10.956419 kubelet[3183]: E0307 01:17:10.956306 3183 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:17:11.018256 containerd[1713]: time="2026-03-07T01:17:11.017271807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:11.020403 containerd[1713]: time="2026-03-07T01:17:11.020332255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 7 01:17:11.023930 containerd[1713]: time="2026-03-07T01:17:11.023613306Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:11.028753 containerd[1713]: time="2026-03-07T01:17:11.028656184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:11.029797 containerd[1713]: time="2026-03-07T01:17:11.029379295Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.382326292s" Mar 7 01:17:11.029797 containerd[1713]: time="2026-03-07T01:17:11.029419796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 7 01:17:11.037030 containerd[1713]: time="2026-03-07T01:17:11.036996314Z" level=info msg="CreateContainer within sandbox \"7e02029808b215277ecffd66c85a02cd9631e3cd9fdb04a16a214f02883f4414\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 01:17:11.075296 containerd[1713]: time="2026-03-07T01:17:11.075179107Z" level=info msg="CreateContainer within sandbox \"7e02029808b215277ecffd66c85a02cd9631e3cd9fdb04a16a214f02883f4414\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a21a27ae793e458e29e2b66e39a7ced4aaa2b2a92624e2566e8b5612497695e5\"" Mar 7 01:17:11.078255 containerd[1713]: time="2026-03-07T01:17:11.076407826Z" level=info msg="StartContainer for \"a21a27ae793e458e29e2b66e39a7ced4aaa2b2a92624e2566e8b5612497695e5\"" Mar 7 01:17:11.121912 systemd[1]: Started cri-containerd-a21a27ae793e458e29e2b66e39a7ced4aaa2b2a92624e2566e8b5612497695e5.scope - libcontainer container a21a27ae793e458e29e2b66e39a7ced4aaa2b2a92624e2566e8b5612497695e5. Mar 7 01:17:11.157632 containerd[1713]: time="2026-03-07T01:17:11.157588288Z" level=info msg="StartContainer for \"a21a27ae793e458e29e2b66e39a7ced4aaa2b2a92624e2566e8b5612497695e5\" returns successfully" Mar 7 01:17:11.166063 systemd[1]: cri-containerd-a21a27ae793e458e29e2b66e39a7ced4aaa2b2a92624e2566e8b5612497695e5.scope: Deactivated successfully. 
Mar 7 01:17:11.653925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a21a27ae793e458e29e2b66e39a7ced4aaa2b2a92624e2566e8b5612497695e5-rootfs.mount: Deactivated successfully. Mar 7 01:17:12.797106 kubelet[3183]: E0307 01:17:12.795817 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms76b" podUID="422fccbd-8b44-46d1-b23a-b122dabbbb7c" Mar 7 01:17:13.094891 containerd[1713]: time="2026-03-07T01:17:13.094736701Z" level=info msg="shim disconnected" id=a21a27ae793e458e29e2b66e39a7ced4aaa2b2a92624e2566e8b5612497695e5 namespace=k8s.io Mar 7 01:17:13.094891 containerd[1713]: time="2026-03-07T01:17:13.094795502Z" level=warning msg="cleaning up after shim disconnected" id=a21a27ae793e458e29e2b66e39a7ced4aaa2b2a92624e2566e8b5612497695e5 namespace=k8s.io Mar 7 01:17:13.094891 containerd[1713]: time="2026-03-07T01:17:13.094805402Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:17:13.875026 containerd[1713]: time="2026-03-07T01:17:13.874985330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 01:17:14.796108 kubelet[3183]: E0307 01:17:14.796047 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms76b" podUID="422fccbd-8b44-46d1-b23a-b122dabbbb7c" Mar 7 01:17:16.796879 kubelet[3183]: E0307 01:17:16.796817 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms76b" 
podUID="422fccbd-8b44-46d1-b23a-b122dabbbb7c" Mar 7 01:17:18.800687 kubelet[3183]: E0307 01:17:18.799566 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms76b" podUID="422fccbd-8b44-46d1-b23a-b122dabbbb7c" Mar 7 01:17:20.797482 kubelet[3183]: E0307 01:17:20.797439 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms76b" podUID="422fccbd-8b44-46d1-b23a-b122dabbbb7c" Mar 7 01:17:21.964591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080911229.mount: Deactivated successfully. Mar 7 01:17:22.003508 containerd[1713]: time="2026-03-07T01:17:22.003451953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:22.005979 containerd[1713]: time="2026-03-07T01:17:22.005923585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 7 01:17:22.009283 containerd[1713]: time="2026-03-07T01:17:22.009227627Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:22.014665 containerd[1713]: time="2026-03-07T01:17:22.014611395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:22.015520 containerd[1713]: time="2026-03-07T01:17:22.015264803Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 8.140228272s" Mar 7 01:17:22.015520 containerd[1713]: time="2026-03-07T01:17:22.015308504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 7 01:17:22.023031 containerd[1713]: time="2026-03-07T01:17:22.022871900Z" level=info msg="CreateContainer within sandbox \"7e02029808b215277ecffd66c85a02cd9631e3cd9fdb04a16a214f02883f4414\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 01:17:22.058510 kubelet[3183]: I0307 01:17:22.058473 3183 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:17:22.062898 containerd[1713]: time="2026-03-07T01:17:22.062732005Z" level=info msg="CreateContainer within sandbox \"7e02029808b215277ecffd66c85a02cd9631e3cd9fdb04a16a214f02883f4414\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"01fc02c037b0c9da718379efaa253c7837207693a7cf703c9d13fb5761cda2dd\"" Mar 7 01:17:22.063454 containerd[1713]: time="2026-03-07T01:17:22.063419614Z" level=info msg="StartContainer for \"01fc02c037b0c9da718379efaa253c7837207693a7cf703c9d13fb5761cda2dd\"" Mar 7 01:17:22.115340 systemd[1]: Started cri-containerd-01fc02c037b0c9da718379efaa253c7837207693a7cf703c9d13fb5761cda2dd.scope - libcontainer container 01fc02c037b0c9da718379efaa253c7837207693a7cf703c9d13fb5761cda2dd. 
Mar 7 01:17:22.156900 containerd[1713]: time="2026-03-07T01:17:22.156852399Z" level=info msg="StartContainer for \"01fc02c037b0c9da718379efaa253c7837207693a7cf703c9d13fb5761cda2dd\" returns successfully" Mar 7 01:17:22.198239 systemd[1]: cri-containerd-01fc02c037b0c9da718379efaa253c7837207693a7cf703c9d13fb5761cda2dd.scope: Deactivated successfully. Mar 7 01:17:22.797574 kubelet[3183]: E0307 01:17:22.797533 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms76b" podUID="422fccbd-8b44-46d1-b23a-b122dabbbb7c" Mar 7 01:17:22.962624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01fc02c037b0c9da718379efaa253c7837207693a7cf703c9d13fb5761cda2dd-rootfs.mount: Deactivated successfully. Mar 7 01:17:24.795321 kubelet[3183]: E0307 01:17:24.795273 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms76b" podUID="422fccbd-8b44-46d1-b23a-b122dabbbb7c" Mar 7 01:17:25.433196 containerd[1713]: time="2026-03-07T01:17:25.432950501Z" level=info msg="shim disconnected" id=01fc02c037b0c9da718379efaa253c7837207693a7cf703c9d13fb5761cda2dd namespace=k8s.io Mar 7 01:17:25.433196 containerd[1713]: time="2026-03-07T01:17:25.433009902Z" level=warning msg="cleaning up after shim disconnected" id=01fc02c037b0c9da718379efaa253c7837207693a7cf703c9d13fb5761cda2dd namespace=k8s.io Mar 7 01:17:25.433196 containerd[1713]: time="2026-03-07T01:17:25.433023202Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:17:25.910853 containerd[1713]: time="2026-03-07T01:17:25.910813126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" 
Mar 7 01:17:26.795871 kubelet[3183]: E0307 01:17:26.795819 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms76b" podUID="422fccbd-8b44-46d1-b23a-b122dabbbb7c" Mar 7 01:17:28.799856 kubelet[3183]: E0307 01:17:28.799805 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms76b" podUID="422fccbd-8b44-46d1-b23a-b122dabbbb7c" Mar 7 01:17:28.998890 containerd[1713]: time="2026-03-07T01:17:28.998816023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:29.003557 containerd[1713]: time="2026-03-07T01:17:29.003487395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 7 01:17:29.006439 containerd[1713]: time="2026-03-07T01:17:29.006379839Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:29.011728 containerd[1713]: time="2026-03-07T01:17:29.011581819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:29.012998 containerd[1713]: time="2026-03-07T01:17:29.012868739Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.102004611s" Mar 7 01:17:29.012998 containerd[1713]: time="2026-03-07T01:17:29.012907639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 7 01:17:29.020976 containerd[1713]: time="2026-03-07T01:17:29.020948562Z" level=info msg="CreateContainer within sandbox \"7e02029808b215277ecffd66c85a02cd9631e3cd9fdb04a16a214f02883f4414\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 01:17:29.060156 containerd[1713]: time="2026-03-07T01:17:29.060039262Z" level=info msg="CreateContainer within sandbox \"7e02029808b215277ecffd66c85a02cd9631e3cd9fdb04a16a214f02883f4414\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"14a9596bcaff38fcec8711fd9f9943bd414502dbfed22f6d6b1c7c54cf8d1c2b\"" Mar 7 01:17:29.060838 containerd[1713]: time="2026-03-07T01:17:29.060805673Z" level=info msg="StartContainer for \"14a9596bcaff38fcec8711fd9f9943bd414502dbfed22f6d6b1c7c54cf8d1c2b\"" Mar 7 01:17:29.112327 systemd[1]: Started cri-containerd-14a9596bcaff38fcec8711fd9f9943bd414502dbfed22f6d6b1c7c54cf8d1c2b.scope - libcontainer container 14a9596bcaff38fcec8711fd9f9943bd414502dbfed22f6d6b1c7c54cf8d1c2b. 
Mar 7 01:17:29.143827 containerd[1713]: time="2026-03-07T01:17:29.143778145Z" level=info msg="StartContainer for \"14a9596bcaff38fcec8711fd9f9943bd414502dbfed22f6d6b1c7c54cf8d1c2b\" returns successfully" Mar 7 01:17:30.767650 containerd[1713]: time="2026-03-07T01:17:30.767555933Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:17:30.770218 systemd[1]: cri-containerd-14a9596bcaff38fcec8711fd9f9943bd414502dbfed22f6d6b1c7c54cf8d1c2b.scope: Deactivated successfully. Mar 7 01:17:30.792310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14a9596bcaff38fcec8711fd9f9943bd414502dbfed22f6d6b1c7c54cf8d1c2b-rootfs.mount: Deactivated successfully. Mar 7 01:17:30.798279 kubelet[3183]: E0307 01:17:30.797136 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ms76b" podUID="422fccbd-8b44-46d1-b23a-b122dabbbb7c" Mar 7 01:17:30.802804 kubelet[3183]: I0307 01:17:30.801790 3183 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 7 01:17:32.034411 containerd[1713]: time="2026-03-07T01:17:32.034230348Z" level=info msg="shim disconnected" id=14a9596bcaff38fcec8711fd9f9943bd414502dbfed22f6d6b1c7c54cf8d1c2b namespace=k8s.io Mar 7 01:17:32.035097 containerd[1713]: time="2026-03-07T01:17:32.034467152Z" level=warning msg="cleaning up after shim disconnected" id=14a9596bcaff38fcec8711fd9f9943bd414502dbfed22f6d6b1c7c54cf8d1c2b namespace=k8s.io Mar 7 01:17:32.035097 containerd[1713]: time="2026-03-07T01:17:32.034488452Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:17:32.057837 systemd[1]: Created 
slice kubepods-besteffort-pod39725905_1269_4e3e_b92d_ee3eace3fd6f.slice - libcontainer container kubepods-besteffort-pod39725905_1269_4e3e_b92d_ee3eace3fd6f.slice. Mar 7 01:17:32.071769 systemd[1]: Created slice kubepods-burstable-pod8ce69653_849d_4201_90d6_36652ad2c82f.slice - libcontainer container kubepods-burstable-pod8ce69653_849d_4201_90d6_36652ad2c82f.slice. Mar 7 01:17:32.083061 systemd[1]: Created slice kubepods-besteffort-pod5a80d881_150a_48f3_9427_5627e9d0bd10.slice - libcontainer container kubepods-besteffort-pod5a80d881_150a_48f3_9427_5627e9d0bd10.slice. Mar 7 01:17:32.094198 systemd[1]: Created slice kubepods-besteffort-poda95025c4_1bc2_4e58_8c10_f597fc15642d.slice - libcontainer container kubepods-besteffort-poda95025c4_1bc2_4e58_8c10_f597fc15642d.slice. Mar 7 01:17:32.099407 kubelet[3183]: I0307 01:17:32.099377 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq85q\" (UniqueName: \"kubernetes.io/projected/a95025c4-1bc2-4e58-8c10-f597fc15642d-kube-api-access-cq85q\") pod \"calico-kube-controllers-7fd94b965c-xvz8c\" (UID: \"a95025c4-1bc2-4e58-8c10-f597fc15642d\") " pod="calico-system/calico-kube-controllers-7fd94b965c-xvz8c" Mar 7 01:17:32.101652 kubelet[3183]: I0307 01:17:32.100084 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5a80d881-150a-48f3-9427-5627e9d0bd10-calico-apiserver-certs\") pod \"calico-apiserver-6b85bc4966-7842l\" (UID: \"5a80d881-150a-48f3-9427-5627e9d0bd10\") " pod="calico-system/calico-apiserver-6b85bc4966-7842l" Mar 7 01:17:32.101652 kubelet[3183]: I0307 01:17:32.101541 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47mvs\" (UniqueName: \"kubernetes.io/projected/5a80d881-150a-48f3-9427-5627e9d0bd10-kube-api-access-47mvs\") pod \"calico-apiserver-6b85bc4966-7842l\" (UID: 
\"5a80d881-150a-48f3-9427-5627e9d0bd10\") " pod="calico-system/calico-apiserver-6b85bc4966-7842l" Mar 7 01:17:32.101652 kubelet[3183]: I0307 01:17:32.101611 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b5b460bf-5f22-4761-beda-066ecd458571-calico-apiserver-certs\") pod \"calico-apiserver-6b85bc4966-hhwfv\" (UID: \"b5b460bf-5f22-4761-beda-066ecd458571\") " pod="calico-system/calico-apiserver-6b85bc4966-hhwfv" Mar 7 01:17:32.102040 kubelet[3183]: I0307 01:17:32.101887 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/39725905-1269-4e3e-b92d-ee3eace3fd6f-nginx-config\") pod \"whisker-b9f656bd5-x4s4h\" (UID: \"39725905-1269-4e3e-b92d-ee3eace3fd6f\") " pod="calico-system/whisker-b9f656bd5-x4s4h" Mar 7 01:17:32.102040 kubelet[3183]: I0307 01:17:32.101938 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/202162ba-341e-473d-92ac-5b724859334f-config-volume\") pod \"coredns-674b8bbfcf-clwb7\" (UID: \"202162ba-341e-473d-92ac-5b724859334f\") " pod="kube-system/coredns-674b8bbfcf-clwb7" Mar 7 01:17:32.102040 kubelet[3183]: I0307 01:17:32.101982 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ce69653-849d-4201-90d6-36652ad2c82f-config-volume\") pod \"coredns-674b8bbfcf-dhgml\" (UID: \"8ce69653-849d-4201-90d6-36652ad2c82f\") " pod="kube-system/coredns-674b8bbfcf-dhgml" Mar 7 01:17:32.102040 kubelet[3183]: I0307 01:17:32.102005 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wvk9\" (UniqueName: 
\"kubernetes.io/projected/8ce69653-849d-4201-90d6-36652ad2c82f-kube-api-access-7wvk9\") pod \"coredns-674b8bbfcf-dhgml\" (UID: \"8ce69653-849d-4201-90d6-36652ad2c82f\") " pod="kube-system/coredns-674b8bbfcf-dhgml" Mar 7 01:17:32.102369 kubelet[3183]: I0307 01:17:32.102173 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d30623bf-c631-4b94-bf09-b8c8c11932cf-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-q5ltk\" (UID: \"d30623bf-c631-4b94-bf09-b8c8c11932cf\") " pod="calico-system/goldmane-5b85766d88-q5ltk" Mar 7 01:17:32.102369 kubelet[3183]: I0307 01:17:32.102211 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5zmm\" (UniqueName: \"kubernetes.io/projected/d30623bf-c631-4b94-bf09-b8c8c11932cf-kube-api-access-v5zmm\") pod \"goldmane-5b85766d88-q5ltk\" (UID: \"d30623bf-c631-4b94-bf09-b8c8c11932cf\") " pod="calico-system/goldmane-5b85766d88-q5ltk" Mar 7 01:17:32.102871 kubelet[3183]: I0307 01:17:32.102703 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds2nk\" (UniqueName: \"kubernetes.io/projected/202162ba-341e-473d-92ac-5b724859334f-kube-api-access-ds2nk\") pod \"coredns-674b8bbfcf-clwb7\" (UID: \"202162ba-341e-473d-92ac-5b724859334f\") " pod="kube-system/coredns-674b8bbfcf-clwb7" Mar 7 01:17:32.102871 kubelet[3183]: I0307 01:17:32.102754 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d30623bf-c631-4b94-bf09-b8c8c11932cf-goldmane-key-pair\") pod \"goldmane-5b85766d88-q5ltk\" (UID: \"d30623bf-c631-4b94-bf09-b8c8c11932cf\") " pod="calico-system/goldmane-5b85766d88-q5ltk" Mar 7 01:17:32.102871 kubelet[3183]: I0307 01:17:32.102806 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/39725905-1269-4e3e-b92d-ee3eace3fd6f-whisker-backend-key-pair\") pod \"whisker-b9f656bd5-x4s4h\" (UID: \"39725905-1269-4e3e-b92d-ee3eace3fd6f\") " pod="calico-system/whisker-b9f656bd5-x4s4h" Mar 7 01:17:32.102871 kubelet[3183]: I0307 01:17:32.102837 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a95025c4-1bc2-4e58-8c10-f597fc15642d-tigera-ca-bundle\") pod \"calico-kube-controllers-7fd94b965c-xvz8c\" (UID: \"a95025c4-1bc2-4e58-8c10-f597fc15642d\") " pod="calico-system/calico-kube-controllers-7fd94b965c-xvz8c" Mar 7 01:17:32.103306 kubelet[3183]: I0307 01:17:32.102858 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppll8\" (UniqueName: \"kubernetes.io/projected/b5b460bf-5f22-4761-beda-066ecd458571-kube-api-access-ppll8\") pod \"calico-apiserver-6b85bc4966-hhwfv\" (UID: \"b5b460bf-5f22-4761-beda-066ecd458571\") " pod="calico-system/calico-apiserver-6b85bc4966-hhwfv" Mar 7 01:17:32.103352 kubelet[3183]: I0307 01:17:32.103314 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d30623bf-c631-4b94-bf09-b8c8c11932cf-config\") pod \"goldmane-5b85766d88-q5ltk\" (UID: \"d30623bf-c631-4b94-bf09-b8c8c11932cf\") " pod="calico-system/goldmane-5b85766d88-q5ltk" Mar 7 01:17:32.103401 kubelet[3183]: I0307 01:17:32.103362 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39725905-1269-4e3e-b92d-ee3eace3fd6f-whisker-ca-bundle\") pod \"whisker-b9f656bd5-x4s4h\" (UID: \"39725905-1269-4e3e-b92d-ee3eace3fd6f\") " pod="calico-system/whisker-b9f656bd5-x4s4h" Mar 7 01:17:32.103401 kubelet[3183]: I0307 01:17:32.103388 3183 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75fsn\" (UniqueName: \"kubernetes.io/projected/39725905-1269-4e3e-b92d-ee3eace3fd6f-kube-api-access-75fsn\") pod \"whisker-b9f656bd5-x4s4h\" (UID: \"39725905-1269-4e3e-b92d-ee3eace3fd6f\") " pod="calico-system/whisker-b9f656bd5-x4s4h" Mar 7 01:17:32.104610 systemd[1]: Created slice kubepods-burstable-pod202162ba_341e_473d_92ac_5b724859334f.slice - libcontainer container kubepods-burstable-pod202162ba_341e_473d_92ac_5b724859334f.slice. Mar 7 01:17:32.111216 systemd[1]: Created slice kubepods-besteffort-podd30623bf_c631_4b94_bf09_b8c8c11932cf.slice - libcontainer container kubepods-besteffort-podd30623bf_c631_4b94_bf09_b8c8c11932cf.slice. Mar 7 01:17:32.121339 systemd[1]: Created slice kubepods-besteffort-podb5b460bf_5f22_4761_beda_066ecd458571.slice - libcontainer container kubepods-besteffort-podb5b460bf_5f22_4761_beda_066ecd458571.slice. Mar 7 01:17:32.367297 containerd[1713]: time="2026-03-07T01:17:32.367127751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b9f656bd5-x4s4h,Uid:39725905-1269-4e3e-b92d-ee3eace3fd6f,Namespace:calico-system,Attempt:0,}" Mar 7 01:17:32.381816 containerd[1713]: time="2026-03-07T01:17:32.381772075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhgml,Uid:8ce69653-849d-4201-90d6-36652ad2c82f,Namespace:kube-system,Attempt:0,}" Mar 7 01:17:32.388740 containerd[1713]: time="2026-03-07T01:17:32.388703682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b85bc4966-7842l,Uid:5a80d881-150a-48f3-9427-5627e9d0bd10,Namespace:calico-system,Attempt:0,}" Mar 7 01:17:32.406947 containerd[1713]: time="2026-03-07T01:17:32.406900861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd94b965c-xvz8c,Uid:a95025c4-1bc2-4e58-8c10-f597fc15642d,Namespace:calico-system,Attempt:0,}" Mar 7 01:17:32.408821 containerd[1713]: 
time="2026-03-07T01:17:32.408786389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-clwb7,Uid:202162ba-341e-473d-92ac-5b724859334f,Namespace:kube-system,Attempt:0,}" Mar 7 01:17:32.416708 containerd[1713]: time="2026-03-07T01:17:32.416674610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-q5ltk,Uid:d30623bf-c631-4b94-bf09-b8c8c11932cf,Namespace:calico-system,Attempt:0,}" Mar 7 01:17:32.425654 containerd[1713]: time="2026-03-07T01:17:32.425617747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b85bc4966-hhwfv,Uid:b5b460bf-5f22-4761-beda-066ecd458571,Namespace:calico-system,Attempt:0,}" Mar 7 01:17:32.564967 containerd[1713]: time="2026-03-07T01:17:32.564906582Z" level=error msg="Failed to destroy network for sandbox \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.567210 containerd[1713]: time="2026-03-07T01:17:32.566510607Z" level=error msg="encountered an error cleaning up failed sandbox \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.567210 containerd[1713]: time="2026-03-07T01:17:32.566586808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b9f656bd5-x4s4h,Uid:39725905-1269-4e3e-b92d-ee3eace3fd6f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.567698 kubelet[3183]: E0307 01:17:32.567655 3183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.568308 kubelet[3183]: E0307 01:17:32.568272 3183 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b9f656bd5-x4s4h" Mar 7 01:17:32.568416 kubelet[3183]: E0307 01:17:32.568317 3183 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b9f656bd5-x4s4h" Mar 7 01:17:32.568416 kubelet[3183]: E0307 01:17:32.568385 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b9f656bd5-x4s4h_calico-system(39725905-1269-4e3e-b92d-ee3eace3fd6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b9f656bd5-x4s4h_calico-system(39725905-1269-4e3e-b92d-ee3eace3fd6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b9f656bd5-x4s4h" podUID="39725905-1269-4e3e-b92d-ee3eace3fd6f" Mar 7 01:17:32.716274 containerd[1713]: time="2026-03-07T01:17:32.716222602Z" level=error msg="Failed to destroy network for sandbox \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.716582 containerd[1713]: time="2026-03-07T01:17:32.716544607Z" level=error msg="encountered an error cleaning up failed sandbox \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.716669 containerd[1713]: time="2026-03-07T01:17:32.716621108Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhgml,Uid:8ce69653-849d-4201-90d6-36652ad2c82f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.716910 kubelet[3183]: E0307 01:17:32.716862 3183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.717399 kubelet[3183]: E0307 01:17:32.717351 3183 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dhgml" Mar 7 01:17:32.717559 kubelet[3183]: E0307 01:17:32.717536 3183 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dhgml" Mar 7 01:17:32.717762 kubelet[3183]: E0307 01:17:32.717728 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dhgml_kube-system(8ce69653-849d-4201-90d6-36652ad2c82f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dhgml_kube-system(8ce69653-849d-4201-90d6-36652ad2c82f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dhgml" podUID="8ce69653-849d-4201-90d6-36652ad2c82f" Mar 7 01:17:32.730862 containerd[1713]: 
time="2026-03-07T01:17:32.730797525Z" level=error msg="Failed to destroy network for sandbox \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.732419 containerd[1713]: time="2026-03-07T01:17:32.732377649Z" level=error msg="encountered an error cleaning up failed sandbox \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.734092 containerd[1713]: time="2026-03-07T01:17:32.733009359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd94b965c-xvz8c,Uid:a95025c4-1bc2-4e58-8c10-f597fc15642d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.734592 kubelet[3183]: E0307 01:17:32.734520 3183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.734592 kubelet[3183]: E0307 01:17:32.734580 3183 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fd94b965c-xvz8c" Mar 7 01:17:32.735010 kubelet[3183]: E0307 01:17:32.734611 3183 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fd94b965c-xvz8c" Mar 7 01:17:32.735010 kubelet[3183]: E0307 01:17:32.734671 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fd94b965c-xvz8c_calico-system(a95025c4-1bc2-4e58-8c10-f597fc15642d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7fd94b965c-xvz8c_calico-system(a95025c4-1bc2-4e58-8c10-f597fc15642d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fd94b965c-xvz8c" podUID="a95025c4-1bc2-4e58-8c10-f597fc15642d" Mar 7 01:17:32.750843 containerd[1713]: time="2026-03-07T01:17:32.750790932Z" level=error msg="Failed to destroy network for sandbox \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.751561 containerd[1713]: time="2026-03-07T01:17:32.751244938Z" level=error msg="encountered an error cleaning up failed sandbox \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.751561 containerd[1713]: time="2026-03-07T01:17:32.751319540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b85bc4966-hhwfv,Uid:b5b460bf-5f22-4761-beda-066ecd458571,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.751809 kubelet[3183]: E0307 01:17:32.751766 3183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.751916 kubelet[3183]: E0307 01:17:32.751841 3183 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-6b85bc4966-hhwfv" Mar 7 01:17:32.752104 kubelet[3183]: E0307 01:17:32.751872 3183 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b85bc4966-hhwfv" Mar 7 01:17:32.752617 kubelet[3183]: E0307 01:17:32.752144 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b85bc4966-hhwfv_calico-system(b5b460bf-5f22-4761-beda-066ecd458571)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b85bc4966-hhwfv_calico-system(b5b460bf-5f22-4761-beda-066ecd458571)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6b85bc4966-hhwfv" podUID="b5b460bf-5f22-4761-beda-066ecd458571" Mar 7 01:17:32.782067 containerd[1713]: time="2026-03-07T01:17:32.781516502Z" level=error msg="Failed to destroy network for sandbox \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.782067 containerd[1713]: time="2026-03-07T01:17:32.781878808Z" level=error msg="encountered an error cleaning up failed sandbox \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.782067 containerd[1713]: time="2026-03-07T01:17:32.781944809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b85bc4966-7842l,Uid:5a80d881-150a-48f3-9427-5627e9d0bd10,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.782366 kubelet[3183]: E0307 01:17:32.782306 3183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.782432 kubelet[3183]: E0307 01:17:32.782372 3183 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b85bc4966-7842l" Mar 7 01:17:32.782432 kubelet[3183]: E0307 01:17:32.782397 3183 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b85bc4966-7842l" Mar 7 01:17:32.782522 kubelet[3183]: E0307 01:17:32.782459 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b85bc4966-7842l_calico-system(5a80d881-150a-48f3-9427-5627e9d0bd10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b85bc4966-7842l_calico-system(5a80d881-150a-48f3-9427-5627e9d0bd10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6b85bc4966-7842l" podUID="5a80d881-150a-48f3-9427-5627e9d0bd10" Mar 7 01:17:32.785582 containerd[1713]: time="2026-03-07T01:17:32.784849854Z" level=error msg="Failed to destroy network for sandbox \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.785582 containerd[1713]: time="2026-03-07T01:17:32.785325561Z" level=error msg="encountered an error cleaning up failed sandbox \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.785582 containerd[1713]: time="2026-03-07T01:17:32.785412562Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-clwb7,Uid:202162ba-341e-473d-92ac-5b724859334f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.785801 kubelet[3183]: E0307 01:17:32.785625 3183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.785801 kubelet[3183]: E0307 01:17:32.785695 3183 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-clwb7" Mar 7 01:17:32.785801 kubelet[3183]: E0307 01:17:32.785720 3183 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-clwb7" Mar 7 01:17:32.785959 kubelet[3183]: E0307 01:17:32.785791 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-674b8bbfcf-clwb7_kube-system(202162ba-341e-473d-92ac-5b724859334f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-clwb7_kube-system(202162ba-341e-473d-92ac-5b724859334f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-clwb7" podUID="202162ba-341e-473d-92ac-5b724859334f" Mar 7 01:17:32.788012 containerd[1713]: time="2026-03-07T01:17:32.787973601Z" level=error msg="Failed to destroy network for sandbox \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.788287 containerd[1713]: time="2026-03-07T01:17:32.788257306Z" level=error msg="encountered an error cleaning up failed sandbox \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.788395 containerd[1713]: time="2026-03-07T01:17:32.788311507Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-q5ltk,Uid:d30623bf-c631-4b94-bf09-b8c8c11932cf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Mar 7 01:17:32.788493 kubelet[3183]: E0307 01:17:32.788467 3183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.788553 kubelet[3183]: E0307 01:17:32.788511 3183 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-q5ltk" Mar 7 01:17:32.788553 kubelet[3183]: E0307 01:17:32.788534 3183 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-q5ltk" Mar 7 01:17:32.788646 kubelet[3183]: E0307 01:17:32.788580 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-q5ltk_calico-system(d30623bf-c631-4b94-bf09-b8c8c11932cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-q5ltk_calico-system(d30623bf-c631-4b94-bf09-b8c8c11932cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-q5ltk" podUID="d30623bf-c631-4b94-bf09-b8c8c11932cf" Mar 7 01:17:32.803066 systemd[1]: Created slice kubepods-besteffort-pod422fccbd_8b44_46d1_b23a_b122dabbbb7c.slice - libcontainer container kubepods-besteffort-pod422fccbd_8b44_46d1_b23a_b122dabbbb7c.slice. Mar 7 01:17:32.805541 containerd[1713]: time="2026-03-07T01:17:32.805510970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms76b,Uid:422fccbd-8b44-46d1-b23a-b122dabbbb7c,Namespace:calico-system,Attempt:0,}" Mar 7 01:17:32.878099 containerd[1713]: time="2026-03-07T01:17:32.878045782Z" level=error msg="Failed to destroy network for sandbox \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.878431 containerd[1713]: time="2026-03-07T01:17:32.878395687Z" level=error msg="encountered an error cleaning up failed sandbox \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.878561 containerd[1713]: time="2026-03-07T01:17:32.878466588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms76b,Uid:422fccbd-8b44-46d1-b23a-b122dabbbb7c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.878748 kubelet[3183]: E0307 01:17:32.878707 3183 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:32.878838 kubelet[3183]: E0307 01:17:32.878789 3183 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ms76b" Mar 7 01:17:32.878887 kubelet[3183]: E0307 01:17:32.878836 3183 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ms76b" Mar 7 01:17:32.879181 kubelet[3183]: E0307 01:17:32.878926 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ms76b_calico-system(422fccbd-8b44-46d1-b23a-b122dabbbb7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ms76b_calico-system(422fccbd-8b44-46d1-b23a-b122dabbbb7c)\\\": rpc error: code = Unknown desc 
= failed to setup network for sandbox \\\"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ms76b" podUID="422fccbd-8b44-46d1-b23a-b122dabbbb7c" Mar 7 01:17:32.929590 kubelet[3183]: I0307 01:17:32.929513 3183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Mar 7 01:17:32.934356 kubelet[3183]: I0307 01:17:32.933926 3183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Mar 7 01:17:32.934642 containerd[1713]: time="2026-03-07T01:17:32.934513548Z" level=info msg="StopPodSandbox for \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\"" Mar 7 01:17:32.934743 containerd[1713]: time="2026-03-07T01:17:32.934714751Z" level=info msg="Ensure that sandbox 1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295 in task-service has been cleanup successfully" Mar 7 01:17:32.935182 containerd[1713]: time="2026-03-07T01:17:32.934515348Z" level=info msg="StopPodSandbox for \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\"" Mar 7 01:17:32.935182 containerd[1713]: time="2026-03-07T01:17:32.934960654Z" level=info msg="Ensure that sandbox 91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d in task-service has been cleanup successfully" Mar 7 01:17:32.943259 kubelet[3183]: I0307 01:17:32.943129 3183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Mar 7 01:17:32.945972 containerd[1713]: time="2026-03-07T01:17:32.945200511Z" level=info msg="StopPodSandbox for 
\"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\"" Mar 7 01:17:32.945972 containerd[1713]: time="2026-03-07T01:17:32.945415415Z" level=info msg="Ensure that sandbox a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a in task-service has been cleanup successfully" Mar 7 01:17:32.946863 kubelet[3183]: I0307 01:17:32.946842 3183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Mar 7 01:17:32.951389 containerd[1713]: time="2026-03-07T01:17:32.951353406Z" level=info msg="StopPodSandbox for \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\"" Mar 7 01:17:32.951597 containerd[1713]: time="2026-03-07T01:17:32.951568809Z" level=info msg="Ensure that sandbox 5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4 in task-service has been cleanup successfully" Mar 7 01:17:32.953749 containerd[1713]: time="2026-03-07T01:17:32.953619140Z" level=info msg="CreateContainer within sandbox \"7e02029808b215277ecffd66c85a02cd9631e3cd9fdb04a16a214f02883f4414\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 01:17:32.969803 kubelet[3183]: I0307 01:17:32.969105 3183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Mar 7 01:17:32.974597 containerd[1713]: time="2026-03-07T01:17:32.974558561Z" level=info msg="StopPodSandbox for \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\"" Mar 7 01:17:32.974905 containerd[1713]: time="2026-03-07T01:17:32.974879466Z" level=info msg="Ensure that sandbox 91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe in task-service has been cleanup successfully" Mar 7 01:17:32.980301 kubelet[3183]: I0307 01:17:32.980276 3183 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Mar 7 01:17:32.988078 containerd[1713]: time="2026-03-07T01:17:32.988046068Z" level=info msg="StopPodSandbox for \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\"" Mar 7 01:17:32.993907 containerd[1713]: time="2026-03-07T01:17:32.993876457Z" level=info msg="Ensure that sandbox b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c in task-service has been cleanup successfully" Mar 7 01:17:33.041388 containerd[1713]: time="2026-03-07T01:17:33.041323785Z" level=error msg="StopPodSandbox for \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\" failed" error="failed to destroy network for sandbox \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:33.042811 kubelet[3183]: I0307 01:17:33.042394 3183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Mar 7 01:17:33.043308 kubelet[3183]: E0307 01:17:33.043081 3183 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Mar 7 01:17:33.043308 kubelet[3183]: E0307 01:17:33.043135 3183 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4"} Mar 7 01:17:33.043308 kubelet[3183]: E0307 01:17:33.043212 
3183 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5b460bf-5f22-4761-beda-066ecd458571\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:17:33.043308 kubelet[3183]: E0307 01:17:33.043239 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5b460bf-5f22-4761-beda-066ecd458571\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6b85bc4966-hhwfv" podUID="b5b460bf-5f22-4761-beda-066ecd458571" Mar 7 01:17:33.044103 containerd[1713]: time="2026-03-07T01:17:33.043834323Z" level=info msg="StopPodSandbox for \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\"" Mar 7 01:17:33.045450 containerd[1713]: time="2026-03-07T01:17:33.045420347Z" level=info msg="Ensure that sandbox c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed in task-service has been cleanup successfully" Mar 7 01:17:33.053108 kubelet[3183]: I0307 01:17:33.052531 3183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Mar 7 01:17:33.061082 containerd[1713]: time="2026-03-07T01:17:33.060941585Z" level=info msg="StopPodSandbox for \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\"" Mar 7 01:17:33.062269 
containerd[1713]: time="2026-03-07T01:17:33.062230405Z" level=info msg="Ensure that sandbox 24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d in task-service has been cleanup successfully" Mar 7 01:17:33.074741 containerd[1713]: time="2026-03-07T01:17:33.074703996Z" level=info msg="CreateContainer within sandbox \"7e02029808b215277ecffd66c85a02cd9631e3cd9fdb04a16a214f02883f4414\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f16200eb7f742d58ab4c8e7e25fe7d5cef631b2bb567d96a0269581f5c1c66bb\"" Mar 7 01:17:33.075481 containerd[1713]: time="2026-03-07T01:17:33.075423807Z" level=info msg="StartContainer for \"f16200eb7f742d58ab4c8e7e25fe7d5cef631b2bb567d96a0269581f5c1c66bb\"" Mar 7 01:17:33.120179 containerd[1713]: time="2026-03-07T01:17:33.119660385Z" level=error msg="StopPodSandbox for \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\" failed" error="failed to destroy network for sandbox \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:33.120341 kubelet[3183]: E0307 01:17:33.119899 3183 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Mar 7 01:17:33.120341 kubelet[3183]: E0307 01:17:33.119953 3183 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295"} Mar 7 01:17:33.120341 kubelet[3183]: 
E0307 01:17:33.119996 3183 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a95025c4-1bc2-4e58-8c10-f597fc15642d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:17:33.120341 kubelet[3183]: E0307 01:17:33.120027 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a95025c4-1bc2-4e58-8c10-f597fc15642d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fd94b965c-xvz8c" podUID="a95025c4-1bc2-4e58-8c10-f597fc15642d" Mar 7 01:17:33.126817 containerd[1713]: time="2026-03-07T01:17:33.126764394Z" level=error msg="StopPodSandbox for \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\" failed" error="failed to destroy network for sandbox \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:33.127337 kubelet[3183]: E0307 01:17:33.127184 3183 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Mar 7 01:17:33.127337 kubelet[3183]: E0307 01:17:33.127232 3183 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a"} Mar 7 01:17:33.127337 kubelet[3183]: E0307 01:17:33.127269 3183 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39725905-1269-4e3e-b92d-ee3eace3fd6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:17:33.127337 kubelet[3183]: E0307 01:17:33.127298 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39725905-1269-4e3e-b92d-ee3eace3fd6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b9f656bd5-x4s4h" podUID="39725905-1269-4e3e-b92d-ee3eace3fd6f" Mar 7 01:17:33.127597 containerd[1713]: time="2026-03-07T01:17:33.127405204Z" level=error msg="StopPodSandbox for \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\" failed" error="failed to destroy network for sandbox \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:33.127914 kubelet[3183]: E0307 01:17:33.127785 3183 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Mar 7 01:17:33.127914 kubelet[3183]: E0307 01:17:33.127826 3183 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe"} Mar 7 01:17:33.127914 kubelet[3183]: E0307 01:17:33.127858 3183 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d30623bf-c631-4b94-bf09-b8c8c11932cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:17:33.127914 kubelet[3183]: E0307 01:17:33.127882 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d30623bf-c631-4b94-bf09-b8c8c11932cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-q5ltk" podUID="d30623bf-c631-4b94-bf09-b8c8c11932cf" Mar 7 01:17:33.138291 containerd[1713]: time="2026-03-07T01:17:33.138255870Z" level=error msg="StopPodSandbox for \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\" failed" error="failed to destroy network for sandbox \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:33.138685 kubelet[3183]: E0307 01:17:33.138545 3183 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Mar 7 01:17:33.138685 kubelet[3183]: E0307 01:17:33.138597 3183 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d"} Mar 7 01:17:33.138685 kubelet[3183]: E0307 01:17:33.138631 3183 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"422fccbd-8b44-46d1-b23a-b122dabbbb7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:17:33.138685 kubelet[3183]: E0307 01:17:33.138658 3183 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"422fccbd-8b44-46d1-b23a-b122dabbbb7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ms76b" podUID="422fccbd-8b44-46d1-b23a-b122dabbbb7c" Mar 7 01:17:33.165339 systemd[1]: Started cri-containerd-f16200eb7f742d58ab4c8e7e25fe7d5cef631b2bb567d96a0269581f5c1c66bb.scope - libcontainer container f16200eb7f742d58ab4c8e7e25fe7d5cef631b2bb567d96a0269581f5c1c66bb. Mar 7 01:17:33.167702 containerd[1713]: time="2026-03-07T01:17:33.167654621Z" level=error msg="StopPodSandbox for \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\" failed" error="failed to destroy network for sandbox \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:33.168177 kubelet[3183]: E0307 01:17:33.167842 3183 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Mar 7 01:17:33.168177 kubelet[3183]: E0307 01:17:33.167908 3183 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed"} Mar 7 01:17:33.168177 kubelet[3183]: E0307 01:17:33.167946 3183 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a80d881-150a-48f3-9427-5627e9d0bd10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:17:33.168177 kubelet[3183]: E0307 01:17:33.168046 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a80d881-150a-48f3-9427-5627e9d0bd10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6b85bc4966-7842l" podUID="5a80d881-150a-48f3-9427-5627e9d0bd10" Mar 7 01:17:33.172180 containerd[1713]: time="2026-03-07T01:17:33.172122389Z" level=error msg="StopPodSandbox for \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\" failed" error="failed to destroy network for sandbox \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:33.172870 kubelet[3183]: E0307 01:17:33.172428 3183 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Mar 7 01:17:33.172870 kubelet[3183]: E0307 01:17:33.172506 3183 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c"} Mar 7 01:17:33.172870 kubelet[3183]: E0307 01:17:33.172561 3183 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"202162ba-341e-473d-92ac-5b724859334f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:17:33.172870 kubelet[3183]: E0307 01:17:33.172590 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"202162ba-341e-473d-92ac-5b724859334f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-clwb7" podUID="202162ba-341e-473d-92ac-5b724859334f" Mar 7 01:17:33.173471 containerd[1713]: time="2026-03-07T01:17:33.173396609Z" level=error msg="StopPodSandbox for \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\" failed" error="failed to destroy network for 
sandbox \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:17:33.174023 kubelet[3183]: E0307 01:17:33.173868 3183 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Mar 7 01:17:33.174023 kubelet[3183]: E0307 01:17:33.173916 3183 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d"} Mar 7 01:17:33.174023 kubelet[3183]: E0307 01:17:33.173953 3183 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ce69653-849d-4201-90d6-36652ad2c82f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 01:17:33.174023 kubelet[3183]: E0307 01:17:33.173982 3183 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ce69653-849d-4201-90d6-36652ad2c82f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dhgml" podUID="8ce69653-849d-4201-90d6-36652ad2c82f" Mar 7 01:17:33.205680 containerd[1713]: time="2026-03-07T01:17:33.203591972Z" level=info msg="StartContainer for \"f16200eb7f742d58ab4c8e7e25fe7d5cef631b2bb567d96a0269581f5c1c66bb\" returns successfully" Mar 7 01:17:34.058682 containerd[1713]: time="2026-03-07T01:17:34.058638478Z" level=info msg="StopPodSandbox for \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\"" Mar 7 01:17:34.098014 kubelet[3183]: I0307 01:17:34.097950 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rh4vc" podStartSLOduration=5.741873749 podStartE2EDuration="27.09792798s" podCreationTimestamp="2026-03-07 01:17:07 +0000 UTC" firstStartedPulling="2026-03-07 01:17:07.657981825 +0000 UTC m=+20.969799324" lastFinishedPulling="2026-03-07 01:17:29.014036056 +0000 UTC m=+42.325853555" observedRunningTime="2026-03-07 01:17:34.089774855 +0000 UTC m=+47.401592354" watchObservedRunningTime="2026-03-07 01:17:34.09792798 +0000 UTC m=+47.409745479" Mar 7 01:17:34.190807 containerd[1713]: 2026-03-07 01:17:34.150 [INFO][4432] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Mar 7 01:17:34.190807 containerd[1713]: 2026-03-07 01:17:34.150 [INFO][4432] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" iface="eth0" netns="/var/run/netns/cni-ad4b5298-99c1-7b0e-ee2a-c50f74156647" Mar 7 01:17:34.190807 containerd[1713]: 2026-03-07 01:17:34.151 [INFO][4432] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" iface="eth0" netns="/var/run/netns/cni-ad4b5298-99c1-7b0e-ee2a-c50f74156647" Mar 7 01:17:34.190807 containerd[1713]: 2026-03-07 01:17:34.152 [INFO][4432] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" iface="eth0" netns="/var/run/netns/cni-ad4b5298-99c1-7b0e-ee2a-c50f74156647" Mar 7 01:17:34.190807 containerd[1713]: 2026-03-07 01:17:34.152 [INFO][4432] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Mar 7 01:17:34.190807 containerd[1713]: 2026-03-07 01:17:34.152 [INFO][4432] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Mar 7 01:17:34.190807 containerd[1713]: 2026-03-07 01:17:34.179 [INFO][4460] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" HandleID="k8s-pod-network.a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--b9f656bd5--x4s4h-eth0" Mar 7 01:17:34.190807 containerd[1713]: 2026-03-07 01:17:34.179 [INFO][4460] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:34.190807 containerd[1713]: 2026-03-07 01:17:34.180 [INFO][4460] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:34.190807 containerd[1713]: 2026-03-07 01:17:34.185 [WARNING][4460] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" HandleID="k8s-pod-network.a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--b9f656bd5--x4s4h-eth0" Mar 7 01:17:34.190807 containerd[1713]: 2026-03-07 01:17:34.185 [INFO][4460] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" HandleID="k8s-pod-network.a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--b9f656bd5--x4s4h-eth0" Mar 7 01:17:34.190807 containerd[1713]: 2026-03-07 01:17:34.186 [INFO][4460] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:34.190807 containerd[1713]: 2026-03-07 01:17:34.189 [INFO][4432] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Mar 7 01:17:34.191530 containerd[1713]: time="2026-03-07T01:17:34.191057307Z" level=info msg="TearDown network for sandbox \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\" successfully" Mar 7 01:17:34.191530 containerd[1713]: time="2026-03-07T01:17:34.191110208Z" level=info msg="StopPodSandbox for \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\" returns successfully" Mar 7 01:17:34.194978 systemd[1]: run-netns-cni\x2dad4b5298\x2d99c1\x2d7b0e\x2dee2a\x2dc50f74156647.mount: Deactivated successfully. 
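The repeated "Error syncing pod, skipping" entries above each name the affected pod and UID. A minimal sketch (hypothetical helper, not part of any kubelet tooling; the sample line is condensed from the log above) for pulling those pairs out of journal output:

```python
import re

# Matches the kubelet "Error syncing pod, skipping" entries and extracts
# the pod name and UID. The pattern is a sketch based on the log lines
# above, not an official kubelet log-format specification.
POD_ERR = re.compile(r'pod="(?P<pod>[^"]+)" podUID="(?P<uid>[0-9a-f-]+)"')

def failed_pods(lines):
    """Return (pod, uid) pairs for each 'Error syncing pod' entry."""
    out = []
    for line in lines:
        if "Error syncing pod" not in line:
            continue
        m = POD_ERR.search(line)
        if m:
            out.append((m.group("pod"), m.group("uid")))
    return out

# Condensed sample entry from the log above.
sample = ['E0307 01:17:33.138658 3183 pod_workers.go:1301] '
          '"Error syncing pod, skipping" err="..." '
          'pod="calico-system/csi-node-driver-ms76b" '
          'podUID="422fccbd-8b44-46d1-b23a-b122dabbbb7c"']
print(failed_pods(sample))
```

Running this over the full journal would list every pod stuck behind the missing `/var/lib/calico/nodename` file, which is useful when deciding whether the calico-node DaemonSet pod itself is the one to inspect first.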
Mar 7 01:17:34.219536 kubelet[3183]: I0307 01:17:34.219171 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/39725905-1269-4e3e-b92d-ee3eace3fd6f-whisker-backend-key-pair\") pod \"39725905-1269-4e3e-b92d-ee3eace3fd6f\" (UID: \"39725905-1269-4e3e-b92d-ee3eace3fd6f\") " Mar 7 01:17:34.219536 kubelet[3183]: I0307 01:17:34.219312 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75fsn\" (UniqueName: \"kubernetes.io/projected/39725905-1269-4e3e-b92d-ee3eace3fd6f-kube-api-access-75fsn\") pod \"39725905-1269-4e3e-b92d-ee3eace3fd6f\" (UID: \"39725905-1269-4e3e-b92d-ee3eace3fd6f\") " Mar 7 01:17:34.219536 kubelet[3183]: I0307 01:17:34.219340 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39725905-1269-4e3e-b92d-ee3eace3fd6f-whisker-ca-bundle\") pod \"39725905-1269-4e3e-b92d-ee3eace3fd6f\" (UID: \"39725905-1269-4e3e-b92d-ee3eace3fd6f\") " Mar 7 01:17:34.219536 kubelet[3183]: I0307 01:17:34.219382 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/39725905-1269-4e3e-b92d-ee3eace3fd6f-nginx-config\") pod \"39725905-1269-4e3e-b92d-ee3eace3fd6f\" (UID: \"39725905-1269-4e3e-b92d-ee3eace3fd6f\") " Mar 7 01:17:34.220089 kubelet[3183]: I0307 01:17:34.219870 3183 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39725905-1269-4e3e-b92d-ee3eace3fd6f-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "39725905-1269-4e3e-b92d-ee3eace3fd6f" (UID: "39725905-1269-4e3e-b92d-ee3eace3fd6f"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:17:34.221901 kubelet[3183]: I0307 01:17:34.221547 3183 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39725905-1269-4e3e-b92d-ee3eace3fd6f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "39725905-1269-4e3e-b92d-ee3eace3fd6f" (UID: "39725905-1269-4e3e-b92d-ee3eace3fd6f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:17:34.224056 kubelet[3183]: I0307 01:17:34.224013 3183 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39725905-1269-4e3e-b92d-ee3eace3fd6f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "39725905-1269-4e3e-b92d-ee3eace3fd6f" (UID: "39725905-1269-4e3e-b92d-ee3eace3fd6f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 01:17:34.227376 kubelet[3183]: I0307 01:17:34.227350 3183 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39725905-1269-4e3e-b92d-ee3eace3fd6f-kube-api-access-75fsn" (OuterVolumeSpecName: "kube-api-access-75fsn") pod "39725905-1269-4e3e-b92d-ee3eace3fd6f" (UID: "39725905-1269-4e3e-b92d-ee3eace3fd6f"). InnerVolumeSpecName "kube-api-access-75fsn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:17:34.227406 systemd[1]: var-lib-kubelet-pods-39725905\x2d1269\x2d4e3e\x2db92d\x2dee3eace3fd6f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d75fsn.mount: Deactivated successfully. Mar 7 01:17:34.227531 systemd[1]: var-lib-kubelet-pods-39725905\x2d1269\x2d4e3e\x2db92d\x2dee3eace3fd6f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
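The mount-unit names in the systemd entries above (`run-netns-cni\x2dad4b5298…`, `var-lib-kubelet-pods-…\x7eprojected-…`) use systemd's unit-name escaping: `-` and `~` in path components become `\x2d` and `\x7e`, and `/` separators become `-`. A simplified sketch of those rules (not a full reimplementation of `systemd-escape --path`, which also handles leading dots and non-ASCII bytes):

```python
def systemd_escape_path(path):
    """Approximate systemd-escape --path for the unit names seen above:
    escape backslash, '-' and '~' inside each component, then join the
    components with '-'. A sketch of the visible rules only."""
    parts = path.strip("/").split("/")
    escaped = []
    for part in parts:
        part = part.replace("\\", "\\x5c")  # escape backslash first
        part = part.replace("-", "\\x2d").replace("~", "\\x7e")
        escaped.append(part)
    return "-".join(escaped)

# The netns path torn down above maps to the mount unit in the log
# (minus the ".mount" suffix).
print(systemd_escape_path("/run/netns/cni-ad4b5298-99c1-7b0e-ee2a-c50f74156647"))
```

This explains why the unit names in the log look mangled: they are reversible encodings of ordinary filesystem paths.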
Mar 7 01:17:34.320654 kubelet[3183]: I0307 01:17:34.320518 3183 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39725905-1269-4e3e-b92d-ee3eace3fd6f-whisker-ca-bundle\") on node \"ci-4081.3.6-n-baf9cf72b8\" DevicePath \"\"" Mar 7 01:17:34.320654 kubelet[3183]: I0307 01:17:34.320560 3183 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/39725905-1269-4e3e-b92d-ee3eace3fd6f-nginx-config\") on node \"ci-4081.3.6-n-baf9cf72b8\" DevicePath \"\"" Mar 7 01:17:34.320654 kubelet[3183]: I0307 01:17:34.320574 3183 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/39725905-1269-4e3e-b92d-ee3eace3fd6f-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-baf9cf72b8\" DevicePath \"\"" Mar 7 01:17:34.320654 kubelet[3183]: I0307 01:17:34.320585 3183 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-75fsn\" (UniqueName: \"kubernetes.io/projected/39725905-1269-4e3e-b92d-ee3eace3fd6f-kube-api-access-75fsn\") on node \"ci-4081.3.6-n-baf9cf72b8\" DevicePath \"\"" Mar 7 01:17:34.810276 systemd[1]: Removed slice kubepods-besteffort-pod39725905_1269_4e3e_b92d_ee3eace3fd6f.slice - libcontainer container kubepods-besteffort-pod39725905_1269_4e3e_b92d_ee3eace3fd6f.slice. Mar 7 01:17:34.960178 kernel: calico-node[4494]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 01:17:35.103049 systemd[1]: run-containerd-runc-k8s.io-f16200eb7f742d58ab4c8e7e25fe7d5cef631b2bb567d96a0269581f5c1c66bb-runc.sEZWaf.mount: Deactivated successfully. Mar 7 01:17:35.206493 systemd[1]: Created slice kubepods-besteffort-pod363231c9_84e9_4444_967d_c36438de947b.slice - libcontainer container kubepods-besteffort-pod363231c9_84e9_4444_967d_c36438de947b.slice. 
Mar 7 01:17:35.231035 kubelet[3183]: I0307 01:17:35.230992 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/363231c9-84e9-4444-967d-c36438de947b-whisker-ca-bundle\") pod \"whisker-f799dc74-nnsmh\" (UID: \"363231c9-84e9-4444-967d-c36438de947b\") " pod="calico-system/whisker-f799dc74-nnsmh" Mar 7 01:17:35.231035 kubelet[3183]: I0307 01:17:35.231050 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/363231c9-84e9-4444-967d-c36438de947b-whisker-backend-key-pair\") pod \"whisker-f799dc74-nnsmh\" (UID: \"363231c9-84e9-4444-967d-c36438de947b\") " pod="calico-system/whisker-f799dc74-nnsmh" Mar 7 01:17:35.231558 kubelet[3183]: I0307 01:17:35.231073 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmdjs\" (UniqueName: \"kubernetes.io/projected/363231c9-84e9-4444-967d-c36438de947b-kube-api-access-zmdjs\") pod \"whisker-f799dc74-nnsmh\" (UID: \"363231c9-84e9-4444-967d-c36438de947b\") " pod="calico-system/whisker-f799dc74-nnsmh" Mar 7 01:17:35.231558 kubelet[3183]: I0307 01:17:35.231111 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/363231c9-84e9-4444-967d-c36438de947b-nginx-config\") pod \"whisker-f799dc74-nnsmh\" (UID: \"363231c9-84e9-4444-967d-c36438de947b\") " pod="calico-system/whisker-f799dc74-nnsmh" Mar 7 01:17:35.511771 containerd[1713]: time="2026-03-07T01:17:35.511719633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f799dc74-nnsmh,Uid:363231c9-84e9-4444-967d-c36438de947b,Namespace:calico-system,Attempt:0,}" Mar 7 01:17:35.715195 systemd-networkd[1579]: cali97ad24b0ca9: Link UP Mar 7 01:17:35.715496 systemd-networkd[1579]: cali97ad24b0ca9: Gained 
carrier Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.603 [INFO][4616] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--baf9cf72b8-k8s-whisker--f799dc74--nnsmh-eth0 whisker-f799dc74- calico-system 363231c9-84e9-4444-967d-c36438de947b 925 0 2026-03-07 01:17:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f799dc74 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-baf9cf72b8 whisker-f799dc74-nnsmh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali97ad24b0ca9 [] [] }} ContainerID="6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" Namespace="calico-system" Pod="whisker-f799dc74-nnsmh" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--f799dc74--nnsmh-" Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.603 [INFO][4616] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" Namespace="calico-system" Pod="whisker-f799dc74-nnsmh" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--f799dc74--nnsmh-eth0" Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.637 [INFO][4629] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" HandleID="k8s-pod-network.6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--f799dc74--nnsmh-eth0" Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.647 [INFO][4629] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" HandleID="k8s-pod-network.6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" 
Workload="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--f799dc74--nnsmh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-baf9cf72b8", "pod":"whisker-f799dc74-nnsmh", "timestamp":"2026-03-07 01:17:35.637770173 +0000 UTC"}, Hostname:"ci-4081.3.6-n-baf9cf72b8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001fcdc0)} Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.647 [INFO][4629] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.647 [INFO][4629] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.647 [INFO][4629] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-baf9cf72b8' Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.650 [INFO][4629] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.655 [INFO][4629] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.660 [INFO][4629] ipam/ipam.go 526: Trying affinity for 192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.662 [INFO][4629] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.664 [INFO][4629] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 
host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.664 [INFO][4629] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.668 [INFO][4629] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.674 [INFO][4629] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.691 [INFO][4629] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.74.129/26] block=192.168.74.128/26 handle="k8s-pod-network.6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.691 [INFO][4629] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.129/26] handle="k8s-pod-network.6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.691 [INFO][4629] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
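The IPAM trace above claims address 192.168.74.129/26 out of the affinity block 192.168.74.128/26. A quick sanity check with the standard library (values taken directly from the log) confirms the address falls inside that block:

```python
import ipaddress

# Block and address as reported by the Calico IPAM plugin above.
block = ipaddress.ip_network("192.168.74.128/26")
addr = ipaddress.ip_address("192.168.74.129")

print(addr in block)        # the claimed IP is inside the affinity block
print(block.num_addresses)  # a /26 holds 64 addresses (.128 through .191)
```

The same check generalizes to any "Successfully claimed IPs" line when auditing whether assignments stayed within a host's affine block.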
Mar 7 01:17:35.740998 containerd[1713]: 2026-03-07 01:17:35.691 [INFO][4629] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.129/26] IPv6=[] ContainerID="6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" HandleID="k8s-pod-network.6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--f799dc74--nnsmh-eth0" Mar 7 01:17:35.742051 containerd[1713]: 2026-03-07 01:17:35.696 [INFO][4616] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" Namespace="calico-system" Pod="whisker-f799dc74-nnsmh" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--f799dc74--nnsmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-whisker--f799dc74--nnsmh-eth0", GenerateName:"whisker-f799dc74-", Namespace:"calico-system", SelfLink:"", UID:"363231c9-84e9-4444-967d-c36438de947b", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f799dc74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"", Pod:"whisker-f799dc74-nnsmh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.74.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"cali97ad24b0ca9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:35.742051 containerd[1713]: 2026-03-07 01:17:35.697 [INFO][4616] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.129/32] ContainerID="6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" Namespace="calico-system" Pod="whisker-f799dc74-nnsmh" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--f799dc74--nnsmh-eth0" Mar 7 01:17:35.742051 containerd[1713]: 2026-03-07 01:17:35.697 [INFO][4616] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97ad24b0ca9 ContainerID="6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" Namespace="calico-system" Pod="whisker-f799dc74-nnsmh" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--f799dc74--nnsmh-eth0" Mar 7 01:17:35.742051 containerd[1713]: 2026-03-07 01:17:35.713 [INFO][4616] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" Namespace="calico-system" Pod="whisker-f799dc74-nnsmh" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--f799dc74--nnsmh-eth0" Mar 7 01:17:35.742051 containerd[1713]: 2026-03-07 01:17:35.713 [INFO][4616] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" Namespace="calico-system" Pod="whisker-f799dc74-nnsmh" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--f799dc74--nnsmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-whisker--f799dc74--nnsmh-eth0", GenerateName:"whisker-f799dc74-", Namespace:"calico-system", SelfLink:"", UID:"363231c9-84e9-4444-967d-c36438de947b", ResourceVersion:"925", Generation:0, 
CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f799dc74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb", Pod:"whisker-f799dc74-nnsmh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.74.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali97ad24b0ca9", MAC:"36:36:8f:d6:17:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:35.742051 containerd[1713]: 2026-03-07 01:17:35.736 [INFO][4616] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb" Namespace="calico-system" Pod="whisker-f799dc74-nnsmh" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--f799dc74--nnsmh-eth0" Mar 7 01:17:35.777591 containerd[1713]: time="2026-03-07T01:17:35.774715081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:35.777591 containerd[1713]: time="2026-03-07T01:17:35.774775482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:35.777591 containerd[1713]: time="2026-03-07T01:17:35.774797082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:35.777591 containerd[1713]: time="2026-03-07T01:17:35.774877783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:35.810901 systemd-networkd[1579]: vxlan.calico: Link UP Mar 7 01:17:35.810918 systemd-networkd[1579]: vxlan.calico: Gained carrier Mar 7 01:17:35.814309 systemd[1]: Started cri-containerd-6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb.scope - libcontainer container 6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb. Mar 7 01:17:35.893809 containerd[1713]: time="2026-03-07T01:17:35.893761613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f799dc74-nnsmh,Uid:363231c9-84e9-4444-967d-c36438de947b,Namespace:calico-system,Attempt:0,} returns sandbox id \"6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb\"" Mar 7 01:17:35.895896 containerd[1713]: time="2026-03-07T01:17:35.895855146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 01:17:36.799981 kubelet[3183]: I0307 01:17:36.799916 3183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39725905-1269-4e3e-b92d-ee3eace3fd6f" path="/var/lib/kubelet/pods/39725905-1269-4e3e-b92d-ee3eace3fd6f/volumes" Mar 7 01:17:36.973618 systemd-networkd[1579]: cali97ad24b0ca9: Gained IPv6LL Mar 7 01:17:37.476619 containerd[1713]: time="2026-03-07T01:17:37.476565477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:37.481508 containerd[1713]: time="2026-03-07T01:17:37.481435352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: 
active requests=0, bytes read=6039889" Mar 7 01:17:37.485520 containerd[1713]: time="2026-03-07T01:17:37.485371912Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:37.490095 containerd[1713]: time="2026-03-07T01:17:37.490031484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:37.490871 containerd[1713]: time="2026-03-07T01:17:37.490711695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.594816948s" Mar 7 01:17:37.490871 containerd[1713]: time="2026-03-07T01:17:37.490752095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 01:17:37.499389 containerd[1713]: time="2026-03-07T01:17:37.499354928Z" level=info msg="CreateContainer within sandbox \"6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 01:17:37.541302 containerd[1713]: time="2026-03-07T01:17:37.541260173Z" level=info msg="CreateContainer within sandbox \"6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"161e9c8dd9d4cb80e41ccc0e564acfe7121ee17844b1b38aed7471a2f948ac70\"" Mar 7 01:17:37.542892 containerd[1713]: time="2026-03-07T01:17:37.542013284Z" level=info msg="StartContainer for 
\"161e9c8dd9d4cb80e41ccc0e564acfe7121ee17844b1b38aed7471a2f948ac70\"" Mar 7 01:17:37.577283 systemd[1]: Started cri-containerd-161e9c8dd9d4cb80e41ccc0e564acfe7121ee17844b1b38aed7471a2f948ac70.scope - libcontainer container 161e9c8dd9d4cb80e41ccc0e564acfe7121ee17844b1b38aed7471a2f948ac70. Mar 7 01:17:37.615703 systemd-networkd[1579]: vxlan.calico: Gained IPv6LL Mar 7 01:17:37.624798 containerd[1713]: time="2026-03-07T01:17:37.624753558Z" level=info msg="StartContainer for \"161e9c8dd9d4cb80e41ccc0e564acfe7121ee17844b1b38aed7471a2f948ac70\" returns successfully" Mar 7 01:17:37.627819 containerd[1713]: time="2026-03-07T01:17:37.627562301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 01:17:39.277961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3123257834.mount: Deactivated successfully. Mar 7 01:17:39.344581 containerd[1713]: time="2026-03-07T01:17:39.344463829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:39.347798 containerd[1713]: time="2026-03-07T01:17:39.347705678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 7 01:17:39.350981 containerd[1713]: time="2026-03-07T01:17:39.350929328Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:39.355321 containerd[1713]: time="2026-03-07T01:17:39.355270795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:39.356631 containerd[1713]: time="2026-03-07T01:17:39.356116908Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id 
\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.728499406s" Mar 7 01:17:39.356631 containerd[1713]: time="2026-03-07T01:17:39.356186509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 7 01:17:39.363609 containerd[1713]: time="2026-03-07T01:17:39.363580223Z" level=info msg="CreateContainer within sandbox \"6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 01:17:39.403807 containerd[1713]: time="2026-03-07T01:17:39.403766441Z" level=info msg="CreateContainer within sandbox \"6f15d838f3a7687b8879c9c8af35b5fa7569bba4fe15cce62cfee39626e200cb\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"80c4c90fe778cc38a3d9a683ec9e909ba6818bb57b6f39898133d9c182e80793\"" Mar 7 01:17:39.405853 containerd[1713]: time="2026-03-07T01:17:39.404505753Z" level=info msg="StartContainer for \"80c4c90fe778cc38a3d9a683ec9e909ba6818bb57b6f39898133d9c182e80793\"" Mar 7 01:17:39.443507 systemd[1]: Started cri-containerd-80c4c90fe778cc38a3d9a683ec9e909ba6818bb57b6f39898133d9c182e80793.scope - libcontainer container 80c4c90fe778cc38a3d9a683ec9e909ba6818bb57b6f39898133d9c182e80793. 
Mar 7 01:17:39.490177 containerd[1713]: time="2026-03-07T01:17:39.490092070Z" level=info msg="StartContainer for \"80c4c90fe778cc38a3d9a683ec9e909ba6818bb57b6f39898133d9c182e80793\" returns successfully" Mar 7 01:17:40.092644 kubelet[3183]: I0307 01:17:40.092577 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-f799dc74-nnsmh" podStartSLOduration=1.630391353 podStartE2EDuration="5.092556044s" podCreationTimestamp="2026-03-07 01:17:35 +0000 UTC" firstStartedPulling="2026-03-07 01:17:35.895207636 +0000 UTC m=+49.207025135" lastFinishedPulling="2026-03-07 01:17:39.357372327 +0000 UTC m=+52.669189826" observedRunningTime="2026-03-07 01:17:40.092459742 +0000 UTC m=+53.404277241" watchObservedRunningTime="2026-03-07 01:17:40.092556044 +0000 UTC m=+53.404373543" Mar 7 01:17:43.797255 containerd[1713]: time="2026-03-07T01:17:43.796795673Z" level=info msg="StopPodSandbox for \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\"" Mar 7 01:17:43.797823 containerd[1713]: time="2026-03-07T01:17:43.797587185Z" level=info msg="StopPodSandbox for \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\"" Mar 7 01:17:43.800557 containerd[1713]: time="2026-03-07T01:17:43.800518930Z" level=info msg="StopPodSandbox for \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\"" Mar 7 01:17:43.801177 containerd[1713]: time="2026-03-07T01:17:43.801000438Z" level=info msg="StopPodSandbox for \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\"" Mar 7 01:17:44.022665 containerd[1713]: 2026-03-07 01:17:43.921 [INFO][4924] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Mar 7 01:17:44.022665 containerd[1713]: 2026-03-07 01:17:43.923 [INFO][4924] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" iface="eth0" netns="/var/run/netns/cni-268244b4-d46d-bccd-7b98-6b9dfeb536a0" Mar 7 01:17:44.022665 containerd[1713]: 2026-03-07 01:17:43.923 [INFO][4924] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" iface="eth0" netns="/var/run/netns/cni-268244b4-d46d-bccd-7b98-6b9dfeb536a0" Mar 7 01:17:44.022665 containerd[1713]: 2026-03-07 01:17:43.923 [INFO][4924] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" iface="eth0" netns="/var/run/netns/cni-268244b4-d46d-bccd-7b98-6b9dfeb536a0" Mar 7 01:17:44.022665 containerd[1713]: 2026-03-07 01:17:43.923 [INFO][4924] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Mar 7 01:17:44.022665 containerd[1713]: 2026-03-07 01:17:43.924 [INFO][4924] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Mar 7 01:17:44.022665 containerd[1713]: 2026-03-07 01:17:43.995 [INFO][4953] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" HandleID="k8s-pod-network.b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" Mar 7 01:17:44.022665 containerd[1713]: 2026-03-07 01:17:43.996 [INFO][4953] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:44.022665 containerd[1713]: 2026-03-07 01:17:43.996 [INFO][4953] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:44.022665 containerd[1713]: 2026-03-07 01:17:44.014 [WARNING][4953] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" HandleID="k8s-pod-network.b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" Mar 7 01:17:44.022665 containerd[1713]: 2026-03-07 01:17:44.014 [INFO][4953] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" HandleID="k8s-pod-network.b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" Mar 7 01:17:44.022665 containerd[1713]: 2026-03-07 01:17:44.016 [INFO][4953] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:44.022665 containerd[1713]: 2026-03-07 01:17:44.019 [INFO][4924] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Mar 7 01:17:44.028177 containerd[1713]: time="2026-03-07T01:17:44.023948871Z" level=info msg="TearDown network for sandbox \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\" successfully" Mar 7 01:17:44.028177 containerd[1713]: time="2026-03-07T01:17:44.023992972Z" level=info msg="StopPodSandbox for \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\" returns successfully" Mar 7 01:17:44.028177 containerd[1713]: time="2026-03-07T01:17:44.026823315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-clwb7,Uid:202162ba-341e-473d-92ac-5b724859334f,Namespace:kube-system,Attempt:1,}" Mar 7 01:17:44.029603 systemd[1]: run-netns-cni\x2d268244b4\x2dd46d\x2dbccd\x2d7b98\x2d6b9dfeb536a0.mount: Deactivated successfully. 
Mar 7 01:17:44.037188 containerd[1713]: 2026-03-07 01:17:43.938 [INFO][4928] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Mar 7 01:17:44.037188 containerd[1713]: 2026-03-07 01:17:43.939 [INFO][4928] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" iface="eth0" netns="/var/run/netns/cni-6e9eff4f-0ac3-d26b-1572-9b9862cc5d81" Mar 7 01:17:44.037188 containerd[1713]: 2026-03-07 01:17:43.939 [INFO][4928] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" iface="eth0" netns="/var/run/netns/cni-6e9eff4f-0ac3-d26b-1572-9b9862cc5d81" Mar 7 01:17:44.037188 containerd[1713]: 2026-03-07 01:17:43.940 [INFO][4928] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" iface="eth0" netns="/var/run/netns/cni-6e9eff4f-0ac3-d26b-1572-9b9862cc5d81" Mar 7 01:17:44.037188 containerd[1713]: 2026-03-07 01:17:43.940 [INFO][4928] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Mar 7 01:17:44.037188 containerd[1713]: 2026-03-07 01:17:43.940 [INFO][4928] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Mar 7 01:17:44.037188 containerd[1713]: 2026-03-07 01:17:44.011 [INFO][4958] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" HandleID="k8s-pod-network.91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" Mar 7 01:17:44.037188 containerd[1713]: 2026-03-07 01:17:44.012 [INFO][4958] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:44.037188 containerd[1713]: 2026-03-07 01:17:44.016 [INFO][4958] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:44.037188 containerd[1713]: 2026-03-07 01:17:44.028 [WARNING][4958] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" HandleID="k8s-pod-network.91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" Mar 7 01:17:44.037188 containerd[1713]: 2026-03-07 01:17:44.028 [INFO][4958] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" HandleID="k8s-pod-network.91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" Mar 7 01:17:44.037188 containerd[1713]: 2026-03-07 01:17:44.031 [INFO][4958] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:44.037188 containerd[1713]: 2026-03-07 01:17:44.034 [INFO][4928] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Mar 7 01:17:44.039571 containerd[1713]: time="2026-03-07T01:17:44.039502711Z" level=info msg="TearDown network for sandbox \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\" successfully" Mar 7 01:17:44.039787 containerd[1713]: time="2026-03-07T01:17:44.039670213Z" level=info msg="StopPodSandbox for \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\" returns successfully" Mar 7 01:17:44.041244 systemd[1]: run-netns-cni\x2d6e9eff4f\x2d0ac3\x2dd26b\x2d1572\x2d9b9862cc5d81.mount: Deactivated successfully. 
Mar 7 01:17:44.043394 containerd[1713]: time="2026-03-07T01:17:44.042865062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-q5ltk,Uid:d30623bf-c631-4b94-bf09-b8c8c11932cf,Namespace:calico-system,Attempt:1,}" Mar 7 01:17:44.054413 containerd[1713]: 2026-03-07 01:17:43.940 [INFO][4927] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Mar 7 01:17:44.054413 containerd[1713]: 2026-03-07 01:17:43.940 [INFO][4927] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" iface="eth0" netns="/var/run/netns/cni-983b78d0-a2c8-db6c-8548-3aac27de3b21" Mar 7 01:17:44.054413 containerd[1713]: 2026-03-07 01:17:43.942 [INFO][4927] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" iface="eth0" netns="/var/run/netns/cni-983b78d0-a2c8-db6c-8548-3aac27de3b21" Mar 7 01:17:44.054413 containerd[1713]: 2026-03-07 01:17:43.947 [INFO][4927] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" iface="eth0" netns="/var/run/netns/cni-983b78d0-a2c8-db6c-8548-3aac27de3b21" Mar 7 01:17:44.054413 containerd[1713]: 2026-03-07 01:17:43.947 [INFO][4927] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Mar 7 01:17:44.054413 containerd[1713]: 2026-03-07 01:17:43.947 [INFO][4927] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Mar 7 01:17:44.054413 containerd[1713]: 2026-03-07 01:17:44.014 [INFO][4960] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" HandleID="k8s-pod-network.24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" Mar 7 01:17:44.054413 containerd[1713]: 2026-03-07 01:17:44.014 [INFO][4960] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:44.054413 containerd[1713]: 2026-03-07 01:17:44.031 [INFO][4960] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:44.054413 containerd[1713]: 2026-03-07 01:17:44.045 [WARNING][4960] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" HandleID="k8s-pod-network.24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" Mar 7 01:17:44.054413 containerd[1713]: 2026-03-07 01:17:44.045 [INFO][4960] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" HandleID="k8s-pod-network.24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" Mar 7 01:17:44.054413 containerd[1713]: 2026-03-07 01:17:44.046 [INFO][4960] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:44.054413 containerd[1713]: 2026-03-07 01:17:44.049 [INFO][4927] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Mar 7 01:17:44.056286 containerd[1713]: time="2026-03-07T01:17:44.056240968Z" level=info msg="TearDown network for sandbox \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\" successfully" Mar 7 01:17:44.059652 containerd[1713]: time="2026-03-07T01:17:44.058185798Z" level=info msg="StopPodSandbox for \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\" returns successfully" Mar 7 01:17:44.060205 containerd[1713]: time="2026-03-07T01:17:44.060115628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhgml,Uid:8ce69653-849d-4201-90d6-36652ad2c82f,Namespace:kube-system,Attempt:1,}" Mar 7 01:17:44.063070 systemd[1]: run-netns-cni\x2d983b78d0\x2da2c8\x2ddb6c\x2d8548\x2d3aac27de3b21.mount: Deactivated successfully. 
Mar 7 01:17:44.064285 containerd[1713]: 2026-03-07 01:17:43.946 [INFO][4926] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Mar 7 01:17:44.064285 containerd[1713]: 2026-03-07 01:17:43.946 [INFO][4926] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" iface="eth0" netns="/var/run/netns/cni-864dd631-7de6-7e00-2bae-5682eefa6f23" Mar 7 01:17:44.064285 containerd[1713]: 2026-03-07 01:17:43.947 [INFO][4926] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" iface="eth0" netns="/var/run/netns/cni-864dd631-7de6-7e00-2bae-5682eefa6f23" Mar 7 01:17:44.064285 containerd[1713]: 2026-03-07 01:17:43.948 [INFO][4926] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" iface="eth0" netns="/var/run/netns/cni-864dd631-7de6-7e00-2bae-5682eefa6f23" Mar 7 01:17:44.064285 containerd[1713]: 2026-03-07 01:17:43.948 [INFO][4926] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Mar 7 01:17:44.064285 containerd[1713]: 2026-03-07 01:17:43.949 [INFO][4926] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Mar 7 01:17:44.064285 containerd[1713]: 2026-03-07 01:17:44.016 [INFO][4965] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" HandleID="k8s-pod-network.1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" Mar 7 01:17:44.064285 containerd[1713]: 2026-03-07 01:17:44.017 
[INFO][4965] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:44.064285 containerd[1713]: 2026-03-07 01:17:44.046 [INFO][4965] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:44.064285 containerd[1713]: 2026-03-07 01:17:44.053 [WARNING][4965] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" HandleID="k8s-pod-network.1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" Mar 7 01:17:44.064285 containerd[1713]: 2026-03-07 01:17:44.053 [INFO][4965] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" HandleID="k8s-pod-network.1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" Mar 7 01:17:44.064285 containerd[1713]: 2026-03-07 01:17:44.059 [INFO][4965] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:44.064285 containerd[1713]: 2026-03-07 01:17:44.061 [INFO][4926] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Mar 7 01:17:44.066520 containerd[1713]: time="2026-03-07T01:17:44.064800200Z" level=info msg="TearDown network for sandbox \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\" successfully" Mar 7 01:17:44.066520 containerd[1713]: time="2026-03-07T01:17:44.064823401Z" level=info msg="StopPodSandbox for \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\" returns successfully" Mar 7 01:17:44.066520 containerd[1713]: time="2026-03-07T01:17:44.065574212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd94b965c-xvz8c,Uid:a95025c4-1bc2-4e58-8c10-f597fc15642d,Namespace:calico-system,Attempt:1,}" Mar 7 01:17:44.069436 systemd[1]: run-netns-cni\x2d864dd631\x2d7de6\x2d7e00\x2d2bae\x2d5682eefa6f23.mount: Deactivated successfully. Mar 7 01:17:44.287600 systemd-networkd[1579]: calida7ad7a45d0: Link UP Mar 7 01:17:44.290102 systemd-networkd[1579]: calida7ad7a45d0: Gained carrier Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.130 [INFO][4981] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0 coredns-674b8bbfcf- kube-system 202162ba-341e-473d-92ac-5b724859334f 967 0 2026-03-07 01:16:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-baf9cf72b8 coredns-674b8bbfcf-clwb7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calida7ad7a45d0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" Namespace="kube-system" Pod="coredns-674b8bbfcf-clwb7" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-" Mar 7 01:17:44.325382 
containerd[1713]: 2026-03-07 01:17:44.130 [INFO][4981] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" Namespace="kube-system" Pod="coredns-674b8bbfcf-clwb7" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.191 [INFO][4993] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" HandleID="k8s-pod-network.9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.211 [INFO][4993] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" HandleID="k8s-pod-network.9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277400), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-baf9cf72b8", "pod":"coredns-674b8bbfcf-clwb7", "timestamp":"2026-03-07 01:17:44.191448351 +0000 UTC"}, Hostname:"ci-4081.3.6-n-baf9cf72b8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001f2dc0)} Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.211 [INFO][4993] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.212 [INFO][4993] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.212 [INFO][4993] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-baf9cf72b8' Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.218 [INFO][4993] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.224 [INFO][4993] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.235 [INFO][4993] ipam/ipam.go 526: Trying affinity for 192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.241 [INFO][4993] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.248 [INFO][4993] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.248 [INFO][4993] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.250 [INFO][4993] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18 Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.259 [INFO][4993] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.273 [INFO][4993] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.74.130/26] block=192.168.74.128/26 handle="k8s-pod-network.9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.273 [INFO][4993] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.130/26] handle="k8s-pod-network.9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.273 [INFO][4993] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:44.325382 containerd[1713]: 2026-03-07 01:17:44.273 [INFO][4993] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.130/26] IPv6=[] ContainerID="9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" HandleID="k8s-pod-network.9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" Mar 7 01:17:44.327461 containerd[1713]: 2026-03-07 01:17:44.281 [INFO][4981] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" Namespace="kube-system" Pod="coredns-674b8bbfcf-clwb7" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"202162ba-341e-473d-92ac-5b724859334f", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"", Pod:"coredns-674b8bbfcf-clwb7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida7ad7a45d0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:44.327461 containerd[1713]: 2026-03-07 01:17:44.282 [INFO][4981] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.130/32] ContainerID="9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" Namespace="kube-system" Pod="coredns-674b8bbfcf-clwb7" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" Mar 7 01:17:44.327461 containerd[1713]: 2026-03-07 01:17:44.282 [INFO][4981] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida7ad7a45d0 ContainerID="9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" Namespace="kube-system" Pod="coredns-674b8bbfcf-clwb7" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" Mar 7 01:17:44.327461 containerd[1713]: 2026-03-07 01:17:44.294 [INFO][4981] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" Namespace="kube-system" Pod="coredns-674b8bbfcf-clwb7" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" Mar 7 01:17:44.327461 containerd[1713]: 2026-03-07 01:17:44.295 [INFO][4981] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" Namespace="kube-system" Pod="coredns-674b8bbfcf-clwb7" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"202162ba-341e-473d-92ac-5b724859334f", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18", Pod:"coredns-674b8bbfcf-clwb7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida7ad7a45d0", MAC:"32:70:9f:1b:38:57", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:44.327461 containerd[1713]: 2026-03-07 01:17:44.316 [INFO][4981] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18" Namespace="kube-system" Pod="coredns-674b8bbfcf-clwb7" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" Mar 7 01:17:44.402261 containerd[1713]: time="2026-03-07T01:17:44.401306882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:44.402261 containerd[1713]: time="2026-03-07T01:17:44.401377183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:44.402261 containerd[1713]: time="2026-03-07T01:17:44.401416784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:44.402261 containerd[1713]: time="2026-03-07T01:17:44.401531186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:44.413368 systemd-networkd[1579]: cali50f01edcc70: Link UP Mar 7 01:17:44.413641 systemd-networkd[1579]: cali50f01edcc70: Gained carrier Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.219 [INFO][4997] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0 goldmane-5b85766d88- calico-system d30623bf-c631-4b94-bf09-b8c8c11932cf 968 0 2026-03-07 01:17:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-baf9cf72b8 goldmane-5b85766d88-q5ltk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali50f01edcc70 [] [] }} ContainerID="cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" Namespace="calico-system" Pod="goldmane-5b85766d88-q5ltk" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-" Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.220 [INFO][4997] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" Namespace="calico-system" Pod="goldmane-5b85766d88-q5ltk" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.302 [INFO][5032] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" HandleID="k8s-pod-network.cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.329 [INFO][5032] ipam/ipam_plugin.go 301: 
Auto assigning IP ContainerID="cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" HandleID="k8s-pod-network.cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-baf9cf72b8", "pod":"goldmane-5b85766d88-q5ltk", "timestamp":"2026-03-07 01:17:44.302825466 +0000 UTC"}, Hostname:"ci-4081.3.6-n-baf9cf72b8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001142c0)} Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.329 [INFO][5032] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.329 [INFO][5032] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.329 [INFO][5032] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-baf9cf72b8' Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.340 [INFO][5032] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.349 [INFO][5032] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.358 [INFO][5032] ipam/ipam.go 526: Trying affinity for 192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.361 [INFO][5032] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.364 [INFO][5032] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.364 [INFO][5032] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.367 [INFO][5032] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864 Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.382 [INFO][5032] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.397 [INFO][5032] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.74.131/26] block=192.168.74.128/26 handle="k8s-pod-network.cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.398 [INFO][5032] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.131/26] handle="k8s-pod-network.cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.398 [INFO][5032] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:44.439641 containerd[1713]: 2026-03-07 01:17:44.398 [INFO][5032] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.131/26] IPv6=[] ContainerID="cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" HandleID="k8s-pod-network.cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" Mar 7 01:17:44.443001 containerd[1713]: 2026-03-07 01:17:44.407 [INFO][4997] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" Namespace="calico-system" Pod="goldmane-5b85766d88-q5ltk" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d30623bf-c631-4b94-bf09-b8c8c11932cf", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"", Pod:"goldmane-5b85766d88-q5ltk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali50f01edcc70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:44.443001 containerd[1713]: 2026-03-07 01:17:44.408 [INFO][4997] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.131/32] ContainerID="cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" Namespace="calico-system" Pod="goldmane-5b85766d88-q5ltk" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" Mar 7 01:17:44.443001 containerd[1713]: 2026-03-07 01:17:44.408 [INFO][4997] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50f01edcc70 ContainerID="cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" Namespace="calico-system" Pod="goldmane-5b85766d88-q5ltk" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" Mar 7 01:17:44.443001 containerd[1713]: 2026-03-07 01:17:44.413 [INFO][4997] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" Namespace="calico-system" Pod="goldmane-5b85766d88-q5ltk" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" Mar 7 01:17:44.443001 containerd[1713]: 2026-03-07 01:17:44.413 [INFO][4997] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" Namespace="calico-system" Pod="goldmane-5b85766d88-q5ltk" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d30623bf-c631-4b94-bf09-b8c8c11932cf", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864", Pod:"goldmane-5b85766d88-q5ltk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali50f01edcc70", MAC:"6e:26:79:cc:e5:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:44.443001 containerd[1713]: 2026-03-07 01:17:44.436 [INFO][4997] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864" Namespace="calico-system" Pod="goldmane-5b85766d88-q5ltk" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" Mar 7 01:17:44.447711 systemd[1]: Started cri-containerd-9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18.scope - libcontainer container 9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18. Mar 7 01:17:44.485917 containerd[1713]: time="2026-03-07T01:17:44.485400477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:44.485917 containerd[1713]: time="2026-03-07T01:17:44.485467978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:44.485917 containerd[1713]: time="2026-03-07T01:17:44.485489779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:44.485917 containerd[1713]: time="2026-03-07T01:17:44.485576080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:44.514979 systemd[1]: Started cri-containerd-cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864.scope - libcontainer container cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864. 
Mar 7 01:17:44.516940 systemd-networkd[1579]: calie8ee418e2a2: Link UP Mar 7 01:17:44.517170 systemd-networkd[1579]: calie8ee418e2a2: Gained carrier Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.256 [INFO][5013] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0 calico-kube-controllers-7fd94b965c- calico-system a95025c4-1bc2-4e58-8c10-f597fc15642d 970 0 2026-03-07 01:17:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fd94b965c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-baf9cf72b8 calico-kube-controllers-7fd94b965c-xvz8c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie8ee418e2a2 [] [] }} ContainerID="2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" Namespace="calico-system" Pod="calico-kube-controllers-7fd94b965c-xvz8c" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-" Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.256 [INFO][5013] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" Namespace="calico-system" Pod="calico-kube-controllers-7fd94b965c-xvz8c" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.360 [INFO][5040] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" HandleID="k8s-pod-network.2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" 
Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.392 [INFO][5040] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" HandleID="k8s-pod-network.2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003783e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-baf9cf72b8", "pod":"calico-kube-controllers-7fd94b965c-xvz8c", "timestamp":"2026-03-07 01:17:44.360675957 +0000 UTC"}, Hostname:"ci-4081.3.6-n-baf9cf72b8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000554580)} Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.392 [INFO][5040] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.399 [INFO][5040] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.400 [INFO][5040] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-baf9cf72b8' Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.439 [INFO][5040] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.449 [INFO][5040] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.460 [INFO][5040] ipam/ipam.go 526: Trying affinity for 192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.466 [INFO][5040] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.471 [INFO][5040] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.471 [INFO][5040] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.476 [INFO][5040] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152 Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.487 [INFO][5040] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.502 [INFO][5040] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.74.132/26] block=192.168.74.128/26 handle="k8s-pod-network.2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.502 [INFO][5040] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.132/26] handle="k8s-pod-network.2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.502 [INFO][5040] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:44.552632 containerd[1713]: 2026-03-07 01:17:44.502 [INFO][5040] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.132/26] IPv6=[] ContainerID="2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" HandleID="k8s-pod-network.2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" Mar 7 01:17:44.554932 containerd[1713]: 2026-03-07 01:17:44.509 [INFO][5013] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" Namespace="calico-system" Pod="calico-kube-controllers-7fd94b965c-xvz8c" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0", GenerateName:"calico-kube-controllers-7fd94b965c-", Namespace:"calico-system", SelfLink:"", UID:"a95025c4-1bc2-4e58-8c10-f597fc15642d", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd94b965c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"", Pod:"calico-kube-controllers-7fd94b965c-xvz8c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8ee418e2a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:44.554932 containerd[1713]: 2026-03-07 01:17:44.509 [INFO][5013] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.132/32] ContainerID="2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" Namespace="calico-system" Pod="calico-kube-controllers-7fd94b965c-xvz8c" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" Mar 7 01:17:44.554932 containerd[1713]: 2026-03-07 01:17:44.509 [INFO][5013] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8ee418e2a2 ContainerID="2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" Namespace="calico-system" Pod="calico-kube-controllers-7fd94b965c-xvz8c" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" Mar 7 01:17:44.554932 containerd[1713]: 2026-03-07 01:17:44.518 [INFO][5013] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" Namespace="calico-system" 
Pod="calico-kube-controllers-7fd94b965c-xvz8c" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" Mar 7 01:17:44.554932 containerd[1713]: 2026-03-07 01:17:44.520 [INFO][5013] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" Namespace="calico-system" Pod="calico-kube-controllers-7fd94b965c-xvz8c" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0", GenerateName:"calico-kube-controllers-7fd94b965c-", Namespace:"calico-system", SelfLink:"", UID:"a95025c4-1bc2-4e58-8c10-f597fc15642d", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd94b965c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152", Pod:"calico-kube-controllers-7fd94b965c-xvz8c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8ee418e2a2", MAC:"c6:78:9b:5b:ff:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:44.554932 containerd[1713]: 2026-03-07 01:17:44.550 [INFO][5013] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152" Namespace="calico-system" Pod="calico-kube-controllers-7fd94b965c-xvz8c" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" Mar 7 01:17:44.603893 containerd[1713]: time="2026-03-07T01:17:44.603848202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-clwb7,Uid:202162ba-341e-473d-92ac-5b724859334f,Namespace:kube-system,Attempt:1,} returns sandbox id \"9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18\"" Mar 7 01:17:44.617415 containerd[1713]: time="2026-03-07T01:17:44.617290609Z" level=info msg="CreateContainer within sandbox \"9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:17:44.619217 containerd[1713]: time="2026-03-07T01:17:44.618854733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:44.619217 containerd[1713]: time="2026-03-07T01:17:44.618941434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:44.619217 containerd[1713]: time="2026-03-07T01:17:44.618962334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:44.619465 containerd[1713]: time="2026-03-07T01:17:44.619236339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:44.639757 systemd-networkd[1579]: cali067b8f892d5: Link UP Mar 7 01:17:44.641080 systemd-networkd[1579]: cali067b8f892d5: Gained carrier Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.258 [INFO][5008] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0 coredns-674b8bbfcf- kube-system 8ce69653-849d-4201-90d6-36652ad2c82f 969 0 2026-03-07 01:16:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-baf9cf72b8 coredns-674b8bbfcf-dhgml eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali067b8f892d5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhgml" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-" Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.264 [INFO][5008] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhgml" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.392 [INFO][5046] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" HandleID="k8s-pod-network.f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.408 [INFO][5046] ipam/ipam_plugin.go 301: 
Auto assigning IP ContainerID="f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" HandleID="k8s-pod-network.f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee700), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-baf9cf72b8", "pod":"coredns-674b8bbfcf-dhgml", "timestamp":"2026-03-07 01:17:44.39269855 +0000 UTC"}, Hostname:"ci-4081.3.6-n-baf9cf72b8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001146e0)} Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.408 [INFO][5046] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.508 [INFO][5046] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.508 [INFO][5046] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-baf9cf72b8' Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.548 [INFO][5046] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.559 [INFO][5046] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.566 [INFO][5046] ipam/ipam.go 526: Trying affinity for 192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.572 [INFO][5046] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.579 [INFO][5046] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.579 [INFO][5046] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.586 [INFO][5046] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610 Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.613 [INFO][5046] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.629 [INFO][5046] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.74.133/26] block=192.168.74.128/26 handle="k8s-pod-network.f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.629 [INFO][5046] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.133/26] handle="k8s-pod-network.f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.629 [INFO][5046] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:44.669655 containerd[1713]: 2026-03-07 01:17:44.629 [INFO][5046] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.133/26] IPv6=[] ContainerID="f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" HandleID="k8s-pod-network.f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" Mar 7 01:17:44.673885 containerd[1713]: 2026-03-07 01:17:44.633 [INFO][5008] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhgml" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8ce69653-849d-4201-90d6-36652ad2c82f", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"", Pod:"coredns-674b8bbfcf-dhgml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali067b8f892d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:44.673885 containerd[1713]: 2026-03-07 01:17:44.634 [INFO][5008] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.133/32] ContainerID="f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhgml" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" Mar 7 01:17:44.673885 containerd[1713]: 2026-03-07 01:17:44.634 [INFO][5008] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali067b8f892d5 ContainerID="f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhgml" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" Mar 7 01:17:44.673885 containerd[1713]: 2026-03-07 01:17:44.639 [INFO][5008] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhgml" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" Mar 7 01:17:44.673885 containerd[1713]: 2026-03-07 01:17:44.639 [INFO][5008] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhgml" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8ce69653-849d-4201-90d6-36652ad2c82f", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610", Pod:"coredns-674b8bbfcf-dhgml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali067b8f892d5", MAC:"ae:72:72:76:4e:9d", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:44.673885 containerd[1713]: 2026-03-07 01:17:44.662 [INFO][5008] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhgml" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" Mar 7 01:17:44.674594 systemd[1]: Started cri-containerd-2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152.scope - libcontainer container 2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152. 
Mar 7 01:17:44.684376 containerd[1713]: time="2026-03-07T01:17:44.682855118Z" level=info msg="CreateContainer within sandbox \"9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b064a0def6fc8318042bcfa06023e9ab832f262f5f8e1c9968825d7040c884c3\"" Mar 7 01:17:44.686734 containerd[1713]: time="2026-03-07T01:17:44.685212055Z" level=info msg="StartContainer for \"b064a0def6fc8318042bcfa06023e9ab832f262f5f8e1c9968825d7040c884c3\"" Mar 7 01:17:44.703698 containerd[1713]: time="2026-03-07T01:17:44.703490336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-q5ltk,Uid:d30623bf-c631-4b94-bf09-b8c8c11932cf,Namespace:calico-system,Attempt:1,} returns sandbox id \"cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864\"" Mar 7 01:17:44.709908 containerd[1713]: time="2026-03-07T01:17:44.709880634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 01:17:44.767365 systemd[1]: Started cri-containerd-b064a0def6fc8318042bcfa06023e9ab832f262f5f8e1c9968825d7040c884c3.scope - libcontainer container b064a0def6fc8318042bcfa06023e9ab832f262f5f8e1c9968825d7040c884c3. Mar 7 01:17:44.779355 containerd[1713]: time="2026-03-07T01:17:44.774382728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:44.779355 containerd[1713]: time="2026-03-07T01:17:44.774444329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:44.779355 containerd[1713]: time="2026-03-07T01:17:44.774465529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:44.779355 containerd[1713]: time="2026-03-07T01:17:44.774551230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:44.815019 systemd[1]: Started cri-containerd-f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610.scope - libcontainer container f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610. Mar 7 01:17:44.855627 containerd[1713]: time="2026-03-07T01:17:44.855503177Z" level=info msg="StartContainer for \"b064a0def6fc8318042bcfa06023e9ab832f262f5f8e1c9968825d7040c884c3\" returns successfully" Mar 7 01:17:44.878540 containerd[1713]: time="2026-03-07T01:17:44.878399330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd94b965c-xvz8c,Uid:a95025c4-1bc2-4e58-8c10-f597fc15642d,Namespace:calico-system,Attempt:1,} returns sandbox id \"2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152\"" Mar 7 01:17:44.898086 containerd[1713]: time="2026-03-07T01:17:44.897962231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhgml,Uid:8ce69653-849d-4201-90d6-36652ad2c82f,Namespace:kube-system,Attempt:1,} returns sandbox id \"f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610\"" Mar 7 01:17:44.908938 containerd[1713]: time="2026-03-07T01:17:44.908899899Z" level=info msg="CreateContainer within sandbox \"f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:17:44.947960 containerd[1713]: time="2026-03-07T01:17:44.947907300Z" level=info msg="CreateContainer within sandbox \"f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ae05e71e72483243ac307bcdebddcbf73a94df6fef344ccf0a8a57be3ed97c3\"" Mar 7 01:17:44.948837 containerd[1713]: time="2026-03-07T01:17:44.948802114Z" level=info msg="StartContainer for \"2ae05e71e72483243ac307bcdebddcbf73a94df6fef344ccf0a8a57be3ed97c3\"" Mar 7 01:17:44.982398 systemd[1]: Started 
cri-containerd-2ae05e71e72483243ac307bcdebddcbf73a94df6fef344ccf0a8a57be3ed97c3.scope - libcontainer container 2ae05e71e72483243ac307bcdebddcbf73a94df6fef344ccf0a8a57be3ed97c3. Mar 7 01:17:45.022515 containerd[1713]: time="2026-03-07T01:17:45.020016111Z" level=info msg="StartContainer for \"2ae05e71e72483243ac307bcdebddcbf73a94df6fef344ccf0a8a57be3ed97c3\" returns successfully" Mar 7 01:17:45.112389 kubelet[3183]: I0307 01:17:45.112228 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-clwb7" podStartSLOduration=52.11220743 podStartE2EDuration="52.11220743s" podCreationTimestamp="2026-03-07 01:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:17:45.112046628 +0000 UTC m=+58.423864227" watchObservedRunningTime="2026-03-07 01:17:45.11220743 +0000 UTC m=+58.424024929" Mar 7 01:17:45.549346 systemd-networkd[1579]: calida7ad7a45d0: Gained IPv6LL Mar 7 01:17:45.796637 containerd[1713]: time="2026-03-07T01:17:45.796590270Z" level=info msg="StopPodSandbox for \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\"" Mar 7 01:17:45.797420 containerd[1713]: time="2026-03-07T01:17:45.796888574Z" level=info msg="StopPodSandbox for \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\"" Mar 7 01:17:45.805867 systemd-networkd[1579]: calie8ee418e2a2: Gained IPv6LL Mar 7 01:17:45.871075 systemd-networkd[1579]: cali50f01edcc70: Gained IPv6LL Mar 7 01:17:45.913251 kubelet[3183]: I0307 01:17:45.913183 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dhgml" podStartSLOduration=52.913145065 podStartE2EDuration="52.913145065s" podCreationTimestamp="2026-03-07 01:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:17:45.163700623 +0000 UTC 
m=+58.475518222" watchObservedRunningTime="2026-03-07 01:17:45.913145065 +0000 UTC m=+59.224977164" Mar 7 01:17:45.998598 systemd-networkd[1579]: cali067b8f892d5: Gained IPv6LL Mar 7 01:17:46.052417 containerd[1713]: 2026-03-07 01:17:45.909 [INFO][5378] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Mar 7 01:17:46.052417 containerd[1713]: 2026-03-07 01:17:45.911 [INFO][5378] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" iface="eth0" netns="/var/run/netns/cni-f5b85f59-59e9-5cb9-4550-4555181086c8" Mar 7 01:17:46.052417 containerd[1713]: 2026-03-07 01:17:45.911 [INFO][5378] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" iface="eth0" netns="/var/run/netns/cni-f5b85f59-59e9-5cb9-4550-4555181086c8" Mar 7 01:17:46.052417 containerd[1713]: 2026-03-07 01:17:45.911 [INFO][5378] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" iface="eth0" netns="/var/run/netns/cni-f5b85f59-59e9-5cb9-4550-4555181086c8" Mar 7 01:17:46.052417 containerd[1713]: 2026-03-07 01:17:45.911 [INFO][5378] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Mar 7 01:17:46.052417 containerd[1713]: 2026-03-07 01:17:45.911 [INFO][5378] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Mar 7 01:17:46.052417 containerd[1713]: 2026-03-07 01:17:45.999 [INFO][5396] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" HandleID="k8s-pod-network.91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" Mar 7 01:17:46.052417 containerd[1713]: 2026-03-07 01:17:46.001 [INFO][5396] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:46.052417 containerd[1713]: 2026-03-07 01:17:46.001 [INFO][5396] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:46.052417 containerd[1713]: 2026-03-07 01:17:46.028 [WARNING][5396] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" HandleID="k8s-pod-network.91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" Mar 7 01:17:46.052417 containerd[1713]: 2026-03-07 01:17:46.028 [INFO][5396] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" HandleID="k8s-pod-network.91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" Mar 7 01:17:46.052417 containerd[1713]: 2026-03-07 01:17:46.035 [INFO][5396] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:46.052417 containerd[1713]: 2026-03-07 01:17:46.043 [INFO][5378] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Mar 7 01:17:46.062112 containerd[1713]: time="2026-03-07T01:17:46.057400786Z" level=info msg="TearDown network for sandbox \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\" successfully" Mar 7 01:17:46.062112 containerd[1713]: time="2026-03-07T01:17:46.061126244Z" level=info msg="StopPodSandbox for \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\" returns successfully" Mar 7 01:17:46.062754 systemd[1]: run-netns-cni\x2df5b85f59\x2d59e9\x2d5cb9\x2d4550\x2d4555181086c8.mount: Deactivated successfully. 
Mar 7 01:17:46.074020 containerd[1713]: time="2026-03-07T01:17:46.073484234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms76b,Uid:422fccbd-8b44-46d1-b23a-b122dabbbb7c,Namespace:calico-system,Attempt:1,}" Mar 7 01:17:46.084311 containerd[1713]: 2026-03-07 01:17:45.919 [INFO][5377] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Mar 7 01:17:46.084311 containerd[1713]: 2026-03-07 01:17:45.919 [INFO][5377] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" iface="eth0" netns="/var/run/netns/cni-98d82a38-c58b-08fb-5f62-78e381c2e1a4" Mar 7 01:17:46.084311 containerd[1713]: 2026-03-07 01:17:45.920 [INFO][5377] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" iface="eth0" netns="/var/run/netns/cni-98d82a38-c58b-08fb-5f62-78e381c2e1a4" Mar 7 01:17:46.084311 containerd[1713]: 2026-03-07 01:17:45.920 [INFO][5377] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" iface="eth0" netns="/var/run/netns/cni-98d82a38-c58b-08fb-5f62-78e381c2e1a4" Mar 7 01:17:46.084311 containerd[1713]: 2026-03-07 01:17:45.920 [INFO][5377] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Mar 7 01:17:46.084311 containerd[1713]: 2026-03-07 01:17:45.920 [INFO][5377] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Mar 7 01:17:46.084311 containerd[1713]: 2026-03-07 01:17:46.033 [INFO][5398] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" HandleID="k8s-pod-network.5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" Mar 7 01:17:46.084311 containerd[1713]: 2026-03-07 01:17:46.033 [INFO][5398] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:46.084311 containerd[1713]: 2026-03-07 01:17:46.035 [INFO][5398] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:46.084311 containerd[1713]: 2026-03-07 01:17:46.052 [WARNING][5398] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" HandleID="k8s-pod-network.5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" Mar 7 01:17:46.084311 containerd[1713]: 2026-03-07 01:17:46.057 [INFO][5398] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" HandleID="k8s-pod-network.5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" Mar 7 01:17:46.084311 containerd[1713]: 2026-03-07 01:17:46.065 [INFO][5398] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:46.084311 containerd[1713]: 2026-03-07 01:17:46.070 [INFO][5377] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Mar 7 01:17:46.089739 containerd[1713]: time="2026-03-07T01:17:46.084511404Z" level=info msg="TearDown network for sandbox \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\" successfully" Mar 7 01:17:46.089739 containerd[1713]: time="2026-03-07T01:17:46.084541304Z" level=info msg="StopPodSandbox for \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\" returns successfully" Mar 7 01:17:46.094185 systemd[1]: run-netns-cni\x2d98d82a38\x2dc58b\x2d08fb\x2d5f62\x2d78e381c2e1a4.mount: Deactivated successfully. 
Mar 7 01:17:46.105546 containerd[1713]: time="2026-03-07T01:17:46.105481927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b85bc4966-hhwfv,Uid:b5b460bf-5f22-4761-beda-066ecd458571,Namespace:calico-system,Attempt:1,}" Mar 7 01:17:46.465689 systemd-networkd[1579]: cali924d4b8cb60: Link UP Mar 7 01:17:46.465893 systemd-networkd[1579]: cali924d4b8cb60: Gained carrier Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.301 [INFO][5416] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0 csi-node-driver- calico-system 422fccbd-8b44-46d1-b23a-b122dabbbb7c 1011 0 2026-03-07 01:17:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-baf9cf72b8 csi-node-driver-ms76b eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali924d4b8cb60 [] [] }} ContainerID="be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" Namespace="calico-system" Pod="csi-node-driver-ms76b" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-" Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.302 [INFO][5416] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" Namespace="calico-system" Pod="csi-node-driver-ms76b" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.407 [INFO][5444] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" 
HandleID="k8s-pod-network.be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.422 [INFO][5444] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" HandleID="k8s-pod-network.be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fc3f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-baf9cf72b8", "pod":"csi-node-driver-ms76b", "timestamp":"2026-03-07 01:17:46.407096672 +0000 UTC"}, Hostname:"ci-4081.3.6-n-baf9cf72b8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000256580)} Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.423 [INFO][5444] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.423 [INFO][5444] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.423 [INFO][5444] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-baf9cf72b8' Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.425 [INFO][5444] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.430 [INFO][5444] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.437 [INFO][5444] ipam/ipam.go 526: Trying affinity for 192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.439 [INFO][5444] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.441 [INFO][5444] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.441 [INFO][5444] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.443 [INFO][5444] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24 Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.447 [INFO][5444] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.459 [INFO][5444] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.74.134/26] block=192.168.74.128/26 handle="k8s-pod-network.be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.459 [INFO][5444] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.134/26] handle="k8s-pod-network.be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.459 [INFO][5444] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:46.484242 containerd[1713]: 2026-03-07 01:17:46.459 [INFO][5444] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.134/26] IPv6=[] ContainerID="be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" HandleID="k8s-pod-network.be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" Mar 7 01:17:46.485176 containerd[1713]: 2026-03-07 01:17:46.461 [INFO][5416] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" Namespace="calico-system" Pod="csi-node-driver-ms76b" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"422fccbd-8b44-46d1-b23a-b122dabbbb7c", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"", Pod:"csi-node-driver-ms76b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali924d4b8cb60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:46.485176 containerd[1713]: 2026-03-07 01:17:46.461 [INFO][5416] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.134/32] ContainerID="be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" Namespace="calico-system" Pod="csi-node-driver-ms76b" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" Mar 7 01:17:46.485176 containerd[1713]: 2026-03-07 01:17:46.461 [INFO][5416] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali924d4b8cb60 ContainerID="be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" Namespace="calico-system" Pod="csi-node-driver-ms76b" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" Mar 7 01:17:46.485176 containerd[1713]: 2026-03-07 01:17:46.464 [INFO][5416] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" Namespace="calico-system" Pod="csi-node-driver-ms76b" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" Mar 7 01:17:46.485176 containerd[1713]: 2026-03-07 01:17:46.466 
[INFO][5416] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" Namespace="calico-system" Pod="csi-node-driver-ms76b" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"422fccbd-8b44-46d1-b23a-b122dabbbb7c", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24", Pod:"csi-node-driver-ms76b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali924d4b8cb60", MAC:"6e:fe:47:45:94:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:46.485176 containerd[1713]: 2026-03-07 01:17:46.480 [INFO][5416] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24" Namespace="calico-system" Pod="csi-node-driver-ms76b" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" Mar 7 01:17:46.563822 systemd-networkd[1579]: calic9f1800d5a6: Link UP Mar 7 01:17:46.564091 systemd-networkd[1579]: calic9f1800d5a6: Gained carrier Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.364 [INFO][5425] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0 calico-apiserver-6b85bc4966- calico-system b5b460bf-5f22-4761-beda-066ecd458571 1012 0 2026-03-07 01:17:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b85bc4966 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-baf9cf72b8 calico-apiserver-6b85bc4966-hhwfv eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calic9f1800d5a6 [] [] }} ContainerID="071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" Namespace="calico-system" Pod="calico-apiserver-6b85bc4966-hhwfv" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-" Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.364 [INFO][5425] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" Namespace="calico-system" Pod="calico-apiserver-6b85bc4966-hhwfv" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.429 [INFO][5456] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" HandleID="k8s-pod-network.071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.437 [INFO][5456] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" HandleID="k8s-pod-network.071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00017a6c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-baf9cf72b8", "pod":"calico-apiserver-6b85bc4966-hhwfv", "timestamp":"2026-03-07 01:17:46.429930723 +0000 UTC"}, Hostname:"ci-4081.3.6-n-baf9cf72b8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001926e0)} Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.438 [INFO][5456] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.459 [INFO][5456] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.460 [INFO][5456] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-baf9cf72b8' Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.526 [INFO][5456] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.531 [INFO][5456] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.538 [INFO][5456] ipam/ipam.go 526: Trying affinity for 192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.541 [INFO][5456] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.543 [INFO][5456] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.543 [INFO][5456] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.545 [INFO][5456] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991 Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.549 [INFO][5456] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.559 [INFO][5456] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.74.135/26] block=192.168.74.128/26 handle="k8s-pod-network.071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.559 [INFO][5456] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.135/26] handle="k8s-pod-network.071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.559 [INFO][5456] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:46.591597 containerd[1713]: 2026-03-07 01:17:46.559 [INFO][5456] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.135/26] IPv6=[] ContainerID="071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" HandleID="k8s-pod-network.071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" Mar 7 01:17:46.592569 containerd[1713]: 2026-03-07 01:17:46.560 [INFO][5425] cni-plugin/k8s.go 418: Populated endpoint ContainerID="071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" Namespace="calico-system" Pod="calico-apiserver-6b85bc4966-hhwfv" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0", GenerateName:"calico-apiserver-6b85bc4966-", Namespace:"calico-system", SelfLink:"", UID:"b5b460bf-5f22-4761-beda-066ecd458571", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6b85bc4966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"", Pod:"calico-apiserver-6b85bc4966-hhwfv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic9f1800d5a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:46.592569 containerd[1713]: 2026-03-07 01:17:46.560 [INFO][5425] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.135/32] ContainerID="071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" Namespace="calico-system" Pod="calico-apiserver-6b85bc4966-hhwfv" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" Mar 7 01:17:46.592569 containerd[1713]: 2026-03-07 01:17:46.560 [INFO][5425] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9f1800d5a6 ContainerID="071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" Namespace="calico-system" Pod="calico-apiserver-6b85bc4966-hhwfv" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" Mar 7 01:17:46.592569 containerd[1713]: 2026-03-07 01:17:46.566 [INFO][5425] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" Namespace="calico-system" Pod="calico-apiserver-6b85bc4966-hhwfv" 
WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" Mar 7 01:17:46.592569 containerd[1713]: 2026-03-07 01:17:46.570 [INFO][5425] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" Namespace="calico-system" Pod="calico-apiserver-6b85bc4966-hhwfv" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0", GenerateName:"calico-apiserver-6b85bc4966-", Namespace:"calico-system", SelfLink:"", UID:"b5b460bf-5f22-4761-beda-066ecd458571", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b85bc4966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991", Pod:"calico-apiserver-6b85bc4966-hhwfv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic9f1800d5a6", MAC:"de:23:41:66:b5:ef", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:46.592569 containerd[1713]: 2026-03-07 01:17:46.587 [INFO][5425] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991" Namespace="calico-system" Pod="calico-apiserver-6b85bc4966-hhwfv" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" Mar 7 01:17:46.785553 containerd[1713]: time="2026-03-07T01:17:46.784274080Z" level=info msg="StopPodSandbox for \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\"" Mar 7 01:17:46.793477 containerd[1713]: time="2026-03-07T01:17:46.788222841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:46.793477 containerd[1713]: time="2026-03-07T01:17:46.788282242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:46.793477 containerd[1713]: time="2026-03-07T01:17:46.788317142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:46.793477 containerd[1713]: time="2026-03-07T01:17:46.788409744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:46.801459 containerd[1713]: time="2026-03-07T01:17:46.801098139Z" level=info msg="StopPodSandbox for \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\"" Mar 7 01:17:46.829171 containerd[1713]: time="2026-03-07T01:17:46.826793935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:46.829171 containerd[1713]: time="2026-03-07T01:17:46.826853636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:46.829171 containerd[1713]: time="2026-03-07T01:17:46.826875136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:46.829171 containerd[1713]: time="2026-03-07T01:17:46.826978538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:46.834375 systemd[1]: Started cri-containerd-be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24.scope - libcontainer container be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24. Mar 7 01:17:46.893469 systemd[1]: Started cri-containerd-071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991.scope - libcontainer container 071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991. 
Mar 7 01:17:46.970438 containerd[1713]: time="2026-03-07T01:17:46.970396946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ms76b,Uid:422fccbd-8b44-46d1-b23a-b122dabbbb7c,Namespace:calico-system,Attempt:1,} returns sandbox id \"be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24\"" Mar 7 01:17:47.043101 containerd[1713]: time="2026-03-07T01:17:47.042869162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b85bc4966-hhwfv,Uid:b5b460bf-5f22-4761-beda-066ecd458571,Namespace:calico-system,Attempt:1,} returns sandbox id \"071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991\"" Mar 7 01:17:47.097925 containerd[1713]: 2026-03-07 01:17:46.962 [WARNING][5570] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--b9f656bd5--x4s4h-eth0" Mar 7 01:17:47.097925 containerd[1713]: 2026-03-07 01:17:46.962 [INFO][5570] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Mar 7 01:17:47.097925 containerd[1713]: 2026-03-07 01:17:46.962 [INFO][5570] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" iface="eth0" netns="" Mar 7 01:17:47.097925 containerd[1713]: 2026-03-07 01:17:46.962 [INFO][5570] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Mar 7 01:17:47.097925 containerd[1713]: 2026-03-07 01:17:46.962 [INFO][5570] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Mar 7 01:17:47.097925 containerd[1713]: 2026-03-07 01:17:47.079 [INFO][5623] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" HandleID="k8s-pod-network.a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--b9f656bd5--x4s4h-eth0" Mar 7 01:17:47.097925 containerd[1713]: 2026-03-07 01:17:47.079 [INFO][5623] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:47.097925 containerd[1713]: 2026-03-07 01:17:47.079 [INFO][5623] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:47.097925 containerd[1713]: 2026-03-07 01:17:47.091 [WARNING][5623] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" HandleID="k8s-pod-network.a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--b9f656bd5--x4s4h-eth0" Mar 7 01:17:47.097925 containerd[1713]: 2026-03-07 01:17:47.091 [INFO][5623] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" HandleID="k8s-pod-network.a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--b9f656bd5--x4s4h-eth0" Mar 7 01:17:47.097925 containerd[1713]: 2026-03-07 01:17:47.093 [INFO][5623] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:47.097925 containerd[1713]: 2026-03-07 01:17:47.094 [INFO][5570] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Mar 7 01:17:47.097925 containerd[1713]: time="2026-03-07T01:17:47.097792208Z" level=info msg="TearDown network for sandbox \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\" successfully" Mar 7 01:17:47.097925 containerd[1713]: time="2026-03-07T01:17:47.097821609Z" level=info msg="StopPodSandbox for \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\" returns successfully" Mar 7 01:17:47.100046 containerd[1713]: time="2026-03-07T01:17:47.099431034Z" level=info msg="RemovePodSandbox for \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\"" Mar 7 01:17:47.100046 containerd[1713]: time="2026-03-07T01:17:47.099467134Z" level=info msg="Forcibly stopping sandbox \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\"" Mar 7 01:17:47.120378 containerd[1713]: 2026-03-07 01:17:46.964 [INFO][5577] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Mar 7 01:17:47.120378 containerd[1713]: 
2026-03-07 01:17:46.964 [INFO][5577] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" iface="eth0" netns="/var/run/netns/cni-f81417c5-bb11-13f1-b093-95fa819a4966" Mar 7 01:17:47.120378 containerd[1713]: 2026-03-07 01:17:46.964 [INFO][5577] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" iface="eth0" netns="/var/run/netns/cni-f81417c5-bb11-13f1-b093-95fa819a4966" Mar 7 01:17:47.120378 containerd[1713]: 2026-03-07 01:17:46.965 [INFO][5577] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" iface="eth0" netns="/var/run/netns/cni-f81417c5-bb11-13f1-b093-95fa819a4966" Mar 7 01:17:47.120378 containerd[1713]: 2026-03-07 01:17:46.965 [INFO][5577] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Mar 7 01:17:47.120378 containerd[1713]: 2026-03-07 01:17:46.965 [INFO][5577] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Mar 7 01:17:47.120378 containerd[1713]: 2026-03-07 01:17:47.080 [INFO][5622] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" HandleID="k8s-pod-network.c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" Mar 7 01:17:47.120378 containerd[1713]: 2026-03-07 01:17:47.081 [INFO][5622] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:47.120378 containerd[1713]: 2026-03-07 01:17:47.094 [INFO][5622] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:17:47.120378 containerd[1713]: 2026-03-07 01:17:47.110 [WARNING][5622] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" HandleID="k8s-pod-network.c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" Mar 7 01:17:47.120378 containerd[1713]: 2026-03-07 01:17:47.110 [INFO][5622] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" HandleID="k8s-pod-network.c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" Mar 7 01:17:47.120378 containerd[1713]: 2026-03-07 01:17:47.114 [INFO][5622] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:47.120378 containerd[1713]: 2026-03-07 01:17:47.117 [INFO][5577] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Mar 7 01:17:47.125978 containerd[1713]: time="2026-03-07T01:17:47.125405534Z" level=info msg="TearDown network for sandbox \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\" successfully" Mar 7 01:17:47.126238 containerd[1713]: time="2026-03-07T01:17:47.126122545Z" level=info msg="StopPodSandbox for \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\" returns successfully" Mar 7 01:17:47.127415 containerd[1713]: time="2026-03-07T01:17:47.127215461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b85bc4966-7842l,Uid:5a80d881-150a-48f3-9427-5627e9d0bd10,Namespace:calico-system,Attempt:1,}" Mar 7 01:17:47.134394 systemd[1]: run-netns-cni\x2df81417c5\x2dbb11\x2d13f1\x2db093\x2d95fa819a4966.mount: Deactivated successfully. 
Mar 7 01:17:47.298252 containerd[1713]: 2026-03-07 01:17:47.187 [WARNING][5650] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--b9f656bd5--x4s4h-eth0" Mar 7 01:17:47.298252 containerd[1713]: 2026-03-07 01:17:47.187 [INFO][5650] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Mar 7 01:17:47.298252 containerd[1713]: 2026-03-07 01:17:47.187 [INFO][5650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" iface="eth0" netns="" Mar 7 01:17:47.298252 containerd[1713]: 2026-03-07 01:17:47.187 [INFO][5650] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Mar 7 01:17:47.298252 containerd[1713]: 2026-03-07 01:17:47.187 [INFO][5650] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Mar 7 01:17:47.298252 containerd[1713]: 2026-03-07 01:17:47.268 [INFO][5657] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" HandleID="k8s-pod-network.a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--b9f656bd5--x4s4h-eth0" Mar 7 01:17:47.298252 containerd[1713]: 2026-03-07 01:17:47.268 [INFO][5657] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:47.298252 containerd[1713]: 2026-03-07 01:17:47.269 [INFO][5657] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:17:47.298252 containerd[1713]: 2026-03-07 01:17:47.284 [WARNING][5657] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" HandleID="k8s-pod-network.a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--b9f656bd5--x4s4h-eth0" Mar 7 01:17:47.298252 containerd[1713]: 2026-03-07 01:17:47.284 [INFO][5657] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" HandleID="k8s-pod-network.a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-whisker--b9f656bd5--x4s4h-eth0" Mar 7 01:17:47.298252 containerd[1713]: 2026-03-07 01:17:47.286 [INFO][5657] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:47.298252 containerd[1713]: 2026-03-07 01:17:47.291 [INFO][5650] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a" Mar 7 01:17:47.298252 containerd[1713]: time="2026-03-07T01:17:47.296786473Z" level=info msg="TearDown network for sandbox \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\" successfully" Mar 7 01:17:47.316305 containerd[1713]: time="2026-03-07T01:17:47.316051670Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:47.316652 containerd[1713]: time="2026-03-07T01:17:47.316519477Z" level=info msg="RemovePodSandbox \"a92640467f19220b47121a0e29c9990a09fbb6ed63a3558a1b72dd21af367f4a\" returns successfully" Mar 7 01:17:47.317737 containerd[1713]: time="2026-03-07T01:17:47.317707295Z" level=info msg="StopPodSandbox for \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\"" Mar 7 01:17:47.472546 systemd-networkd[1579]: cali11f9aed2106: Link UP Mar 7 01:17:47.474314 systemd-networkd[1579]: cali11f9aed2106: Gained carrier Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.310 [INFO][5662] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0 calico-apiserver-6b85bc4966- calico-system 5a80d881-150a-48f3-9427-5627e9d0bd10 1022 0 2026-03-07 01:17:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b85bc4966 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-baf9cf72b8 calico-apiserver-6b85bc4966-7842l eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali11f9aed2106 [] [] }} ContainerID="1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" Namespace="calico-system" Pod="calico-apiserver-6b85bc4966-7842l" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-" Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.310 [INFO][5662] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" Namespace="calico-system" Pod="calico-apiserver-6b85bc4966-7842l" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 
01:17:47.387 [INFO][5676] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" HandleID="k8s-pod-network.1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.400 [INFO][5676] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" HandleID="k8s-pod-network.1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ae150), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-baf9cf72b8", "pod":"calico-apiserver-6b85bc4966-7842l", "timestamp":"2026-03-07 01:17:47.387933676 +0000 UTC"}, Hostname:"ci-4081.3.6-n-baf9cf72b8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000548b00)} Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.400 [INFO][5676] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.400 [INFO][5676] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.400 [INFO][5676] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-baf9cf72b8' Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.403 [INFO][5676] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.410 [INFO][5676] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.424 [INFO][5676] ipam/ipam.go 526: Trying affinity for 192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.426 [INFO][5676] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.430 [INFO][5676] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.430 [INFO][5676] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.433 [INFO][5676] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330 Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.441 [INFO][5676] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.459 [INFO][5676] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.74.136/26] block=192.168.74.128/26 handle="k8s-pod-network.1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.459 [INFO][5676] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.136/26] handle="k8s-pod-network.1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" host="ci-4081.3.6-n-baf9cf72b8" Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.459 [INFO][5676] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:47.516580 containerd[1713]: 2026-03-07 01:17:47.459 [INFO][5676] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.136/26] IPv6=[] ContainerID="1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" HandleID="k8s-pod-network.1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" Mar 7 01:17:47.519397 containerd[1713]: 2026-03-07 01:17:47.463 [INFO][5662] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" Namespace="calico-system" Pod="calico-apiserver-6b85bc4966-7842l" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0", GenerateName:"calico-apiserver-6b85bc4966-", Namespace:"calico-system", SelfLink:"", UID:"5a80d881-150a-48f3-9427-5627e9d0bd10", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6b85bc4966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"", Pod:"calico-apiserver-6b85bc4966-7842l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali11f9aed2106", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:47.519397 containerd[1713]: 2026-03-07 01:17:47.463 [INFO][5662] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.136/32] ContainerID="1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" Namespace="calico-system" Pod="calico-apiserver-6b85bc4966-7842l" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" Mar 7 01:17:47.519397 containerd[1713]: 2026-03-07 01:17:47.463 [INFO][5662] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11f9aed2106 ContainerID="1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" Namespace="calico-system" Pod="calico-apiserver-6b85bc4966-7842l" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" Mar 7 01:17:47.519397 containerd[1713]: 2026-03-07 01:17:47.475 [INFO][5662] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" Namespace="calico-system" Pod="calico-apiserver-6b85bc4966-7842l" 
WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" Mar 7 01:17:47.519397 containerd[1713]: 2026-03-07 01:17:47.476 [INFO][5662] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" Namespace="calico-system" Pod="calico-apiserver-6b85bc4966-7842l" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0", GenerateName:"calico-apiserver-6b85bc4966-", Namespace:"calico-system", SelfLink:"", UID:"5a80d881-150a-48f3-9427-5627e9d0bd10", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b85bc4966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330", Pod:"calico-apiserver-6b85bc4966-7842l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali11f9aed2106", MAC:"ae:53:64:af:25:8f", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:47.519397 containerd[1713]: 2026-03-07 01:17:47.513 [INFO][5662] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330" Namespace="calico-system" Pod="calico-apiserver-6b85bc4966-7842l" WorkloadEndpoint="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" Mar 7 01:17:47.527951 containerd[1713]: 2026-03-07 01:17:47.414 [WARNING][5688] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8ce69653-849d-4201-90d6-36652ad2c82f", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610", Pod:"coredns-674b8bbfcf-dhgml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali067b8f892d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:47.527951 containerd[1713]: 2026-03-07 01:17:47.415 [INFO][5688] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Mar 7 01:17:47.527951 containerd[1713]: 2026-03-07 01:17:47.415 [INFO][5688] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" iface="eth0" netns="" Mar 7 01:17:47.527951 containerd[1713]: 2026-03-07 01:17:47.415 [INFO][5688] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Mar 7 01:17:47.527951 containerd[1713]: 2026-03-07 01:17:47.415 [INFO][5688] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Mar 7 01:17:47.527951 containerd[1713]: 2026-03-07 01:17:47.484 [INFO][5697] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" HandleID="k8s-pod-network.24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" Mar 7 01:17:47.527951 containerd[1713]: 2026-03-07 01:17:47.484 [INFO][5697] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:47.527951 containerd[1713]: 2026-03-07 01:17:47.484 [INFO][5697] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:47.527951 containerd[1713]: 2026-03-07 01:17:47.503 [WARNING][5697] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" HandleID="k8s-pod-network.24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" Mar 7 01:17:47.527951 containerd[1713]: 2026-03-07 01:17:47.504 [INFO][5697] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" HandleID="k8s-pod-network.24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" Mar 7 01:17:47.527951 containerd[1713]: 2026-03-07 01:17:47.514 [INFO][5697] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:47.527951 containerd[1713]: 2026-03-07 01:17:47.524 [INFO][5688] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Mar 7 01:17:47.530141 containerd[1713]: time="2026-03-07T01:17:47.527988733Z" level=info msg="TearDown network for sandbox \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\" successfully" Mar 7 01:17:47.530141 containerd[1713]: time="2026-03-07T01:17:47.528015934Z" level=info msg="StopPodSandbox for \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\" returns successfully" Mar 7 01:17:47.531727 containerd[1713]: time="2026-03-07T01:17:47.531685390Z" level=info msg="RemovePodSandbox for \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\"" Mar 7 01:17:47.531811 containerd[1713]: time="2026-03-07T01:17:47.531735291Z" level=info msg="Forcibly stopping sandbox \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\"" Mar 7 01:17:47.590433 containerd[1713]: time="2026-03-07T01:17:47.588705168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:47.590433 containerd[1713]: time="2026-03-07T01:17:47.588765969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:47.590433 containerd[1713]: time="2026-03-07T01:17:47.588784170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:47.590433 containerd[1713]: time="2026-03-07T01:17:47.588930572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:47.647666 systemd[1]: Started cri-containerd-1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330.scope - libcontainer container 1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330. Mar 7 01:17:47.760578 containerd[1713]: time="2026-03-07T01:17:47.760332511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b85bc4966-7842l,Uid:5a80d881-150a-48f3-9427-5627e9d0bd10,Namespace:calico-system,Attempt:1,} returns sandbox id \"1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330\"" Mar 7 01:17:47.766372 containerd[1713]: 2026-03-07 01:17:47.682 [WARNING][5726] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8ce69653-849d-4201-90d6-36652ad2c82f", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"f273e24de9ba1523dcffe663eed98a0d1c59b314e756f3f37c1288a043bbd610", Pod:"coredns-674b8bbfcf-dhgml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali067b8f892d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:47.766372 containerd[1713]: 2026-03-07 
01:17:47.683 [INFO][5726] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Mar 7 01:17:47.766372 containerd[1713]: 2026-03-07 01:17:47.683 [INFO][5726] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" iface="eth0" netns="" Mar 7 01:17:47.766372 containerd[1713]: 2026-03-07 01:17:47.683 [INFO][5726] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Mar 7 01:17:47.766372 containerd[1713]: 2026-03-07 01:17:47.683 [INFO][5726] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Mar 7 01:17:47.766372 containerd[1713]: 2026-03-07 01:17:47.746 [INFO][5769] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" HandleID="k8s-pod-network.24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" Mar 7 01:17:47.766372 containerd[1713]: 2026-03-07 01:17:47.747 [INFO][5769] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:47.766372 containerd[1713]: 2026-03-07 01:17:47.747 [INFO][5769] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:47.766372 containerd[1713]: 2026-03-07 01:17:47.759 [WARNING][5769] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" HandleID="k8s-pod-network.24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" Mar 7 01:17:47.766372 containerd[1713]: 2026-03-07 01:17:47.759 [INFO][5769] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" HandleID="k8s-pod-network.24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--dhgml-eth0" Mar 7 01:17:47.766372 containerd[1713]: 2026-03-07 01:17:47.762 [INFO][5769] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:47.766372 containerd[1713]: 2026-03-07 01:17:47.764 [INFO][5726] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d" Mar 7 01:17:47.767053 containerd[1713]: time="2026-03-07T01:17:47.766406505Z" level=info msg="TearDown network for sandbox \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\" successfully" Mar 7 01:17:47.777952 containerd[1713]: time="2026-03-07T01:17:47.777460475Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:47.777952 containerd[1713]: time="2026-03-07T01:17:47.777526976Z" level=info msg="RemovePodSandbox \"24b48cfefaabf93bd0151135a07af977e7ff4a2e2ecbd054a273a1a188036f2d\" returns successfully" Mar 7 01:17:47.779077 containerd[1713]: time="2026-03-07T01:17:47.778777595Z" level=info msg="StopPodSandbox for \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\"" Mar 7 01:17:47.791235 systemd-networkd[1579]: calic9f1800d5a6: Gained IPv6LL Mar 7 01:17:47.916656 containerd[1713]: 2026-03-07 01:17:47.865 [WARNING][5789] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"202162ba-341e-473d-92ac-5b724859334f", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18", Pod:"coredns-674b8bbfcf-clwb7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calida7ad7a45d0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:47.916656 containerd[1713]: 2026-03-07 01:17:47.866 [INFO][5789] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Mar 7 01:17:47.916656 containerd[1713]: 2026-03-07 01:17:47.866 [INFO][5789] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" iface="eth0" netns="" Mar 7 01:17:47.916656 containerd[1713]: 2026-03-07 01:17:47.866 [INFO][5789] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Mar 7 01:17:47.916656 containerd[1713]: 2026-03-07 01:17:47.866 [INFO][5789] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Mar 7 01:17:47.916656 containerd[1713]: 2026-03-07 01:17:47.899 [INFO][5801] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" HandleID="k8s-pod-network.b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" Mar 7 01:17:47.916656 containerd[1713]: 2026-03-07 01:17:47.899 [INFO][5801] ipam/ipam_plugin.go 438: About to acquire host-wide 
IPAM lock. Mar 7 01:17:47.916656 containerd[1713]: 2026-03-07 01:17:47.899 [INFO][5801] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:47.916656 containerd[1713]: 2026-03-07 01:17:47.908 [WARNING][5801] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" HandleID="k8s-pod-network.b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" Mar 7 01:17:47.916656 containerd[1713]: 2026-03-07 01:17:47.908 [INFO][5801] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" HandleID="k8s-pod-network.b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" Mar 7 01:17:47.916656 containerd[1713]: 2026-03-07 01:17:47.911 [INFO][5801] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:47.916656 containerd[1713]: 2026-03-07 01:17:47.914 [INFO][5789] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Mar 7 01:17:47.917652 containerd[1713]: time="2026-03-07T01:17:47.916700720Z" level=info msg="TearDown network for sandbox \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\" successfully" Mar 7 01:17:47.917652 containerd[1713]: time="2026-03-07T01:17:47.916731120Z" level=info msg="StopPodSandbox for \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\" returns successfully" Mar 7 01:17:47.918157 containerd[1713]: time="2026-03-07T01:17:47.917849237Z" level=info msg="RemovePodSandbox for \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\"" Mar 7 01:17:47.918157 containerd[1713]: time="2026-03-07T01:17:47.917886138Z" level=info msg="Forcibly stopping sandbox \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\"" Mar 7 01:17:47.981848 systemd-networkd[1579]: cali924d4b8cb60: Gained IPv6LL Mar 7 01:17:48.032023 containerd[1713]: 2026-03-07 01:17:47.971 [WARNING][5815] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"202162ba-341e-473d-92ac-5b724859334f", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 16, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"9d81d11bd3e4427d3ef871b5088d9525b83d19e9825e541401486abf451dca18", Pod:"coredns-674b8bbfcf-clwb7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida7ad7a45d0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:48.032023 containerd[1713]: 2026-03-07 
01:17:47.972 [INFO][5815] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Mar 7 01:17:48.032023 containerd[1713]: 2026-03-07 01:17:47.972 [INFO][5815] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" iface="eth0" netns="" Mar 7 01:17:48.032023 containerd[1713]: 2026-03-07 01:17:47.972 [INFO][5815] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Mar 7 01:17:48.032023 containerd[1713]: 2026-03-07 01:17:47.972 [INFO][5815] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Mar 7 01:17:48.032023 containerd[1713]: 2026-03-07 01:17:48.012 [INFO][5822] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" HandleID="k8s-pod-network.b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" Mar 7 01:17:48.032023 containerd[1713]: 2026-03-07 01:17:48.012 [INFO][5822] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:48.032023 containerd[1713]: 2026-03-07 01:17:48.012 [INFO][5822] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:48.032023 containerd[1713]: 2026-03-07 01:17:48.021 [WARNING][5822] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" HandleID="k8s-pod-network.b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" Mar 7 01:17:48.032023 containerd[1713]: 2026-03-07 01:17:48.021 [INFO][5822] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" HandleID="k8s-pod-network.b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-coredns--674b8bbfcf--clwb7-eth0" Mar 7 01:17:48.032023 containerd[1713]: 2026-03-07 01:17:48.023 [INFO][5822] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:48.032023 containerd[1713]: 2026-03-07 01:17:48.026 [INFO][5815] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c" Mar 7 01:17:48.034169 containerd[1713]: time="2026-03-07T01:17:48.032756407Z" level=info msg="TearDown network for sandbox \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\" successfully" Mar 7 01:17:48.047065 containerd[1713]: time="2026-03-07T01:17:48.047027427Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:48.047367 containerd[1713]: time="2026-03-07T01:17:48.047345031Z" level=info msg="RemovePodSandbox \"b239196803c442f2f7ff535e8e7f1759f911c58e07408f0d3a74958327e9b61c\" returns successfully" Mar 7 01:17:48.048210 containerd[1713]: time="2026-03-07T01:17:48.048066743Z" level=info msg="StopPodSandbox for \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\"" Mar 7 01:17:48.070666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916276124.mount: Deactivated successfully. Mar 7 01:17:48.178324 containerd[1713]: 2026-03-07 01:17:48.102 [WARNING][5841] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d30623bf-c631-4b94-bf09-b8c8c11932cf", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864", Pod:"goldmane-5b85766d88-q5ltk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali50f01edcc70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:48.178324 containerd[1713]: 2026-03-07 01:17:48.102 [INFO][5841] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Mar 7 01:17:48.178324 containerd[1713]: 2026-03-07 01:17:48.102 [INFO][5841] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" iface="eth0" netns="" Mar 7 01:17:48.178324 containerd[1713]: 2026-03-07 01:17:48.102 [INFO][5841] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Mar 7 01:17:48.178324 containerd[1713]: 2026-03-07 01:17:48.102 [INFO][5841] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Mar 7 01:17:48.178324 containerd[1713]: 2026-03-07 01:17:48.160 [INFO][5853] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" HandleID="k8s-pod-network.91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" Mar 7 01:17:48.178324 containerd[1713]: 2026-03-07 01:17:48.160 [INFO][5853] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:48.178324 containerd[1713]: 2026-03-07 01:17:48.160 [INFO][5853] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:48.178324 containerd[1713]: 2026-03-07 01:17:48.169 [WARNING][5853] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" HandleID="k8s-pod-network.91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" Mar 7 01:17:48.178324 containerd[1713]: 2026-03-07 01:17:48.169 [INFO][5853] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" HandleID="k8s-pod-network.91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" Mar 7 01:17:48.178324 containerd[1713]: 2026-03-07 01:17:48.171 [INFO][5853] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:48.178324 containerd[1713]: 2026-03-07 01:17:48.174 [INFO][5841] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Mar 7 01:17:48.180789 containerd[1713]: time="2026-03-07T01:17:48.178889657Z" level=info msg="TearDown network for sandbox \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\" successfully" Mar 7 01:17:48.180789 containerd[1713]: time="2026-03-07T01:17:48.178924858Z" level=info msg="StopPodSandbox for \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\" returns successfully" Mar 7 01:17:48.182649 containerd[1713]: time="2026-03-07T01:17:48.182609115Z" level=info msg="RemovePodSandbox for \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\"" Mar 7 01:17:48.182649 containerd[1713]: time="2026-03-07T01:17:48.182646515Z" level=info msg="Forcibly stopping sandbox \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\"" Mar 7 01:17:48.300661 containerd[1713]: 2026-03-07 01:17:48.234 [WARNING][5868] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d30623bf-c631-4b94-bf09-b8c8c11932cf", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864", Pod:"goldmane-5b85766d88-q5ltk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali50f01edcc70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:48.300661 containerd[1713]: 2026-03-07 01:17:48.235 [INFO][5868] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Mar 7 01:17:48.300661 containerd[1713]: 2026-03-07 01:17:48.235 [INFO][5868] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" iface="eth0" netns="" Mar 7 01:17:48.300661 containerd[1713]: 2026-03-07 01:17:48.235 [INFO][5868] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Mar 7 01:17:48.300661 containerd[1713]: 2026-03-07 01:17:48.235 [INFO][5868] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Mar 7 01:17:48.300661 containerd[1713]: 2026-03-07 01:17:48.285 [INFO][5876] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" HandleID="k8s-pod-network.91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" Mar 7 01:17:48.300661 containerd[1713]: 2026-03-07 01:17:48.285 [INFO][5876] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:48.300661 containerd[1713]: 2026-03-07 01:17:48.285 [INFO][5876] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:48.300661 containerd[1713]: 2026-03-07 01:17:48.294 [WARNING][5876] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" HandleID="k8s-pod-network.91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" Mar 7 01:17:48.300661 containerd[1713]: 2026-03-07 01:17:48.294 [INFO][5876] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" HandleID="k8s-pod-network.91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-goldmane--5b85766d88--q5ltk-eth0" Mar 7 01:17:48.300661 containerd[1713]: 2026-03-07 01:17:48.296 [INFO][5876] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:48.300661 containerd[1713]: 2026-03-07 01:17:48.298 [INFO][5868] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe" Mar 7 01:17:48.300661 containerd[1713]: time="2026-03-07T01:17:48.300578431Z" level=info msg="TearDown network for sandbox \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\" successfully" Mar 7 01:17:48.315130 containerd[1713]: time="2026-03-07T01:17:48.314753750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:48.315130 containerd[1713]: time="2026-03-07T01:17:48.314831851Z" level=info msg="RemovePodSandbox \"91e8b5929dfa7521455ff5e9299bd3806626dd271b9357a77a251deca6c710fe\" returns successfully" Mar 7 01:17:48.316097 containerd[1713]: time="2026-03-07T01:17:48.315849966Z" level=info msg="StopPodSandbox for \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\"" Mar 7 01:17:48.422469 containerd[1713]: 2026-03-07 01:17:48.372 [WARNING][5890] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0", GenerateName:"calico-kube-controllers-7fd94b965c-", Namespace:"calico-system", SelfLink:"", UID:"a95025c4-1bc2-4e58-8c10-f597fc15642d", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd94b965c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152", Pod:"calico-kube-controllers-7fd94b965c-xvz8c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.132/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8ee418e2a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:48.422469 containerd[1713]: 2026-03-07 01:17:48.372 [INFO][5890] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Mar 7 01:17:48.422469 containerd[1713]: 2026-03-07 01:17:48.372 [INFO][5890] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" iface="eth0" netns="" Mar 7 01:17:48.422469 containerd[1713]: 2026-03-07 01:17:48.372 [INFO][5890] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Mar 7 01:17:48.422469 containerd[1713]: 2026-03-07 01:17:48.372 [INFO][5890] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Mar 7 01:17:48.422469 containerd[1713]: 2026-03-07 01:17:48.407 [INFO][5897] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" HandleID="k8s-pod-network.1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" Mar 7 01:17:48.422469 containerd[1713]: 2026-03-07 01:17:48.407 [INFO][5897] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:48.422469 containerd[1713]: 2026-03-07 01:17:48.407 [INFO][5897] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:48.422469 containerd[1713]: 2026-03-07 01:17:48.415 [WARNING][5897] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" HandleID="k8s-pod-network.1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" Mar 7 01:17:48.422469 containerd[1713]: 2026-03-07 01:17:48.415 [INFO][5897] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" HandleID="k8s-pod-network.1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" Mar 7 01:17:48.422469 containerd[1713]: 2026-03-07 01:17:48.417 [INFO][5897] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:48.422469 containerd[1713]: 2026-03-07 01:17:48.419 [INFO][5890] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Mar 7 01:17:48.422469 containerd[1713]: time="2026-03-07T01:17:48.421977001Z" level=info msg="TearDown network for sandbox \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\" successfully" Mar 7 01:17:48.422469 containerd[1713]: time="2026-03-07T01:17:48.422006101Z" level=info msg="StopPodSandbox for \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\" returns successfully" Mar 7 01:17:48.424508 containerd[1713]: time="2026-03-07T01:17:48.422605110Z" level=info msg="RemovePodSandbox for \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\"" Mar 7 01:17:48.424508 containerd[1713]: time="2026-03-07T01:17:48.422636811Z" level=info msg="Forcibly stopping sandbox \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\"" Mar 7 01:17:48.527169 containerd[1713]: 2026-03-07 01:17:48.477 [WARNING][5911] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0", GenerateName:"calico-kube-controllers-7fd94b965c-", Namespace:"calico-system", SelfLink:"", UID:"a95025c4-1bc2-4e58-8c10-f597fc15642d", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd94b965c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152", Pod:"calico-kube-controllers-7fd94b965c-xvz8c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8ee418e2a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:17:48.527169 containerd[1713]: 2026-03-07 01:17:48.477 [INFO][5911] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Mar 7 01:17:48.527169 containerd[1713]: 2026-03-07 01:17:48.477 [INFO][5911] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" iface="eth0" netns="" Mar 7 01:17:48.527169 containerd[1713]: 2026-03-07 01:17:48.477 [INFO][5911] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Mar 7 01:17:48.527169 containerd[1713]: 2026-03-07 01:17:48.477 [INFO][5911] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Mar 7 01:17:48.527169 containerd[1713]: 2026-03-07 01:17:48.512 [INFO][5918] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" HandleID="k8s-pod-network.1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" Mar 7 01:17:48.527169 containerd[1713]: 2026-03-07 01:17:48.512 [INFO][5918] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:17:48.527169 containerd[1713]: 2026-03-07 01:17:48.512 [INFO][5918] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:17:48.527169 containerd[1713]: 2026-03-07 01:17:48.521 [WARNING][5918] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" HandleID="k8s-pod-network.1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" Mar 7 01:17:48.527169 containerd[1713]: 2026-03-07 01:17:48.521 [INFO][5918] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" HandleID="k8s-pod-network.1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--kube--controllers--7fd94b965c--xvz8c-eth0" Mar 7 01:17:48.527169 containerd[1713]: 2026-03-07 01:17:48.522 [INFO][5918] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:17:48.527169 containerd[1713]: 2026-03-07 01:17:48.524 [INFO][5911] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295" Mar 7 01:17:48.527823 containerd[1713]: time="2026-03-07T01:17:48.527540026Z" level=info msg="TearDown network for sandbox \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\" successfully" Mar 7 01:17:48.538425 containerd[1713]: time="2026-03-07T01:17:48.538098889Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:48.538425 containerd[1713]: time="2026-03-07T01:17:48.538267692Z" level=info msg="RemovePodSandbox \"1187b29eeff30151e80c94bba7fe66ab2aa995ae32b0b3e6c1dd719e963fd295\" returns successfully" Mar 7 01:17:48.806683 containerd[1713]: time="2026-03-07T01:17:48.806559823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:48.809749 containerd[1713]: time="2026-03-07T01:17:48.809554369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 7 01:17:48.814290 containerd[1713]: time="2026-03-07T01:17:48.814219641Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:48.818848 containerd[1713]: time="2026-03-07T01:17:48.818796012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:48.819887 containerd[1713]: time="2026-03-07T01:17:48.819848628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.109295083s" Mar 7 01:17:48.819988 containerd[1713]: time="2026-03-07T01:17:48.819887629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 7 01:17:48.821628 containerd[1713]: time="2026-03-07T01:17:48.820893144Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 01:17:48.828560 containerd[1713]: time="2026-03-07T01:17:48.828454161Z" level=info msg="CreateContainer within sandbox \"cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 01:17:48.859442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1713890346.mount: Deactivated successfully. Mar 7 01:17:48.863182 containerd[1713]: time="2026-03-07T01:17:48.863126394Z" level=info msg="CreateContainer within sandbox \"cc5d73953cf80b12fd447f6ab93640c3c364d1d471830074a4b7209864d13864\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8a06ac52ef04055b5d3797f7f2dc2346299339eb02a5de31e6ecb3a096c83a17\"" Mar 7 01:17:48.863893 containerd[1713]: time="2026-03-07T01:17:48.863757104Z" level=info msg="StartContainer for \"8a06ac52ef04055b5d3797f7f2dc2346299339eb02a5de31e6ecb3a096c83a17\"" Mar 7 01:17:48.898321 systemd[1]: Started cri-containerd-8a06ac52ef04055b5d3797f7f2dc2346299339eb02a5de31e6ecb3a096c83a17.scope - libcontainer container 8a06ac52ef04055b5d3797f7f2dc2346299339eb02a5de31e6ecb3a096c83a17. 
Mar 7 01:17:48.951067 containerd[1713]: time="2026-03-07T01:17:48.950908646Z" level=info msg="StartContainer for \"8a06ac52ef04055b5d3797f7f2dc2346299339eb02a5de31e6ecb3a096c83a17\" returns successfully" Mar 7 01:17:49.069289 systemd-networkd[1579]: cali11f9aed2106: Gained IPv6LL Mar 7 01:17:49.173851 kubelet[3183]: I0307 01:17:49.173769 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-q5ltk" podStartSLOduration=39.062318262 podStartE2EDuration="43.173745678s" podCreationTimestamp="2026-03-07 01:17:06 +0000 UTC" firstStartedPulling="2026-03-07 01:17:44.709284025 +0000 UTC m=+58.021101524" lastFinishedPulling="2026-03-07 01:17:48.820711441 +0000 UTC m=+62.132528940" observedRunningTime="2026-03-07 01:17:49.172428858 +0000 UTC m=+62.484246457" watchObservedRunningTime="2026-03-07 01:17:49.173745678 +0000 UTC m=+62.485563177" Mar 7 01:17:49.182665 systemd[1]: run-containerd-runc-k8s.io-8a06ac52ef04055b5d3797f7f2dc2346299339eb02a5de31e6ecb3a096c83a17-runc.b1AJlv.mount: Deactivated successfully. 
Mar 7 01:17:51.333364 containerd[1713]: time="2026-03-07T01:17:51.333310454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:51.336823 containerd[1713]: time="2026-03-07T01:17:51.336765907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 7 01:17:51.340665 containerd[1713]: time="2026-03-07T01:17:51.340608766Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:51.346039 containerd[1713]: time="2026-03-07T01:17:51.345968749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:51.346930 containerd[1713]: time="2026-03-07T01:17:51.346670960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.525743216s" Mar 7 01:17:51.346930 containerd[1713]: time="2026-03-07T01:17:51.346711160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 7 01:17:51.348333 containerd[1713]: time="2026-03-07T01:17:51.348302985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 01:17:51.371694 containerd[1713]: time="2026-03-07T01:17:51.371533543Z" level=info msg="CreateContainer within sandbox 
\"2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 01:17:51.407794 containerd[1713]: time="2026-03-07T01:17:51.407750602Z" level=info msg="CreateContainer within sandbox \"2b46ada90aebddbed45d673f54980ff7e0e459d359be0fb46be7fa158937c152\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5688761682e75bf385a7ca26768afd44f173fb50eaa6c7a1fa24c1e8435444fe\"" Mar 7 01:17:51.409584 containerd[1713]: time="2026-03-07T01:17:51.408369811Z" level=info msg="StartContainer for \"5688761682e75bf385a7ca26768afd44f173fb50eaa6c7a1fa24c1e8435444fe\"" Mar 7 01:17:51.443355 systemd[1]: Started cri-containerd-5688761682e75bf385a7ca26768afd44f173fb50eaa6c7a1fa24c1e8435444fe.scope - libcontainer container 5688761682e75bf385a7ca26768afd44f173fb50eaa6c7a1fa24c1e8435444fe. Mar 7 01:17:51.490778 containerd[1713]: time="2026-03-07T01:17:51.490566878Z" level=info msg="StartContainer for \"5688761682e75bf385a7ca26768afd44f173fb50eaa6c7a1fa24c1e8435444fe\" returns successfully" Mar 7 01:17:52.177674 kubelet[3183]: I0307 01:17:52.177596 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7fd94b965c-xvz8c" podStartSLOduration=38.710097352 podStartE2EDuration="45.17757387s" podCreationTimestamp="2026-03-07 01:17:07 +0000 UTC" firstStartedPulling="2026-03-07 01:17:44.880244558 +0000 UTC m=+58.192062057" lastFinishedPulling="2026-03-07 01:17:51.347720976 +0000 UTC m=+64.659538575" observedRunningTime="2026-03-07 01:17:52.175772542 +0000 UTC m=+65.487590141" watchObservedRunningTime="2026-03-07 01:17:52.17757387 +0000 UTC m=+65.489391369" Mar 7 01:17:52.659131 containerd[1713]: time="2026-03-07T01:17:52.659009192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:52.663665 containerd[1713]: 
time="2026-03-07T01:17:52.663521362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 7 01:17:52.670762 containerd[1713]: time="2026-03-07T01:17:52.670722273Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:52.677487 containerd[1713]: time="2026-03-07T01:17:52.677343275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:52.678433 containerd[1713]: time="2026-03-07T01:17:52.678301090Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.329709601s" Mar 7 01:17:52.678433 containerd[1713]: time="2026-03-07T01:17:52.678342591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 7 01:17:52.679562 containerd[1713]: time="2026-03-07T01:17:52.679387007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:17:52.688708 containerd[1713]: time="2026-03-07T01:17:52.688673950Z" level=info msg="CreateContainer within sandbox \"be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 01:17:52.737403 containerd[1713]: time="2026-03-07T01:17:52.737362300Z" level=info msg="CreateContainer within sandbox \"be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4fcaf511065905488f796b1f95027d00323e4815064764426f6e21397bae88b7\"" Mar 7 01:17:52.738135 containerd[1713]: time="2026-03-07T01:17:52.738100112Z" level=info msg="StartContainer for \"4fcaf511065905488f796b1f95027d00323e4815064764426f6e21397bae88b7\"" Mar 7 01:17:52.776316 systemd[1]: Started cri-containerd-4fcaf511065905488f796b1f95027d00323e4815064764426f6e21397bae88b7.scope - libcontainer container 4fcaf511065905488f796b1f95027d00323e4815064764426f6e21397bae88b7. Mar 7 01:17:52.807047 containerd[1713]: time="2026-03-07T01:17:52.806884872Z" level=info msg="StartContainer for \"4fcaf511065905488f796b1f95027d00323e4815064764426f6e21397bae88b7\" returns successfully" Mar 7 01:17:55.890024 containerd[1713]: time="2026-03-07T01:17:55.889970305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:55.892562 containerd[1713]: time="2026-03-07T01:17:55.892500244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 7 01:17:55.895935 containerd[1713]: time="2026-03-07T01:17:55.895881396Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:55.901370 containerd[1713]: time="2026-03-07T01:17:55.900650570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:55.902368 containerd[1713]: time="2026-03-07T01:17:55.901936789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", 
repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.222513382s" Mar 7 01:17:55.902368 containerd[1713]: time="2026-03-07T01:17:55.901975790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:17:55.903395 containerd[1713]: time="2026-03-07T01:17:55.903358411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:17:55.908400 containerd[1713]: time="2026-03-07T01:17:55.908362788Z" level=info msg="CreateContainer within sandbox \"071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:17:55.943805 containerd[1713]: time="2026-03-07T01:17:55.943755034Z" level=info msg="CreateContainer within sandbox \"071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"948254fd194ad4d44cb04a4e672d3e7f66321070f69472f695f8d9d079590cb7\"" Mar 7 01:17:55.945203 containerd[1713]: time="2026-03-07T01:17:55.944394544Z" level=info msg="StartContainer for \"948254fd194ad4d44cb04a4e672d3e7f66321070f69472f695f8d9d079590cb7\"" Mar 7 01:17:55.986607 systemd[1]: Started cri-containerd-948254fd194ad4d44cb04a4e672d3e7f66321070f69472f695f8d9d079590cb7.scope - libcontainer container 948254fd194ad4d44cb04a4e672d3e7f66321070f69472f695f8d9d079590cb7. 
Mar 7 01:17:56.037084 containerd[1713]: time="2026-03-07T01:17:56.037033472Z" level=info msg="StartContainer for \"948254fd194ad4d44cb04a4e672d3e7f66321070f69472f695f8d9d079590cb7\" returns successfully" Mar 7 01:17:56.248259 containerd[1713]: time="2026-03-07T01:17:56.248205428Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:56.251386 containerd[1713]: time="2026-03-07T01:17:56.251325976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 01:17:56.253939 containerd[1713]: time="2026-03-07T01:17:56.253880915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 350.476203ms" Mar 7 01:17:56.253939 containerd[1713]: time="2026-03-07T01:17:56.253937716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:17:56.257427 containerd[1713]: time="2026-03-07T01:17:56.257399870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 01:17:56.264749 containerd[1713]: time="2026-03-07T01:17:56.264677582Z" level=info msg="CreateContainer within sandbox \"1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:17:56.300805 containerd[1713]: time="2026-03-07T01:17:56.300655337Z" level=info msg="CreateContainer within sandbox \"1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"47334a6d14f10ebdc1a727c7106a16cf003ebf2094b903b01f9ba1850a2b0903\"" Mar 7 01:17:56.303395 containerd[1713]: time="2026-03-07T01:17:56.303356078Z" level=info msg="StartContainer for \"47334a6d14f10ebdc1a727c7106a16cf003ebf2094b903b01f9ba1850a2b0903\"" Mar 7 01:17:56.336350 systemd[1]: Started cri-containerd-47334a6d14f10ebdc1a727c7106a16cf003ebf2094b903b01f9ba1850a2b0903.scope - libcontainer container 47334a6d14f10ebdc1a727c7106a16cf003ebf2094b903b01f9ba1850a2b0903. Mar 7 01:17:56.395499 containerd[1713]: time="2026-03-07T01:17:56.395452898Z" level=info msg="StartContainer for \"47334a6d14f10ebdc1a727c7106a16cf003ebf2094b903b01f9ba1850a2b0903\" returns successfully" Mar 7 01:17:57.178409 kubelet[3183]: I0307 01:17:57.177592 3183 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:17:57.197624 kubelet[3183]: I0307 01:17:57.197543 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6b85bc4966-hhwfv" podStartSLOduration=42.344002823 podStartE2EDuration="51.197519364s" podCreationTimestamp="2026-03-07 01:17:06 +0000 UTC" firstStartedPulling="2026-03-07 01:17:47.049580866 +0000 UTC m=+60.361398365" lastFinishedPulling="2026-03-07 01:17:55.903097407 +0000 UTC m=+69.214914906" observedRunningTime="2026-03-07 01:17:56.190909445 +0000 UTC m=+69.502726944" watchObservedRunningTime="2026-03-07 01:17:57.197519364 +0000 UTC m=+70.509336863" Mar 7 01:17:58.132341 containerd[1713]: time="2026-03-07T01:17:58.132137573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:58.134938 containerd[1713]: time="2026-03-07T01:17:58.134870115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 7 01:17:58.138220 containerd[1713]: time="2026-03-07T01:17:58.137994163Z" level=info 
msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:58.143645 containerd[1713]: time="2026-03-07T01:17:58.143589949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:17:58.145531 containerd[1713]: time="2026-03-07T01:17:58.144575065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.887136794s" Mar 7 01:17:58.145531 containerd[1713]: time="2026-03-07T01:17:58.144666166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 7 01:17:58.157229 containerd[1713]: time="2026-03-07T01:17:58.157141958Z" level=info msg="CreateContainer within sandbox \"be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 01:17:58.170512 kubelet[3183]: I0307 01:17:58.168971 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6b85bc4966-7842l" podStartSLOduration=43.677863374 podStartE2EDuration="52.16894844s" podCreationTimestamp="2026-03-07 01:17:06 +0000 UTC" firstStartedPulling="2026-03-07 01:17:47.763626962 +0000 UTC m=+61.075444561" lastFinishedPulling="2026-03-07 01:17:56.254712028 +0000 UTC m=+69.566529627" observedRunningTime="2026-03-07 01:17:57.198279375 
+0000 UTC m=+70.510096974" watchObservedRunningTime="2026-03-07 01:17:58.16894844 +0000 UTC m=+71.480766039" Mar 7 01:17:58.203652 containerd[1713]: time="2026-03-07T01:17:58.203591375Z" level=info msg="CreateContainer within sandbox \"be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cc17b17ee2b65897572a4c9fb919a5026deb0f8ceeebf555959dda944d09c09d\"" Mar 7 01:17:58.204480 containerd[1713]: time="2026-03-07T01:17:58.204445088Z" level=info msg="StartContainer for \"cc17b17ee2b65897572a4c9fb919a5026deb0f8ceeebf555959dda944d09c09d\"" Mar 7 01:17:58.261323 systemd[1]: Started cri-containerd-cc17b17ee2b65897572a4c9fb919a5026deb0f8ceeebf555959dda944d09c09d.scope - libcontainer container cc17b17ee2b65897572a4c9fb919a5026deb0f8ceeebf555959dda944d09c09d. Mar 7 01:17:58.321560 containerd[1713]: time="2026-03-07T01:17:58.321413591Z" level=info msg="StartContainer for \"cc17b17ee2b65897572a4c9fb919a5026deb0f8ceeebf555959dda944d09c09d\" returns successfully" Mar 7 01:17:58.883656 kubelet[3183]: I0307 01:17:58.883582 3183 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 01:17:58.883656 kubelet[3183]: I0307 01:17:58.883631 3183 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 01:17:59.202602 kubelet[3183]: I0307 01:17:59.202348 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ms76b" podStartSLOduration=41.03098349 podStartE2EDuration="52.202328264s" podCreationTimestamp="2026-03-07 01:17:07 +0000 UTC" firstStartedPulling="2026-03-07 01:17:46.976914047 +0000 UTC m=+60.288731546" lastFinishedPulling="2026-03-07 01:17:58.148258721 +0000 UTC m=+71.460076320" observedRunningTime="2026-03-07 
01:17:59.201520851 +0000 UTC m=+72.513338350" watchObservedRunningTime="2026-03-07 01:17:59.202328264 +0000 UTC m=+72.514145763" Mar 7 01:18:18.407948 kubelet[3183]: I0307 01:18:18.407732 3183 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:18:20.176590 systemd[1]: run-containerd-runc-k8s.io-8a06ac52ef04055b5d3797f7f2dc2346299339eb02a5de31e6ecb3a096c83a17-runc.oXUnww.mount: Deactivated successfully. Mar 7 01:18:34.681508 systemd[1]: Started sshd@7-10.200.8.14:22-10.200.16.10:39380.service - OpenSSH per-connection server daemon (10.200.16.10:39380). Mar 7 01:18:35.087700 systemd[1]: run-containerd-runc-k8s.io-f16200eb7f742d58ab4c8e7e25fe7d5cef631b2bb567d96a0269581f5c1c66bb-runc.wQdxOL.mount: Deactivated successfully. Mar 7 01:18:35.300474 sshd[6416]: Accepted publickey for core from 10.200.16.10 port 39380 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:18:35.302260 sshd[6416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:18:35.307250 systemd-logind[1683]: New session 10 of user core. Mar 7 01:18:35.313456 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 01:18:35.809475 sshd[6416]: pam_unix(sshd:session): session closed for user core Mar 7 01:18:35.813563 systemd[1]: sshd@7-10.200.8.14:22-10.200.16.10:39380.service: Deactivated successfully. Mar 7 01:18:35.817697 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 01:18:35.818561 systemd-logind[1683]: Session 10 logged out. Waiting for processes to exit. Mar 7 01:18:35.822421 systemd-logind[1683]: Removed session 10. Mar 7 01:18:40.932460 systemd[1]: Started sshd@8-10.200.8.14:22-10.200.16.10:42198.service - OpenSSH per-connection server daemon (10.200.16.10:42198). 
Mar 7 01:18:41.552419 sshd[6452]: Accepted publickey for core from 10.200.16.10 port 42198 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:18:41.553987 sshd[6452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:18:41.558776 systemd-logind[1683]: New session 11 of user core. Mar 7 01:18:41.565329 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 7 01:18:42.052935 sshd[6452]: pam_unix(sshd:session): session closed for user core Mar 7 01:18:42.056506 systemd[1]: sshd@8-10.200.8.14:22-10.200.16.10:42198.service: Deactivated successfully. Mar 7 01:18:42.059021 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 01:18:42.060968 systemd-logind[1683]: Session 11 logged out. Waiting for processes to exit. Mar 7 01:18:42.062810 systemd-logind[1683]: Removed session 11. Mar 7 01:18:47.169434 systemd[1]: Started sshd@9-10.200.8.14:22-10.200.16.10:42208.service - OpenSSH per-connection server daemon (10.200.16.10:42208). Mar 7 01:18:47.791384 sshd[6468]: Accepted publickey for core from 10.200.16.10 port 42208 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:18:47.792035 sshd[6468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:18:47.796966 systemd-logind[1683]: New session 12 of user core. Mar 7 01:18:47.803359 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 01:18:48.290660 sshd[6468]: pam_unix(sshd:session): session closed for user core Mar 7 01:18:48.293840 systemd[1]: sshd@9-10.200.8.14:22-10.200.16.10:42208.service: Deactivated successfully. Mar 7 01:18:48.296209 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 01:18:48.299339 systemd-logind[1683]: Session 12 logged out. Waiting for processes to exit. Mar 7 01:18:48.300425 systemd-logind[1683]: Removed session 12. 
Mar 7 01:18:48.542225 containerd[1713]: time="2026-03-07T01:18:48.542081572Z" level=info msg="StopPodSandbox for \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\"" Mar 7 01:18:48.610102 containerd[1713]: 2026-03-07 01:18:48.575 [WARNING][6489] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0", GenerateName:"calico-apiserver-6b85bc4966-", Namespace:"calico-system", SelfLink:"", UID:"b5b460bf-5f22-4761-beda-066ecd458571", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b85bc4966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991", Pod:"calico-apiserver-6b85bc4966-hhwfv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic9f1800d5a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:18:48.610102 containerd[1713]: 2026-03-07 01:18:48.575 [INFO][6489] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Mar 7 01:18:48.610102 containerd[1713]: 2026-03-07 01:18:48.575 [INFO][6489] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" iface="eth0" netns="" Mar 7 01:18:48.610102 containerd[1713]: 2026-03-07 01:18:48.575 [INFO][6489] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Mar 7 01:18:48.610102 containerd[1713]: 2026-03-07 01:18:48.575 [INFO][6489] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Mar 7 01:18:48.610102 containerd[1713]: 2026-03-07 01:18:48.596 [INFO][6496] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" HandleID="k8s-pod-network.5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" Mar 7 01:18:48.610102 containerd[1713]: 2026-03-07 01:18:48.596 [INFO][6496] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:18:48.610102 containerd[1713]: 2026-03-07 01:18:48.596 [INFO][6496] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:18:48.610102 containerd[1713]: 2026-03-07 01:18:48.606 [WARNING][6496] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" HandleID="k8s-pod-network.5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" Mar 7 01:18:48.610102 containerd[1713]: 2026-03-07 01:18:48.606 [INFO][6496] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" HandleID="k8s-pod-network.5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" Mar 7 01:18:48.610102 containerd[1713]: 2026-03-07 01:18:48.607 [INFO][6496] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:18:48.610102 containerd[1713]: 2026-03-07 01:18:48.608 [INFO][6489] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Mar 7 01:18:48.611001 containerd[1713]: time="2026-03-07T01:18:48.610198021Z" level=info msg="TearDown network for sandbox \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\" successfully" Mar 7 01:18:48.611001 containerd[1713]: time="2026-03-07T01:18:48.610250822Z" level=info msg="StopPodSandbox for \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\" returns successfully" Mar 7 01:18:48.611001 containerd[1713]: time="2026-03-07T01:18:48.610795530Z" level=info msg="RemovePodSandbox for \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\"" Mar 7 01:18:48.611001 containerd[1713]: time="2026-03-07T01:18:48.610836731Z" level=info msg="Forcibly stopping sandbox \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\"" Mar 7 01:18:48.677281 containerd[1713]: 2026-03-07 01:18:48.644 [WARNING][6510] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0", GenerateName:"calico-apiserver-6b85bc4966-", Namespace:"calico-system", SelfLink:"", UID:"b5b460bf-5f22-4761-beda-066ecd458571", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b85bc4966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"071f02bfd5d12cd1de330d81a125e0fb8babbd2c46cc42ffb46adc461ade0991", Pod:"calico-apiserver-6b85bc4966-hhwfv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic9f1800d5a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:18:48.677281 containerd[1713]: 2026-03-07 01:18:48.644 [INFO][6510] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Mar 7 01:18:48.677281 containerd[1713]: 2026-03-07 01:18:48.644 [INFO][6510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" iface="eth0" netns="" Mar 7 01:18:48.677281 containerd[1713]: 2026-03-07 01:18:48.644 [INFO][6510] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Mar 7 01:18:48.677281 containerd[1713]: 2026-03-07 01:18:48.644 [INFO][6510] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Mar 7 01:18:48.677281 containerd[1713]: 2026-03-07 01:18:48.665 [INFO][6517] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" HandleID="k8s-pod-network.5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" Mar 7 01:18:48.677281 containerd[1713]: 2026-03-07 01:18:48.665 [INFO][6517] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:18:48.677281 containerd[1713]: 2026-03-07 01:18:48.665 [INFO][6517] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:18:48.677281 containerd[1713]: 2026-03-07 01:18:48.672 [WARNING][6517] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" HandleID="k8s-pod-network.5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" Mar 7 01:18:48.677281 containerd[1713]: 2026-03-07 01:18:48.672 [INFO][6517] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" HandleID="k8s-pod-network.5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--hhwfv-eth0" Mar 7 01:18:48.677281 containerd[1713]: 2026-03-07 01:18:48.674 [INFO][6517] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:18:48.677281 containerd[1713]: 2026-03-07 01:18:48.675 [INFO][6510] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4" Mar 7 01:18:48.677281 containerd[1713]: time="2026-03-07T01:18:48.677107851Z" level=info msg="TearDown network for sandbox \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\" successfully" Mar 7 01:18:48.690912 containerd[1713]: time="2026-03-07T01:18:48.690738761Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:18:48.690912 containerd[1713]: time="2026-03-07T01:18:48.690817662Z" level=info msg="RemovePodSandbox \"5db38cefc244ce65e185b8368becbc80e583ad5f480fd374b839bb4e6ea915c4\" returns successfully" Mar 7 01:18:48.691527 containerd[1713]: time="2026-03-07T01:18:48.691304070Z" level=info msg="StopPodSandbox for \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\"" Mar 7 01:18:48.758488 containerd[1713]: 2026-03-07 01:18:48.724 [WARNING][6531] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"422fccbd-8b44-46d1-b23a-b122dabbbb7c", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24", Pod:"csi-node-driver-ms76b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali924d4b8cb60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:18:48.758488 containerd[1713]: 2026-03-07 01:18:48.725 [INFO][6531] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Mar 7 01:18:48.758488 containerd[1713]: 2026-03-07 01:18:48.725 [INFO][6531] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" iface="eth0" netns="" Mar 7 01:18:48.758488 containerd[1713]: 2026-03-07 01:18:48.725 [INFO][6531] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Mar 7 01:18:48.758488 containerd[1713]: 2026-03-07 01:18:48.725 [INFO][6531] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Mar 7 01:18:48.758488 containerd[1713]: 2026-03-07 01:18:48.746 [INFO][6539] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" HandleID="k8s-pod-network.91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" Mar 7 01:18:48.758488 containerd[1713]: 2026-03-07 01:18:48.746 [INFO][6539] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:18:48.758488 containerd[1713]: 2026-03-07 01:18:48.746 [INFO][6539] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:18:48.758488 containerd[1713]: 2026-03-07 01:18:48.754 [WARNING][6539] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" HandleID="k8s-pod-network.91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" Mar 7 01:18:48.758488 containerd[1713]: 2026-03-07 01:18:48.754 [INFO][6539] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" HandleID="k8s-pod-network.91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" Mar 7 01:18:48.758488 containerd[1713]: 2026-03-07 01:18:48.755 [INFO][6539] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:18:48.758488 containerd[1713]: 2026-03-07 01:18:48.757 [INFO][6531] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Mar 7 01:18:48.759117 containerd[1713]: time="2026-03-07T01:18:48.758523305Z" level=info msg="TearDown network for sandbox \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\" successfully" Mar 7 01:18:48.759117 containerd[1713]: time="2026-03-07T01:18:48.758552605Z" level=info msg="StopPodSandbox for \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\" returns successfully" Mar 7 01:18:48.759117 containerd[1713]: time="2026-03-07T01:18:48.759027412Z" level=info msg="RemovePodSandbox for \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\"" Mar 7 01:18:48.759117 containerd[1713]: time="2026-03-07T01:18:48.759060313Z" level=info msg="Forcibly stopping sandbox \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\"" Mar 7 01:18:48.827246 containerd[1713]: 2026-03-07 01:18:48.792 [WARNING][6554] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"422fccbd-8b44-46d1-b23a-b122dabbbb7c", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"be23ebfc4d7dfd3bcb76337592c67d17b36ee6e4f6245e2e467888f90043cc24", Pod:"csi-node-driver-ms76b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali924d4b8cb60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:18:48.827246 containerd[1713]: 2026-03-07 01:18:48.792 [INFO][6554] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Mar 7 01:18:48.827246 containerd[1713]: 2026-03-07 01:18:48.792 [INFO][6554] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" iface="eth0" netns="" Mar 7 01:18:48.827246 containerd[1713]: 2026-03-07 01:18:48.792 [INFO][6554] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Mar 7 01:18:48.827246 containerd[1713]: 2026-03-07 01:18:48.792 [INFO][6554] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Mar 7 01:18:48.827246 containerd[1713]: 2026-03-07 01:18:48.814 [INFO][6561] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" HandleID="k8s-pod-network.91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" Mar 7 01:18:48.827246 containerd[1713]: 2026-03-07 01:18:48.814 [INFO][6561] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:18:48.827246 containerd[1713]: 2026-03-07 01:18:48.814 [INFO][6561] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:18:48.827246 containerd[1713]: 2026-03-07 01:18:48.821 [WARNING][6561] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" HandleID="k8s-pod-network.91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" Mar 7 01:18:48.827246 containerd[1713]: 2026-03-07 01:18:48.821 [INFO][6561] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" HandleID="k8s-pod-network.91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-csi--node--driver--ms76b-eth0" Mar 7 01:18:48.827246 containerd[1713]: 2026-03-07 01:18:48.822 [INFO][6561] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:18:48.827246 containerd[1713]: 2026-03-07 01:18:48.824 [INFO][6554] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d" Mar 7 01:18:48.827246 containerd[1713]: time="2026-03-07T01:18:48.825679639Z" level=info msg="TearDown network for sandbox \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\" successfully" Mar 7 01:18:48.846735 containerd[1713]: time="2026-03-07T01:18:48.846690162Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:18:48.846944 containerd[1713]: time="2026-03-07T01:18:48.846791664Z" level=info msg="RemovePodSandbox \"91f9da36de9a1f56466d8d1dcc7f5e295c7c0df0e5310b0a8368cd3a556c126d\" returns successfully" Mar 7 01:18:48.847263 containerd[1713]: time="2026-03-07T01:18:48.847233270Z" level=info msg="StopPodSandbox for \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\"" Mar 7 01:18:48.914775 containerd[1713]: 2026-03-07 01:18:48.880 [WARNING][6577] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0", GenerateName:"calico-apiserver-6b85bc4966-", Namespace:"calico-system", SelfLink:"", UID:"5a80d881-150a-48f3-9427-5627e9d0bd10", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b85bc4966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330", Pod:"calico-apiserver-6b85bc4966-7842l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali11f9aed2106", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:18:48.914775 containerd[1713]: 2026-03-07 01:18:48.880 [INFO][6577] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Mar 7 01:18:48.914775 containerd[1713]: 2026-03-07 01:18:48.880 [INFO][6577] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" iface="eth0" netns="" Mar 7 01:18:48.914775 containerd[1713]: 2026-03-07 01:18:48.880 [INFO][6577] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Mar 7 01:18:48.914775 containerd[1713]: 2026-03-07 01:18:48.880 [INFO][6577] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Mar 7 01:18:48.914775 containerd[1713]: 2026-03-07 01:18:48.902 [INFO][6584] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" HandleID="k8s-pod-network.c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" Mar 7 01:18:48.914775 containerd[1713]: 2026-03-07 01:18:48.902 [INFO][6584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:18:48.914775 containerd[1713]: 2026-03-07 01:18:48.902 [INFO][6584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:18:48.914775 containerd[1713]: 2026-03-07 01:18:48.908 [WARNING][6584] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" HandleID="k8s-pod-network.c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" Mar 7 01:18:48.914775 containerd[1713]: 2026-03-07 01:18:48.908 [INFO][6584] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" HandleID="k8s-pod-network.c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" Mar 7 01:18:48.914775 containerd[1713]: 2026-03-07 01:18:48.910 [INFO][6584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:18:48.914775 containerd[1713]: 2026-03-07 01:18:48.912 [INFO][6577] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Mar 7 01:18:48.914775 containerd[1713]: time="2026-03-07T01:18:48.914559407Z" level=info msg="TearDown network for sandbox \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\" successfully" Mar 7 01:18:48.914775 containerd[1713]: time="2026-03-07T01:18:48.914593807Z" level=info msg="StopPodSandbox for \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\" returns successfully" Mar 7 01:18:48.915918 containerd[1713]: time="2026-03-07T01:18:48.915132616Z" level=info msg="RemovePodSandbox for \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\"" Mar 7 01:18:48.915918 containerd[1713]: time="2026-03-07T01:18:48.915192517Z" level=info msg="Forcibly stopping sandbox \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\"" Mar 7 01:18:48.982468 containerd[1713]: 2026-03-07 01:18:48.949 [WARNING][6598] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0", GenerateName:"calico-apiserver-6b85bc4966-", Namespace:"calico-system", SelfLink:"", UID:"5a80d881-150a-48f3-9427-5627e9d0bd10", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 17, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b85bc4966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-baf9cf72b8", ContainerID:"1e7072089edffec22bcf7c5149f27703a785bc1e2960b1fc8f45974218980330", Pod:"calico-apiserver-6b85bc4966-7842l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali11f9aed2106", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:18:48.982468 containerd[1713]: 2026-03-07 01:18:48.949 [INFO][6598] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Mar 7 01:18:48.982468 containerd[1713]: 2026-03-07 01:18:48.949 [INFO][6598] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" iface="eth0" netns="" Mar 7 01:18:48.982468 containerd[1713]: 2026-03-07 01:18:48.949 [INFO][6598] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Mar 7 01:18:48.982468 containerd[1713]: 2026-03-07 01:18:48.949 [INFO][6598] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Mar 7 01:18:48.982468 containerd[1713]: 2026-03-07 01:18:48.969 [INFO][6605] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" HandleID="k8s-pod-network.c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" Mar 7 01:18:48.982468 containerd[1713]: 2026-03-07 01:18:48.970 [INFO][6605] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:18:48.982468 containerd[1713]: 2026-03-07 01:18:48.970 [INFO][6605] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:18:48.982468 containerd[1713]: 2026-03-07 01:18:48.978 [WARNING][6605] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" HandleID="k8s-pod-network.c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" Mar 7 01:18:48.982468 containerd[1713]: 2026-03-07 01:18:48.978 [INFO][6605] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" HandleID="k8s-pod-network.c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Workload="ci--4081.3.6--n--baf9cf72b8-k8s-calico--apiserver--6b85bc4966--7842l-eth0" Mar 7 01:18:48.982468 containerd[1713]: 2026-03-07 01:18:48.979 [INFO][6605] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:18:48.982468 containerd[1713]: 2026-03-07 01:18:48.981 [INFO][6598] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed" Mar 7 01:18:48.983299 containerd[1713]: time="2026-03-07T01:18:48.982472153Z" level=info msg="TearDown network for sandbox \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\" successfully" Mar 7 01:18:48.993307 containerd[1713]: time="2026-03-07T01:18:48.993267619Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 01:18:48.993445 containerd[1713]: time="2026-03-07T01:18:48.993347320Z" level=info msg="RemovePodSandbox \"c6a9f2ebd66bad86d271619670c6f7e3dc4e7061d7c30c81ddf44a1aac9a8eed\" returns successfully" Mar 7 01:18:53.414468 systemd[1]: Started sshd@10-10.200.8.14:22-10.200.16.10:33508.service - OpenSSH per-connection server daemon (10.200.16.10:33508). 
Mar 7 01:18:54.038391 sshd[6652]: Accepted publickey for core from 10.200.16.10 port 33508 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:18:54.039968 sshd[6652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:18:54.044114 systemd-logind[1683]: New session 13 of user core. Mar 7 01:18:54.050069 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 7 01:18:54.542513 sshd[6652]: pam_unix(sshd:session): session closed for user core Mar 7 01:18:54.547135 systemd[1]: sshd@10-10.200.8.14:22-10.200.16.10:33508.service: Deactivated successfully. Mar 7 01:18:54.549667 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 01:18:54.550758 systemd-logind[1683]: Session 13 logged out. Waiting for processes to exit. Mar 7 01:18:54.551895 systemd-logind[1683]: Removed session 13. Mar 7 01:18:59.658453 systemd[1]: Started sshd@11-10.200.8.14:22-10.200.16.10:33520.service - OpenSSH per-connection server daemon (10.200.16.10:33520). Mar 7 01:19:00.280014 sshd[6714]: Accepted publickey for core from 10.200.16.10 port 33520 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:19:00.281561 sshd[6714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:19:00.285563 systemd-logind[1683]: New session 14 of user core. Mar 7 01:19:00.291295 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 7 01:19:00.777785 sshd[6714]: pam_unix(sshd:session): session closed for user core Mar 7 01:19:00.782417 systemd-logind[1683]: Session 14 logged out. Waiting for processes to exit. Mar 7 01:19:00.783319 systemd[1]: sshd@11-10.200.8.14:22-10.200.16.10:33520.service: Deactivated successfully. Mar 7 01:19:00.785507 systemd[1]: session-14.scope: Deactivated successfully. Mar 7 01:19:00.786933 systemd-logind[1683]: Removed session 14. 
Mar 7 01:19:00.899032 systemd[1]: Started sshd@12-10.200.8.14:22-10.200.16.10:55150.service - OpenSSH per-connection server daemon (10.200.16.10:55150). Mar 7 01:19:01.541478 sshd[6728]: Accepted publickey for core from 10.200.16.10 port 55150 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:19:01.543003 sshd[6728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:19:01.546975 systemd-logind[1683]: New session 15 of user core. Mar 7 01:19:01.553320 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 7 01:19:02.089569 sshd[6728]: pam_unix(sshd:session): session closed for user core Mar 7 01:19:02.093216 systemd[1]: sshd@12-10.200.8.14:22-10.200.16.10:55150.service: Deactivated successfully. Mar 7 01:19:02.095611 systemd[1]: session-15.scope: Deactivated successfully. Mar 7 01:19:02.097142 systemd-logind[1683]: Session 15 logged out. Waiting for processes to exit. Mar 7 01:19:02.098940 systemd-logind[1683]: Removed session 15. Mar 7 01:19:02.211863 systemd[1]: Started sshd@13-10.200.8.14:22-10.200.16.10:55156.service - OpenSSH per-connection server daemon (10.200.16.10:55156). Mar 7 01:19:02.999023 sshd[6739]: Accepted publickey for core from 10.200.16.10 port 55156 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:19:03.000709 sshd[6739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:19:03.005191 systemd-logind[1683]: New session 16 of user core. Mar 7 01:19:03.013298 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 7 01:19:03.501094 sshd[6739]: pam_unix(sshd:session): session closed for user core Mar 7 01:19:03.505236 systemd-logind[1683]: Session 16 logged out. Waiting for processes to exit. Mar 7 01:19:03.505937 systemd[1]: sshd@13-10.200.8.14:22-10.200.16.10:55156.service: Deactivated successfully. Mar 7 01:19:03.507898 systemd[1]: session-16.scope: Deactivated successfully. 
Mar 7 01:19:03.509299 systemd-logind[1683]: Removed session 16. Mar 7 01:19:05.088231 systemd[1]: run-containerd-runc-k8s.io-f16200eb7f742d58ab4c8e7e25fe7d5cef631b2bb567d96a0269581f5c1c66bb-runc.xJrH6x.mount: Deactivated successfully. Mar 7 01:19:08.617447 systemd[1]: Started sshd@14-10.200.8.14:22-10.200.16.10:55158.service - OpenSSH per-connection server daemon (10.200.16.10:55158). Mar 7 01:19:09.255178 sshd[6814]: Accepted publickey for core from 10.200.16.10 port 55158 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:19:09.257117 sshd[6814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:19:09.268333 systemd-logind[1683]: New session 17 of user core. Mar 7 01:19:09.271574 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 7 01:19:09.806424 sshd[6814]: pam_unix(sshd:session): session closed for user core Mar 7 01:19:09.811180 systemd[1]: sshd@14-10.200.8.14:22-10.200.16.10:55158.service: Deactivated successfully. Mar 7 01:19:09.814121 systemd[1]: session-17.scope: Deactivated successfully. Mar 7 01:19:09.817501 systemd-logind[1683]: Session 17 logged out. Waiting for processes to exit. Mar 7 01:19:09.819272 systemd-logind[1683]: Removed session 17. Mar 7 01:19:09.925323 systemd[1]: Started sshd@15-10.200.8.14:22-10.200.16.10:55172.service - OpenSSH per-connection server daemon (10.200.16.10:55172). Mar 7 01:19:10.561878 sshd[6826]: Accepted publickey for core from 10.200.16.10 port 55172 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:19:10.562528 sshd[6826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:19:10.566887 systemd-logind[1683]: New session 18 of user core. Mar 7 01:19:10.574305 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 7 01:19:11.124994 sshd[6826]: pam_unix(sshd:session): session closed for user core Mar 7 01:19:11.129080 systemd[1]: sshd@15-10.200.8.14:22-10.200.16.10:55172.service: Deactivated successfully. Mar 7 01:19:11.131454 systemd[1]: session-18.scope: Deactivated successfully. Mar 7 01:19:11.132218 systemd-logind[1683]: Session 18 logged out. Waiting for processes to exit. Mar 7 01:19:11.133234 systemd-logind[1683]: Removed session 18. Mar 7 01:19:11.235758 systemd[1]: Started sshd@16-10.200.8.14:22-10.200.16.10:41436.service - OpenSSH per-connection server daemon (10.200.16.10:41436). Mar 7 01:19:11.867191 sshd[6836]: Accepted publickey for core from 10.200.16.10 port 41436 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:19:11.868830 sshd[6836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:19:11.874319 systemd-logind[1683]: New session 19 of user core. Mar 7 01:19:11.879325 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 7 01:19:12.928865 sshd[6836]: pam_unix(sshd:session): session closed for user core Mar 7 01:19:12.933101 systemd[1]: sshd@16-10.200.8.14:22-10.200.16.10:41436.service: Deactivated successfully. Mar 7 01:19:12.935753 systemd[1]: session-19.scope: Deactivated successfully. Mar 7 01:19:12.936564 systemd-logind[1683]: Session 19 logged out. Waiting for processes to exit. Mar 7 01:19:12.937702 systemd-logind[1683]: Removed session 19. Mar 7 01:19:13.038353 systemd[1]: Started sshd@17-10.200.8.14:22-10.200.16.10:41444.service - OpenSSH per-connection server daemon (10.200.16.10:41444). Mar 7 01:19:13.671849 sshd[6862]: Accepted publickey for core from 10.200.16.10 port 41444 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:19:13.673418 sshd[6862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:19:13.677645 systemd-logind[1683]: New session 20 of user core. 
Mar 7 01:19:13.683299 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 7 01:19:14.287612 sshd[6862]: pam_unix(sshd:session): session closed for user core Mar 7 01:19:14.291603 systemd[1]: sshd@17-10.200.8.14:22-10.200.16.10:41444.service: Deactivated successfully. Mar 7 01:19:14.294423 systemd[1]: session-20.scope: Deactivated successfully. Mar 7 01:19:14.295577 systemd-logind[1683]: Session 20 logged out. Waiting for processes to exit. Mar 7 01:19:14.297553 systemd-logind[1683]: Removed session 20. Mar 7 01:19:14.403509 systemd[1]: Started sshd@18-10.200.8.14:22-10.200.16.10:41450.service - OpenSSH per-connection server daemon (10.200.16.10:41450). Mar 7 01:19:15.025090 sshd[6873]: Accepted publickey for core from 10.200.16.10 port 41450 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:19:15.026717 sshd[6873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:19:15.031656 systemd-logind[1683]: New session 21 of user core. Mar 7 01:19:15.038329 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 7 01:19:15.528632 sshd[6873]: pam_unix(sshd:session): session closed for user core Mar 7 01:19:15.532576 systemd-logind[1683]: Session 21 logged out. Waiting for processes to exit. Mar 7 01:19:15.533334 systemd[1]: sshd@18-10.200.8.14:22-10.200.16.10:41450.service: Deactivated successfully. Mar 7 01:19:15.535733 systemd[1]: session-21.scope: Deactivated successfully. Mar 7 01:19:15.536767 systemd-logind[1683]: Removed session 21. Mar 7 01:19:20.181263 systemd[1]: run-containerd-runc-k8s.io-8a06ac52ef04055b5d3797f7f2dc2346299339eb02a5de31e6ecb3a096c83a17-runc.9ObpVC.mount: Deactivated successfully. Mar 7 01:19:20.638809 systemd[1]: Started sshd@19-10.200.8.14:22-10.200.16.10:35600.service - OpenSSH per-connection server daemon (10.200.16.10:35600). 
Mar 7 01:19:21.263798 sshd[6910]: Accepted publickey for core from 10.200.16.10 port 35600 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:19:21.265489 sshd[6910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:19:21.270422 systemd-logind[1683]: New session 22 of user core. Mar 7 01:19:21.276582 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 7 01:19:21.765003 sshd[6910]: pam_unix(sshd:session): session closed for user core Mar 7 01:19:21.770848 systemd-logind[1683]: Session 22 logged out. Waiting for processes to exit. Mar 7 01:19:21.771443 systemd[1]: sshd@19-10.200.8.14:22-10.200.16.10:35600.service: Deactivated successfully. Mar 7 01:19:21.773971 systemd[1]: session-22.scope: Deactivated successfully. Mar 7 01:19:21.775410 systemd-logind[1683]: Removed session 22. Mar 7 01:19:22.183576 systemd[1]: run-containerd-runc-k8s.io-5688761682e75bf385a7ca26768afd44f173fb50eaa6c7a1fa24c1e8435444fe-runc.JIEf63.mount: Deactivated successfully. Mar 7 01:19:26.876718 systemd[1]: Started sshd@20-10.200.8.14:22-10.200.16.10:35604.service - OpenSSH per-connection server daemon (10.200.16.10:35604). Mar 7 01:19:27.507646 sshd[6944]: Accepted publickey for core from 10.200.16.10 port 35604 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:19:27.509809 sshd[6944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:19:27.514938 systemd-logind[1683]: New session 23 of user core. Mar 7 01:19:27.521304 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 7 01:19:28.008810 sshd[6944]: pam_unix(sshd:session): session closed for user core Mar 7 01:19:28.011896 systemd[1]: sshd@20-10.200.8.14:22-10.200.16.10:35604.service: Deactivated successfully. Mar 7 01:19:28.014361 systemd[1]: session-23.scope: Deactivated successfully. Mar 7 01:19:28.016091 systemd-logind[1683]: Session 23 logged out. Waiting for processes to exit. 
Mar 7 01:19:28.017549 systemd-logind[1683]: Removed session 23. Mar 7 01:19:33.124402 systemd[1]: Started sshd@21-10.200.8.14:22-10.200.16.10:35322.service - OpenSSH per-connection server daemon (10.200.16.10:35322). Mar 7 01:19:33.752421 sshd[6977]: Accepted publickey for core from 10.200.16.10 port 35322 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:19:33.754039 sshd[6977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:19:33.758724 systemd-logind[1683]: New session 24 of user core. Mar 7 01:19:33.763586 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 7 01:19:34.251807 sshd[6977]: pam_unix(sshd:session): session closed for user core Mar 7 01:19:34.256399 systemd-logind[1683]: Session 24 logged out. Waiting for processes to exit. Mar 7 01:19:34.257283 systemd[1]: sshd@21-10.200.8.14:22-10.200.16.10:35322.service: Deactivated successfully. Mar 7 01:19:34.259588 systemd[1]: session-24.scope: Deactivated successfully. Mar 7 01:19:34.260579 systemd-logind[1683]: Removed session 24. Mar 7 01:19:35.090141 systemd[1]: run-containerd-runc-k8s.io-f16200eb7f742d58ab4c8e7e25fe7d5cef631b2bb567d96a0269581f5c1c66bb-runc.jnJ87c.mount: Deactivated successfully. Mar 7 01:19:39.372288 systemd[1]: Started sshd@22-10.200.8.14:22-10.200.16.10:35336.service - OpenSSH per-connection server daemon (10.200.16.10:35336). Mar 7 01:19:40.007520 sshd[7012]: Accepted publickey for core from 10.200.16.10 port 35336 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:19:40.009496 sshd[7012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:19:40.018963 systemd-logind[1683]: New session 25 of user core. Mar 7 01:19:40.022788 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 7 01:19:40.524548 sshd[7012]: pam_unix(sshd:session): session closed for user core Mar 7 01:19:40.528405 systemd-logind[1683]: Session 25 logged out. 
Waiting for processes to exit. Mar 7 01:19:40.529129 systemd[1]: sshd@22-10.200.8.14:22-10.200.16.10:35336.service: Deactivated successfully. Mar 7 01:19:40.531612 systemd[1]: session-25.scope: Deactivated successfully. Mar 7 01:19:40.532633 systemd-logind[1683]: Removed session 25. Mar 7 01:19:45.640468 systemd[1]: Started sshd@23-10.200.8.14:22-10.200.16.10:56410.service - OpenSSH per-connection server daemon (10.200.16.10:56410). Mar 7 01:19:46.267274 sshd[7025]: Accepted publickey for core from 10.200.16.10 port 56410 ssh2: RSA SHA256:dGgiFlJn95ZB1/o+Rc5uXj9aOXViOkPyb1pRVqpDD8Q Mar 7 01:19:46.268885 sshd[7025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:19:46.273930 systemd-logind[1683]: New session 26 of user core. Mar 7 01:19:46.283343 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 7 01:19:46.773437 sshd[7025]: pam_unix(sshd:session): session closed for user core Mar 7 01:19:46.782272 systemd[1]: sshd@23-10.200.8.14:22-10.200.16.10:56410.service: Deactivated successfully. Mar 7 01:19:46.787904 systemd[1]: session-26.scope: Deactivated successfully. Mar 7 01:19:46.789885 systemd-logind[1683]: Session 26 logged out. Waiting for processes to exit. Mar 7 01:19:46.791552 systemd-logind[1683]: Removed session 26.