Mar 12 01:22:51.655612 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Mar 11 23:23:33 -00 2026 Mar 12 01:22:51.655760 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc Mar 12 01:22:51.655778 kernel: BIOS-provided physical RAM map: Mar 12 01:22:51.655789 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 12 01:22:51.655799 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 12 01:22:51.655808 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 12 01:22:51.655815 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 12 01:22:51.655821 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 12 01:22:51.655827 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Mar 12 01:22:51.655864 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Mar 12 01:22:51.655874 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Mar 12 01:22:51.655880 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Mar 12 01:22:51.655914 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Mar 12 01:22:51.655921 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Mar 12 01:22:51.655952 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Mar 12 01:22:51.655992 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 12 01:22:51.656004 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Mar 12 01:22:51.656010 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Mar 12 01:22:51.656017 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 12 01:22:51.656023 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 12 01:22:51.656029 kernel: NX (Execute Disable) protection: active Mar 12 01:22:51.656035 kernel: APIC: Static calls initialized Mar 12 01:22:51.656041 kernel: efi: EFI v2.7 by EDK II Mar 12 01:22:51.656047 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Mar 12 01:22:51.656053 kernel: SMBIOS 2.8 present. 
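
For reference, the usable extents of a BIOS-e820 map like the one above can be totalled with a few lines of Python. This is a standalone illustrative sketch (not part of the boot output) that reads dmesg-style lines from stdin:

    import re, sys

    # Matches entries like: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
    E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\S.*)")

    usable = 0
    for line in sys.stdin:
        m = E820.search(line)
        if m and m.group(3).startswith("usable"):
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            usable += end - start + 1  # e820 ranges are inclusive
    print(f"usable: {usable / 2**20:.1f} MiB")

On the map above this comes to about 2506 MiB, in line with the 2567000K of total memory the kernel reports further down.
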
Mar 12 01:22:51.656059 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Mar 12 01:22:51.656065 kernel: Hypervisor detected: KVM Mar 12 01:22:51.656075 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 12 01:22:51.656081 kernel: kvm-clock: using sched offset of 10144191327 cycles Mar 12 01:22:51.656087 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 12 01:22:51.656094 kernel: tsc: Detected 2445.424 MHz processor Mar 12 01:22:51.656100 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 12 01:22:51.656107 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 12 01:22:51.656113 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Mar 12 01:22:51.656119 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 12 01:22:51.656126 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 12 01:22:51.656135 kernel: Using GB pages for direct mapping Mar 12 01:22:51.656141 kernel: Secure boot disabled Mar 12 01:22:51.656148 kernel: ACPI: Early table checksum verification disabled Mar 12 01:22:51.656154 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 12 01:22:51.656165 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 12 01:22:51.656172 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:22:51.656178 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:22:51.656188 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 12 01:22:51.656221 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:22:51.656228 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:22:51.656235 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:22:51.656241 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:22:51.656248 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 12 01:22:51.656324 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 12 01:22:51.656337 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Mar 12 01:22:51.656344 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 12 01:22:51.656350 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 12 01:22:51.656357 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 12 01:22:51.656363 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 12 01:22:51.656370 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 12 01:22:51.656377 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 12 01:22:51.656383 kernel: No NUMA configuration found Mar 12 01:22:51.656416 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Mar 12 01:22:51.656427 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Mar 12 01:22:51.656434 kernel: Zone ranges: Mar 12 01:22:51.656441 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 12 01:22:51.656448 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Mar 12 01:22:51.656454 kernel: Normal empty Mar 12 01:22:51.656461 kernel: Movable zone start for each node Mar 12 01:22:51.656467 kernel: Early memory node ranges Mar 12 01:22:51.656474 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 12 01:22:51.656480 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 12 01:22:51.656487 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 12 01:22:51.656497 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Mar 12 01:22:51.656503 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Mar 12 01:22:51.656510 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Mar 12 01:22:51.656540 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Mar 12 01:22:51.656570 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 12 01:22:51.656577 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 12 01:22:51.656584 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 12 01:22:51.656590 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 12 01:22:51.656617 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Mar 12 01:22:51.656625 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 12 01:22:51.656635 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Mar 12 01:22:51.656664 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 12 01:22:51.656671 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 12 01:22:51.656677 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 12 01:22:51.656705 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 12 01:22:51.656712 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 12 01:22:51.656718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 12 01:22:51.656746 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 12 01:22:51.656752 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 12 01:22:51.656783 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 12 01:22:51.656790 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 12 01:22:51.656797 kernel: TSC deadline timer available Mar 12 01:22:51.656823 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 12 01:22:51.656830 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 12 01:22:51.656837 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 12 01:22:51.656863 kernel: kvm-guest: setup PV sched yield Mar 12 01:22:51.656870 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 12 01:22:51.656896 kernel: Booting paravirtualized kernel on KVM Mar 12 01:22:51.656907 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 12 01:22:51.656914 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 12 01:22:51.656946 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 12 01:22:51.657000 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 12 01:22:51.657008 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 12 01:22:51.657014 kernel: kvm-guest: PV spinlocks enabled Mar 12 01:22:51.657021 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 12 01:22:51.657029 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc Mar 12 
01:22:51.657085 kernel: random: crng init done Mar 12 01:22:51.657095 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 12 01:22:51.657101 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 12 01:22:51.657108 kernel: Fallback order for Node 0: 0 Mar 12 01:22:51.657115 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Mar 12 01:22:51.657126 kernel: Policy zone: DMA32 Mar 12 01:22:51.657138 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 12 01:22:51.657149 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved) Mar 12 01:22:51.657156 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 12 01:22:51.657167 kernel: ftrace: allocating 37996 entries in 149 pages Mar 12 01:22:51.657174 kernel: ftrace: allocated 149 pages with 4 groups Mar 12 01:22:51.657180 kernel: Dynamic Preempt: voluntary Mar 12 01:22:51.657187 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 12 01:22:51.657212 kernel: rcu: RCU event tracing is enabled. Mar 12 01:22:51.657230 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 12 01:22:51.657239 kernel: Trampoline variant of Tasks RCU enabled. Mar 12 01:22:51.657246 kernel: Rude variant of Tasks RCU enabled. Mar 12 01:22:51.657323 kernel: Tracing variant of Tasks RCU enabled. Mar 12 01:22:51.657331 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 12 01:22:51.657337 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 12 01:22:51.657345 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 12 01:22:51.657358 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 12 01:22:51.657365 kernel: Console: colour dummy device 80x25 Mar 12 01:22:51.657371 kernel: printk: console [ttyS0] enabled Mar 12 01:22:51.657406 kernel: ACPI: Core revision 20230628 Mar 12 01:22:51.657414 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 12 01:22:51.657424 kernel: APIC: Switch to symmetric I/O mode setup Mar 12 01:22:51.657431 kernel: x2apic enabled Mar 12 01:22:51.657438 kernel: APIC: Switched APIC routing to: physical x2apic Mar 12 01:22:51.657445 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 12 01:22:51.657452 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 12 01:22:51.657459 kernel: kvm-guest: setup PV IPIs Mar 12 01:22:51.657466 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 12 01:22:51.657473 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 12 01:22:51.657480 kernel: Calibrating delay loop (skipped) preset value.. 
4890.84 BogoMIPS (lpj=2445424) Mar 12 01:22:51.657490 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 12 01:22:51.657497 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 12 01:22:51.657504 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 12 01:22:51.657511 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 12 01:22:51.657518 kernel: Spectre V2 : Mitigation: Retpolines Mar 12 01:22:51.657525 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 12 01:22:51.657531 kernel: Speculative Store Bypass: Vulnerable Mar 12 01:22:51.657538 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 12 01:22:51.657546 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 12 01:22:51.657556 kernel: active return thunk: srso_alias_return_thunk Mar 12 01:22:51.657563 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 12 01:22:51.657595 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 12 01:22:51.657602 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 12 01:22:51.657609 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 12 01:22:51.657616 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 12 01:22:51.657623 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 12 01:22:51.657630 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 12 01:22:51.657640 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 12 01:22:51.657647 kernel: Freeing SMP alternatives memory: 32K Mar 12 01:22:51.657654 kernel: pid_max: default: 32768 minimum: 301 Mar 12 01:22:51.657661 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 12 01:22:51.657668 kernel: landlock: Up and running. Mar 12 01:22:51.657675 kernel: SELinux: Initializing. Mar 12 01:22:51.657682 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 12 01:22:51.657689 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 12 01:22:51.657695 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 12 01:22:51.657705 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 12 01:22:51.657712 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 12 01:22:51.657719 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 12 01:22:51.657726 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 12 01:22:51.657733 kernel: signal: max sigframe size: 1776 Mar 12 01:22:51.657740 kernel: rcu: Hierarchical SRCU implementation. Mar 12 01:22:51.657747 kernel: rcu: Max phase no-delay instances is 400. Mar 12 01:22:51.657754 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 12 01:22:51.657761 kernel: smp: Bringing up secondary CPUs ... Mar 12 01:22:51.657777 kernel: smpboot: x86: Booting SMP configuration: Mar 12 01:22:51.657790 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 12 01:22:51.657800 kernel: smp: Brought up 1 node, 4 CPUs Mar 12 01:22:51.657807 kernel: smpboot: Max logical packages: 1 Mar 12 01:22:51.657819 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Mar 12 01:22:51.657833 kernel: devtmpfs: initialized Mar 12 01:22:51.657844 kernel: x86/mm: Memory block size: 128MB Mar 12 01:22:51.657857 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 12 01:22:51.657872 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 12 01:22:51.657891 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Mar 12 01:22:51.657950 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 12 01:22:51.658010 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 12 01:22:51.658017 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 12 01:22:51.658024 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 12 01:22:51.658031 kernel: pinctrl core: initialized pinctrl subsystem Mar 12 01:22:51.658038 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 12 01:22:51.658045 kernel: audit: initializing netlink subsys (disabled) Mar 12 01:22:51.658052 kernel: audit: type=2000 audit(1773278566.444:1): state=initialized audit_enabled=0 res=1 Mar 12 01:22:51.658064 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 12 01:22:51.658071 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 12 01:22:51.658077 kernel: cpuidle: using governor menu Mar 12 01:22:51.658084 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 12 01:22:51.658091 kernel: dca service started, version 1.12.1 Mar 12 01:22:51.658099 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 12 01:22:51.658105 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 12 01:22:51.658112 kernel: PCI: Using configuration type 1 for base access Mar 12 01:22:51.658119 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
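
The BogoMIPS figures here are self-consistent: the kernel derives BogoMIPS as lpj * HZ / 500000, and the 4-CPU total is four times the per-CPU value. A quick check, taking lpj from the calibration line above and assuming the usual CONFIG_HZ=1000 (which the printed values imply):

    lpj, hz, cpus = 2445424, 1000, 4   # lpj from "Calibrating delay loop" above; HZ=1000 assumed
    per_cpu = lpj * hz / 500_000       # the kernel's BogoMIPS formula
    print(per_cpu, cpus * per_cpu)     # 4890.848 and 19563.392, matching 4890.84 / 19563.39
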
Mar 12 01:22:51.658129 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 12 01:22:51.658136 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 12 01:22:51.658143 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 12 01:22:51.658150 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 12 01:22:51.658157 kernel: ACPI: Added _OSI(Module Device) Mar 12 01:22:51.658164 kernel: ACPI: Added _OSI(Processor Device) Mar 12 01:22:51.658171 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 12 01:22:51.658178 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 12 01:22:51.658185 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 12 01:22:51.658194 kernel: ACPI: Interpreter enabled Mar 12 01:22:51.658201 kernel: ACPI: PM: (supports S0 S3 S5) Mar 12 01:22:51.658208 kernel: ACPI: Using IOAPIC for interrupt routing Mar 12 01:22:51.658215 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 12 01:22:51.658222 kernel: PCI: Using E820 reservations for host bridge windows Mar 12 01:22:51.658229 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 12 01:22:51.658236 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 12 01:22:51.659053 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 12 01:22:51.659347 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 12 01:22:51.659521 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 12 01:22:51.659541 kernel: PCI host bridge to bus 0000:00 Mar 12 01:22:51.659816 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 12 01:22:51.660039 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 12 01:22:51.660235 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 12 01:22:51.660466 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 12 01:22:51.660626 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 12 01:22:51.660806 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Mar 12 01:22:51.660946 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 12 01:22:51.661540 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 12 01:22:51.661862 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 12 01:22:51.662096 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 12 01:22:51.662403 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 12 01:22:51.662574 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 12 01:22:51.662732 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 12 01:22:51.662943 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 12 01:22:51.663331 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 12 01:22:51.663543 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Mar 12 01:22:51.663695 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 12 01:22:51.663909 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Mar 12 01:22:51.664320 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 12 01:22:51.664549 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 12 01:22:51.664763 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] 
Mar 12 01:22:51.664953 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Mar 12 01:22:51.665325 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 12 01:22:51.665491 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 12 01:22:51.665670 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Mar 12 01:22:51.665823 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Mar 12 01:22:51.666019 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 12 01:22:51.666446 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 12 01:22:51.666615 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 12 01:22:51.667063 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 12 01:22:51.667478 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 12 01:22:51.667776 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 12 01:22:51.668199 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 12 01:22:51.668659 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 12 01:22:51.668710 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 12 01:22:51.668753 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 12 01:22:51.668785 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 12 01:22:51.668792 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 12 01:22:51.668827 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 12 01:22:51.668835 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 12 01:22:51.668864 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 12 01:22:51.668872 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 12 01:22:51.668881 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 12 01:22:51.668894 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 12 01:22:51.668907 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 12 01:22:51.668920 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 12 01:22:51.668934 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 12 01:22:51.669010 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 12 01:22:51.669029 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 12 01:22:51.669042 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 12 01:22:51.669054 kernel: iommu: Default domain type: Translated Mar 12 01:22:51.669066 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 12 01:22:51.669077 kernel: efivars: Registered efivars operations Mar 12 01:22:51.669089 kernel: PCI: Using ACPI for IRQ routing Mar 12 01:22:51.669100 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 12 01:22:51.669112 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 12 01:22:51.669124 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Mar 12 01:22:51.669132 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Mar 12 01:22:51.669139 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Mar 12 01:22:51.669641 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 12 01:22:51.669886 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 12 01:22:51.670107 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 12 01:22:51.670121 kernel: vgaarb: loaded Mar 12 01:22:51.670128 kernel: hpet0: at MMIO 
0xfed00000, IRQs 2, 8, 0 Mar 12 01:22:51.670141 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 12 01:22:51.670162 kernel: clocksource: Switched to clocksource kvm-clock Mar 12 01:22:51.670169 kernel: VFS: Disk quotas dquot_6.6.0 Mar 12 01:22:51.670176 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 12 01:22:51.670183 kernel: pnp: PnP ACPI init Mar 12 01:22:51.670517 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 12 01:22:51.670533 kernel: pnp: PnP ACPI: found 6 devices Mar 12 01:22:51.670540 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 12 01:22:51.670547 kernel: NET: Registered PF_INET protocol family Mar 12 01:22:51.670560 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 12 01:22:51.670568 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 12 01:22:51.670581 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 12 01:22:51.670594 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 12 01:22:51.670606 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 12 01:22:51.670613 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 12 01:22:51.670620 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 12 01:22:51.670628 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 12 01:22:51.670639 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 12 01:22:51.670645 kernel: NET: Registered PF_XDP protocol family Mar 12 01:22:51.670818 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Mar 12 01:22:51.671025 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Mar 12 01:22:51.671242 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 12 01:22:51.671474 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 12 01:22:51.671658 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 12 01:22:51.671822 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 12 01:22:51.672041 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 12 01:22:51.672324 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Mar 12 01:22:51.672340 kernel: PCI: CLS 0 bytes, default 64 Mar 12 01:22:51.672348 kernel: Initialise system trusted keyrings Mar 12 01:22:51.672355 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 12 01:22:51.672362 kernel: Key type asymmetric registered Mar 12 01:22:51.672369 kernel: Asymmetric key parser 'x509' registered Mar 12 01:22:51.672376 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 12 01:22:51.672383 kernel: io scheduler mq-deadline registered Mar 12 01:22:51.672395 kernel: io scheduler kyber registered Mar 12 01:22:51.672402 kernel: io scheduler bfq registered Mar 12 01:22:51.672409 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 12 01:22:51.672417 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 12 01:22:51.672424 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 12 01:22:51.672431 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 12 01:22:51.672439 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 12 01:22:51.672446 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 
115200) is a 16550A Mar 12 01:22:51.672453 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 12 01:22:51.672463 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 12 01:22:51.672470 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 12 01:22:51.672778 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 12 01:22:51.672800 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 12 01:22:51.673043 kernel: rtc_cmos 00:04: registered as rtc0 Mar 12 01:22:51.673224 kernel: rtc_cmos 00:04: setting system clock to 2026-03-12T01:22:50 UTC (1773278570) Mar 12 01:22:51.673505 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 12 01:22:51.673527 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 12 01:22:51.673544 kernel: efifb: probing for efifb Mar 12 01:22:51.673551 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Mar 12 01:22:51.673558 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Mar 12 01:22:51.673565 kernel: efifb: scrolling: redraw Mar 12 01:22:51.673574 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Mar 12 01:22:51.673587 kernel: Console: switching to colour frame buffer device 100x37 Mar 12 01:22:51.673600 kernel: fb0: EFI VGA frame buffer device Mar 12 01:22:51.673613 kernel: pstore: Using crash dump compression: deflate Mar 12 01:22:51.673627 kernel: pstore: Registered efi_pstore as persistent store backend Mar 12 01:22:51.673646 kernel: NET: Registered PF_INET6 protocol family Mar 12 01:22:51.673659 kernel: Segment Routing with IPv6 Mar 12 01:22:51.673673 kernel: In-situ OAM (IOAM) with IPv6 Mar 12 01:22:51.673685 kernel: NET: Registered PF_PACKET protocol family Mar 12 01:22:51.673698 kernel: Key type dns_resolver registered Mar 12 01:22:51.673711 kernel: IPI shorthand broadcast: enabled Mar 12 01:22:51.673760 kernel: sched_clock: Marking stable (3967027215, 571716147)->(4765940464, -227197102) Mar 12 01:22:51.673779 kernel: registered taskstats version 1 Mar 12 01:22:51.673792 kernel: Loading compiled-in X.509 certificates Mar 12 01:22:51.673811 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 67287262975845098ef9f337a0e8baa9afd38510' Mar 12 01:22:51.673824 kernel: Key type .fscrypt registered Mar 12 01:22:51.673838 kernel: Key type fscrypt-provisioning registered Mar 12 01:22:51.673852 kernel: ima: No TPM chip found, activating TPM-bypass! 
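
The rtc_cmos line above prints the same instant twice, as an ISO timestamp and as a Unix epoch value; the two agree:

    from datetime import datetime, timezone

    # Epoch value from the rtc_cmos line above
    print(datetime.fromtimestamp(1773278570, tz=timezone.utc).isoformat())
    # -> 2026-03-12T01:22:50+00:00
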
Mar 12 01:22:51.673864 kernel: ima: Allocated hash algorithm: sha1 Mar 12 01:22:51.673876 kernel: ima: No architecture policies found Mar 12 01:22:51.673883 kernel: clk: Disabling unused clocks Mar 12 01:22:51.673890 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 12 01:22:51.673904 kernel: Write protecting the kernel read-only data: 36864k Mar 12 01:22:51.673926 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 12 01:22:51.673940 kernel: Run /init as init process Mar 12 01:22:51.673953 kernel: with arguments: Mar 12 01:22:51.674022 kernel: /init Mar 12 01:22:51.674036 kernel: with environment: Mar 12 01:22:51.674044 kernel: HOME=/ Mar 12 01:22:51.674051 kernel: TERM=linux Mar 12 01:22:51.674094 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 12 01:22:51.674110 systemd[1]: Detected virtualization kvm. Mar 12 01:22:51.674118 systemd[1]: Detected architecture x86-64. Mar 12 01:22:51.674126 systemd[1]: Running in initrd. Mar 12 01:22:51.674133 systemd[1]: No hostname configured, using default hostname. Mar 12 01:22:51.674140 systemd[1]: Hostname set to <localhost>. Mar 12 01:22:51.674149 systemd[1]: Initializing machine ID from VM UUID. Mar 12 01:22:51.674156 systemd[1]: Queued start job for default target initrd.target. Mar 12 01:22:51.674164 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:22:51.674175 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 01:22:51.674184 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 12 01:22:51.674195 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 12 01:22:51.674202 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 12 01:22:51.674213 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 12 01:22:51.674225 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 12 01:22:51.674233 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 12 01:22:51.674241 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:22:51.674248 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 12 01:22:51.674315 systemd[1]: Reached target paths.target - Path Units. Mar 12 01:22:51.674323 systemd[1]: Reached target slices.target - Slice Units. Mar 12 01:22:51.674335 systemd[1]: Reached target swap.target - Swaps. Mar 12 01:22:51.674342 systemd[1]: Reached target timers.target - Timer Units. Mar 12 01:22:51.674350 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 01:22:51.674358 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 01:22:51.674366 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 12 01:22:51.674376 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 12 01:22:51.674390 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 12 01:22:51.674402 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 12 01:22:51.674417 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 01:22:51.674438 systemd[1]: Reached target sockets.target - Socket Units. Mar 12 01:22:51.674452 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 12 01:22:51.674466 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 12 01:22:51.674479 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 12 01:22:51.674495 systemd[1]: Starting systemd-fsck-usr.service... Mar 12 01:22:51.674509 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 12 01:22:51.674523 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 12 01:22:51.674538 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:22:51.674594 systemd-journald[195]: Collecting audit messages is disabled. Mar 12 01:22:51.674621 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 12 01:22:51.674629 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 01:22:51.674638 systemd-journald[195]: Journal started Mar 12 01:22:51.674664 systemd-journald[195]: Runtime Journal (/run/log/journal/9dee298f48b14f3286829d49ad4c51cc) is 6.0M, max 48.3M, 42.2M free. Mar 12 01:22:51.675715 systemd-modules-load[196]: Inserted module 'overlay' Mar 12 01:22:51.698414 systemd[1]: Started systemd-journald.service - Journal Service. Mar 12 01:22:51.699474 systemd[1]: Finished systemd-fsck-usr.service. Mar 12 01:22:51.707799 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:22:51.739348 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 12 01:22:51.743819 systemd-modules-load[196]: Inserted module 'br_netfilter' Mar 12 01:22:51.759026 kernel: Bridge firewalling registered Mar 12 01:22:51.751727 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 01:22:51.768708 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 12 01:22:51.786124 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 01:22:51.795581 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 01:22:51.796733 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 01:22:51.797531 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 12 01:22:51.857685 dracut-cmdline[221]: dracut-dracut-053 Mar 12 01:22:51.857685 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc Mar 12 01:22:51.801630 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
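
The dracut-cmdline lines show the effective kernel command line; rootflags=rw and mount.usrflags=ro appear twice because dracut prepends its own copy, and since the values are identical the repetition is benign. A simplified parser (it ignores the double-quoting the kernel itself accepts) makes the structure easier to inspect:

    from pathlib import Path

    def parse_cmdline(cmdline: str) -> dict:
        # Simplified sketch: real kernel parsing also honors double-quoted values.
        args = {}
        for tok in cmdline.split():
            key, sep, val = tok.partition("=")
            args[key] = val if sep else None   # later duplicates overwrite earlier ones
        return args

    args = parse_cmdline(Path("/proc/cmdline").read_text())
    print(args["root"], args["mount.usr"], args["verity.usrhash"])
    # On this boot: LABEL=ROOT /dev/mapper/usr 0e4243d51ac00bff...
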
Mar 12 01:22:51.803945 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 01:22:51.814536 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 12 01:22:51.860610 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:22:51.871187 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 01:22:51.912627 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:22:51.961745 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 12 01:22:52.036501 kernel: SCSI subsystem initialized Mar 12 01:22:52.059507 kernel: Loading iSCSI transport class v2.0-870. Mar 12 01:22:52.083505 kernel: iscsi: registered transport (tcp) Mar 12 01:22:52.133472 systemd-resolved[288]: Positive Trust Anchors: Mar 12 01:22:52.139724 kernel: iscsi: registered transport (qla4xxx) Mar 12 01:22:52.139762 kernel: QLogic iSCSI HBA Driver Mar 12 01:22:52.133739 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 12 01:22:52.133790 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 12 01:22:52.156851 systemd-resolved[288]: Defaulting to hostname 'linux'. Mar 12 01:22:52.162532 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 01:22:52.193046 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 01:22:52.230908 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 12 01:22:52.254804 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 12 01:22:52.316951 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 12 01:22:52.317430 kernel: device-mapper: uevent: version 1.0.3 Mar 12 01:22:52.317457 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 12 01:22:52.412047 kernel: raid6: avx2x4 gen() 18355 MB/s Mar 12 01:22:52.430422 kernel: raid6: avx2x2 gen() 18880 MB/s Mar 12 01:22:52.467105 kernel: raid6: avx2x1 gen() 11825 MB/s Mar 12 01:22:52.468693 kernel: raid6: using algorithm avx2x2 gen() 18880 MB/s Mar 12 01:22:52.491012 kernel: raid6: .... xor() 14375 MB/s, rmw enabled Mar 12 01:22:52.491222 kernel: raid6: using avx2x2 recovery algorithm Mar 12 01:22:52.528533 kernel: xor: automatically using best checksumming function avx Mar 12 01:22:52.841733 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 12 01:22:52.909190 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 12 01:22:52.939393 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:22:53.019846 systemd-udevd[414]: Using default interface naming scheme 'v255'. Mar 12 01:22:53.032918 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Mar 12 01:22:53.070023 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 12 01:22:53.133049 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Mar 12 01:22:53.233187 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 01:22:53.271838 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 01:22:53.438445 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:22:53.482713 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 12 01:22:53.514114 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 12 01:22:53.521459 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 01:22:53.537174 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:22:53.553546 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 01:22:53.581764 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 12 01:22:53.582394 kernel: cryptd: max_cpu_qlen set to 1000 Mar 12 01:22:53.584691 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 12 01:22:53.618180 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 12 01:22:53.618547 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 12 01:22:53.618571 kernel: GPT:9289727 != 19775487 Mar 12 01:22:53.618603 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 12 01:22:53.618624 kernel: GPT:9289727 != 19775487 Mar 12 01:22:53.618640 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 12 01:22:53.618655 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 01:22:53.594863 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 01:22:53.595161 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 01:22:53.636518 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 01:22:53.652857 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 01:22:53.705768 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:22:53.722211 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:22:53.739842 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (458) Mar 12 01:22:53.760154 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:22:53.786440 kernel: BTRFS: device fsid 94537345-7f6b-4b2a-965f-248bd6f0b7eb devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (469) Mar 12 01:22:53.774642 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 12 01:22:53.794447 kernel: libata version 3.00 loaded. Mar 12 01:22:53.803895 kernel: ahci 0000:00:1f.2: version 3.0 Mar 12 01:22:53.804604 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 12 01:22:53.818583 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 12 01:22:53.819037 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 12 01:22:53.840373 kernel: scsi host0: ahci Mar 12 01:22:53.852536 kernel: AVX2 version of gcm_enc/dec engaged. 
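
The GPT warnings above are typical of a disk image that was enlarged after partitioning: the device now has 19775488 sectors, but the backup GPT header still sits at LBA 9289727 rather than at the last sector, 19775487. The check the kernel performs can be reproduced with a short sketch (run with read access to the whole disk, e.g. /dev/vda; field offsets per the UEFI GPT header layout). Tools such as GNU Parted or sgdisk can relocate the backup header to the end of the disk, as the log itself suggests.

    import struct, sys

    dev = sys.argv[1]                        # e.g. /dev/vda
    with open(dev, "rb") as f:
        f.seek(512)                          # primary GPT header lives at LBA 1
        hdr = f.read(92)
        f.seek(0, 2)                         # seek to end to get the device size
        last_lba = f.tell() // 512 - 1
    assert hdr[:8] == b"EFI PART", "no GPT signature"
    (backup_lba,) = struct.unpack_from("<Q", hdr, 32)   # AlternateLBA field
    if backup_lba != last_lba:
        print(f"backup GPT header at LBA {backup_lba}, expected {last_lba}")

On the disk above this prints a mismatch of 9289727 vs 19775487, exactly the pair the kernel reports.
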
Mar 12 01:22:53.852584 kernel: AES CTR mode by8 optimization enabled Mar 12 01:22:53.859577 kernel: scsi host1: ahci Mar 12 01:22:53.860555 kernel: scsi host2: ahci Mar 12 01:22:53.859532 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 12 01:22:53.886140 kernel: scsi host3: ahci Mar 12 01:22:53.886508 kernel: scsi host4: ahci Mar 12 01:22:53.886812 kernel: scsi host5: ahci Mar 12 01:22:53.887224 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 Mar 12 01:22:53.887238 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 Mar 12 01:22:53.887328 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 Mar 12 01:22:53.887343 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 Mar 12 01:22:53.887361 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 Mar 12 01:22:53.875136 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 12 01:22:53.907636 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 Mar 12 01:22:53.905190 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 12 01:22:53.912069 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 12 01:22:53.923746 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 12 01:22:53.949872 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 12 01:22:53.956544 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 01:22:53.956632 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:22:53.961917 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:22:53.988684 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:22:54.000401 disk-uuid[545]: Primary Header is updated. Mar 12 01:22:54.000401 disk-uuid[545]: Secondary Entries is updated. Mar 12 01:22:54.000401 disk-uuid[545]: Secondary Header is updated. Mar 12 01:22:54.016194 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 01:22:54.016231 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 01:22:54.024358 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 01:22:54.031336 kernel: block device autoloading is deprecated and will be removed. Mar 12 01:22:54.036522 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:22:54.072773 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 01:22:54.128200 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 12 01:22:54.208362 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 12 01:22:54.213523 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 12 01:22:54.218397 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 12 01:22:54.224641 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 12 01:22:54.224684 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 12 01:22:54.224698 kernel: ata3.00: applying bridge limits Mar 12 01:22:54.231464 kernel: ata3.00: configured for UDMA/100 Mar 12 01:22:54.238617 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 12 01:22:54.238832 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 12 01:22:54.248753 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 12 01:22:54.315695 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 12 01:22:54.316379 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 12 01:22:54.332517 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 12 01:22:55.027413 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 01:22:55.029369 disk-uuid[546]: The operation has completed successfully. Mar 12 01:22:55.102078 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 12 01:22:55.102492 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 12 01:22:55.135647 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 12 01:22:55.159047 sh[600]: Success Mar 12 01:22:55.187397 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 12 01:22:55.280914 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 12 01:22:55.307908 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 12 01:22:55.317244 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 12 01:22:55.358904 kernel: BTRFS info (device dm-0): first mount of filesystem 94537345-7f6b-4b2a-965f-248bd6f0b7eb Mar 12 01:22:55.359077 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:22:55.359094 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 12 01:22:55.368617 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 12 01:22:55.368676 kernel: BTRFS info (device dm-0): using free space tree Mar 12 01:22:55.388410 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 12 01:22:55.389563 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 12 01:22:55.397553 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 12 01:22:55.435432 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:22:55.435473 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:22:55.435490 kernel: BTRFS info (device vda6): using free space tree Mar 12 01:22:55.411776 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 12 01:22:55.453893 kernel: BTRFS info (device vda6): auto enabling async discard Mar 12 01:22:55.474358 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 12 01:22:55.483623 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:22:55.496648 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Mar 12 01:22:55.514534 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 12 01:22:55.723894 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 01:22:55.757225 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 12 01:22:55.812998 systemd-networkd[782]: lo: Link UP Mar 12 01:22:55.813049 systemd-networkd[782]: lo: Gained carrier Mar 12 01:22:55.816445 systemd-networkd[782]: Enumeration completed Mar 12 01:22:55.816655 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 01:22:55.819123 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:22:55.819128 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 01:22:55.821100 systemd-networkd[782]: eth0: Link UP Mar 12 01:22:55.821105 systemd-networkd[782]: eth0: Gained carrier Mar 12 01:22:55.821114 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:22:55.826068 systemd[1]: Reached target network.target - Network. Mar 12 01:22:55.895733 ignition[692]: Ignition 2.19.0 Mar 12 01:22:55.895746 ignition[692]: Stage: fetch-offline Mar 12 01:22:55.895873 ignition[692]: no configs at "/usr/lib/ignition/base.d" Mar 12 01:22:55.895897 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:22:55.896196 ignition[692]: parsed url from cmdline: "" Mar 12 01:22:55.896204 ignition[692]: no config URL provided Mar 12 01:22:55.896218 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Mar 12 01:22:55.896237 ignition[692]: no config at "/usr/lib/ignition/user.ign" Mar 12 01:22:55.896378 ignition[692]: op(1): [started] loading QEMU firmware config module Mar 12 01:22:55.896389 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 12 01:22:55.914004 ignition[692]: op(1): [finished] loading QEMU firmware config module Mar 12 01:22:55.969669 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 12 01:22:56.259540 ignition[692]: parsing config with SHA512: f33311f127136743ce67e384eb2034ee2947b584210e06809763570feb0d7baa498ef81abb464ffcceb34c736db59ad08c4c9c2cd8b357125433927cf8d27e9c Mar 12 01:22:56.292711 unknown[692]: fetched base config from "system" Mar 12 01:22:56.292796 unknown[692]: fetched user config from "qemu" Mar 12 01:22:56.300116 ignition[692]: fetch-offline: fetch-offline passed Mar 12 01:22:56.300333 ignition[692]: Ignition finished successfully Mar 12 01:22:56.309395 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 01:22:56.314756 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 12 01:22:56.341720 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 12 01:22:56.648701 ignition[792]: Ignition 2.19.0 Mar 12 01:22:56.648842 ignition[792]: Stage: kargs Mar 12 01:22:56.650494 ignition[792]: no configs at "/usr/lib/ignition/base.d" Mar 12 01:22:56.665737 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
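
On the QEMU platform, the Ignition fetch-offline stage above gets its user config from QEMU's fw_cfg interface: op(1) loads the qemu_fw_cfg module, after which the config is exposed in sysfs under the well-known key opt/com.coreos/config. A sketch of the equivalent manual read (the sysfs path follows the kernel's qemu_fw_cfg driver layout), including a digest like the SHA512 Ignition logs while parsing:

    import hashlib
    from pathlib import Path

    # Entries appear under by_name once the qemu_fw_cfg module is loaded
    cfg = Path("/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw")
    raw = cfg.read_bytes()
    print(len(raw), "bytes, sha512", hashlib.sha512(raw).hexdigest()[:16], "...")
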
Mar 12 01:22:56.650517 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:22:56.653922 ignition[792]: kargs: kargs passed Mar 12 01:22:56.654110 ignition[792]: Ignition finished successfully Mar 12 01:22:56.698046 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 12 01:22:56.766588 ignition[800]: Ignition 2.19.0 Mar 12 01:22:56.766632 ignition[800]: Stage: disks Mar 12 01:22:56.771129 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 12 01:22:56.766898 ignition[800]: no configs at "/usr/lib/ignition/base.d" Mar 12 01:22:56.777773 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 12 01:22:56.766919 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:22:56.785550 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 12 01:22:56.768431 ignition[800]: disks: disks passed Mar 12 01:22:56.796604 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 01:22:56.768523 ignition[800]: Ignition finished successfully Mar 12 01:22:56.802048 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 01:22:56.807331 systemd[1]: Reached target basic.target - Basic System. Mar 12 01:22:56.843447 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 12 01:22:56.920403 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 12 01:22:56.928031 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 12 01:22:56.962912 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 12 01:22:57.144680 kernel: EXT4-fs (vda9): mounted filesystem f90926b1-4cc2-4a2d-8c45-4ec584c98779 r/w with ordered data mode. Quota mode: none. Mar 12 01:22:57.145670 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 12 01:22:57.155911 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 12 01:22:57.187665 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 01:22:57.204554 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 12 01:22:57.223747 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818) Mar 12 01:22:57.223785 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:22:57.223805 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:22:57.223823 kernel: BTRFS info (device vda6): using free space tree Mar 12 01:22:57.210155 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 12 01:22:57.256762 kernel: BTRFS info (device vda6): auto enabling async discard Mar 12 01:22:57.210227 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 12 01:22:57.210376 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 01:22:57.239241 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 12 01:22:57.256888 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 12 01:22:57.285947 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 12 01:22:57.324547 systemd-networkd[782]: eth0: Gained IPv6LL Mar 12 01:22:57.367652 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Mar 12 01:22:57.379455 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Mar 12 01:22:57.393335 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Mar 12 01:22:57.402526 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Mar 12 01:22:57.795923 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 12 01:22:57.825548 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 12 01:22:57.835954 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 12 01:22:57.849845 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:22:57.851628 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 12 01:22:57.882073 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 12 01:22:57.921615 ignition[930]: INFO : Ignition 2.19.0 Mar 12 01:22:57.921615 ignition[930]: INFO : Stage: mount Mar 12 01:22:57.929342 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:22:57.929342 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:22:57.929342 ignition[930]: INFO : mount: mount passed Mar 12 01:22:57.929342 ignition[930]: INFO : Ignition finished successfully Mar 12 01:22:57.926385 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 12 01:22:57.960593 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 12 01:22:57.977082 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 01:22:58.032802 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944) Mar 12 01:22:58.033104 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:22:58.050868 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:22:58.051710 kernel: BTRFS info (device vda6): using free space tree Mar 12 01:22:58.084050 kernel: BTRFS info (device vda6): auto enabling async discard Mar 12 01:22:58.105563 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 12 01:22:58.267739 ignition[962]: INFO : Ignition 2.19.0 Mar 12 01:22:58.267739 ignition[962]: INFO : Stage: files Mar 12 01:22:58.280210 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:22:58.280210 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:22:58.280210 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Mar 12 01:22:58.280210 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 12 01:22:58.280210 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 12 01:22:58.332557 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 12 01:22:58.332557 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 12 01:22:58.332557 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 12 01:22:58.330659 unknown[962]: wrote ssh authorized keys file for user: core Mar 12 01:22:58.359463 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 12 01:22:58.359463 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 12 01:22:58.405493 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 12 01:22:58.863845 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 12 01:22:58.863845 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 12 01:22:58.879367 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 12 01:22:58.879367 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 12 01:22:58.879367 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 12 01:22:58.879367 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 01:22:58.879367 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 01:22:58.879367 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 01:22:58.879367 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 01:22:58.879367 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 01:22:58.879367 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 01:22:58.879367 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 12 01:22:58.879367 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 12 01:22:58.879367 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 12 01:22:58.879367 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Mar 12 01:22:59.357606 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 12 01:23:01.738926 kernel: hrtimer: interrupt took 16608814 ns Mar 12 01:23:02.055830 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 12 01:23:02.055830 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 12 01:23:02.071062 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 01:23:02.071062 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 01:23:02.071062 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 12 01:23:02.071062 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 12 01:23:02.071062 ignition[962]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 12 01:23:02.071062 ignition[962]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 12 01:23:02.071062 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 12 01:23:02.071062 ignition[962]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 12 01:23:02.136530 ignition[962]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 12 01:23:02.136530 ignition[962]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 12 01:23:02.136530 ignition[962]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 12 01:23:02.136530 ignition[962]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 12 01:23:02.136530 ignition[962]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 12 01:23:02.136530 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 12 01:23:02.136530 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 12 01:23:02.136530 ignition[962]: INFO : files: files passed Mar 12 01:23:02.136530 ignition[962]: INFO : Ignition finished successfully Mar 12 01:23:02.116423 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 12 01:23:02.158608 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 12 01:23:02.171760 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Mar 12 01:23:02.181479 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 12 01:23:02.244840 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory Mar 12 01:23:02.181691 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 12 01:23:02.255481 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:23:02.255481 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:23:02.202879 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 01:23:02.275656 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:23:02.208461 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 12 01:23:02.218899 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 12 01:23:02.276868 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 12 01:23:02.277136 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 12 01:23:02.442751 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 12 01:23:02.449110 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 12 01:23:02.455122 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 12 01:23:02.474735 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 12 01:23:02.498727 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 01:23:02.501616 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 12 01:23:02.533066 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 12 01:23:02.545661 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:23:02.556352 systemd[1]: Stopped target timers.target - Timer Units. Mar 12 01:23:02.564707 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 12 01:23:02.569532 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 01:23:02.582587 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 12 01:23:02.591720 systemd[1]: Stopped target basic.target - Basic System. Mar 12 01:23:02.601812 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 12 01:23:02.614099 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 01:23:02.628076 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 12 01:23:02.637537 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 12 01:23:02.646687 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 01:23:02.657413 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 12 01:23:02.665890 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 12 01:23:02.676798 systemd[1]: Stopped target swap.target - Swaps. Mar 12 01:23:02.685188 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 12 01:23:02.690481 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 12 01:23:02.703521 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Mar 12 01:23:02.715553 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:23:02.730438 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 12 01:23:02.736246 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:23:02.749623 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 12 01:23:02.753596 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 12 01:23:02.762665 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 12 01:23:02.767224 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 01:23:02.777597 systemd[1]: Stopped target paths.target - Path Units. Mar 12 01:23:02.786020 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 12 01:23:02.791110 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 01:23:02.804063 systemd[1]: Stopped target slices.target - Slice Units. Mar 12 01:23:02.812763 systemd[1]: Stopped target sockets.target - Socket Units. Mar 12 01:23:02.820845 systemd[1]: iscsid.socket: Deactivated successfully. Mar 12 01:23:02.824950 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 01:23:02.834368 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 12 01:23:02.838529 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 01:23:02.848537 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 12 01:23:02.853842 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 01:23:02.865633 systemd[1]: ignition-files.service: Deactivated successfully. Mar 12 01:23:02.873066 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 12 01:23:02.895679 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 12 01:23:02.909686 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 12 01:23:02.922649 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 12 01:23:02.943615 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:23:02.969823 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 12 01:23:02.976617 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 01:23:02.997175 ignition[1016]: INFO : Ignition 2.19.0 Mar 12 01:23:02.997175 ignition[1016]: INFO : Stage: umount Mar 12 01:23:02.997175 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:23:02.997175 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:23:03.030512 ignition[1016]: INFO : umount: umount passed Mar 12 01:23:03.030512 ignition[1016]: INFO : Ignition finished successfully Mar 12 01:23:03.072227 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 12 01:23:03.083611 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 12 01:23:03.089103 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 12 01:23:03.111131 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 12 01:23:03.117403 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 12 01:23:03.131226 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 12 01:23:03.135167 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 12 01:23:03.150648 systemd[1]: Stopped target network.target - Network. Mar 12 01:23:03.155415 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 12 01:23:03.163231 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 12 01:23:03.166449 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 12 01:23:03.166550 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 12 01:23:03.183682 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 12 01:23:03.183772 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 12 01:23:03.199655 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 12 01:23:03.199768 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 12 01:23:03.216020 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 12 01:23:03.216114 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 12 01:23:03.227782 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 12 01:23:03.238882 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 12 01:23:03.261679 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 12 01:23:03.262051 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 12 01:23:03.273021 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 12 01:23:03.273110 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 01:23:03.292510 systemd-networkd[782]: eth0: DHCPv6 lease lost Mar 12 01:23:03.297601 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 12 01:23:03.302193 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 12 01:23:03.312726 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 12 01:23:03.312853 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 12 01:23:03.336447 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 12 01:23:03.336583 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 12 01:23:03.336648 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 01:23:03.345347 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 12 01:23:03.345405 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:23:03.363557 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 12 01:23:03.363619 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 12 01:23:03.372410 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:23:03.400703 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 12 01:23:03.401373 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:23:03.425470 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 12 01:23:03.425636 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 12 01:23:03.435603 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 12 01:23:03.435667 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 01:23:03.444694 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 12 01:23:03.444780 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Mar 12 01:23:03.446441 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 12 01:23:03.446511 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 12 01:23:03.449042 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 01:23:03.449126 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 01:23:03.478623 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 12 01:23:03.489069 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 12 01:23:03.489170 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:23:03.500030 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 12 01:23:03.500105 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 12 01:23:03.509135 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 12 01:23:03.509232 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 01:23:03.523500 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 01:23:03.523758 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:23:03.534478 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 12 01:23:03.534740 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 12 01:23:03.544064 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 12 01:23:03.544365 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 12 01:23:03.556944 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 12 01:23:03.591745 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 12 01:23:03.602074 systemd[1]: Switching root. Mar 12 01:23:03.643597 systemd-journald[195]: Journal stopped Mar 12 01:23:05.810055 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Mar 12 01:23:05.810156 kernel: SELinux: policy capability network_peer_controls=1 Mar 12 01:23:05.810186 kernel: SELinux: policy capability open_perms=1 Mar 12 01:23:05.810207 kernel: SELinux: policy capability extended_socket_class=1 Mar 12 01:23:05.810228 kernel: SELinux: policy capability always_check_network=0 Mar 12 01:23:05.810247 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 12 01:23:05.810346 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 12 01:23:05.810365 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 12 01:23:05.810384 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 12 01:23:05.810409 kernel: audit: type=1403 audit(1773278583.926:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 12 01:23:05.810438 systemd[1]: Successfully loaded SELinux policy in 105.244ms. Mar 12 01:23:05.810477 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 41.010ms. Mar 12 01:23:05.810498 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 12 01:23:05.810519 systemd[1]: Detected virtualization kvm. Mar 12 01:23:05.810540 systemd[1]: Detected architecture x86-64. 
Mar 12 01:23:05.810561 systemd[1]: Detected first boot. Mar 12 01:23:05.810579 systemd[1]: Initializing machine ID from VM UUID. Mar 12 01:23:05.810598 zram_generator::config[1060]: No configuration found. Mar 12 01:23:05.810623 systemd[1]: Populated /etc with preset unit settings. Mar 12 01:23:05.810643 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 12 01:23:05.810663 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 12 01:23:05.810692 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 12 01:23:05.810717 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 12 01:23:05.810736 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 12 01:23:05.810764 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 12 01:23:05.810783 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 12 01:23:05.810808 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 12 01:23:05.810828 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 12 01:23:05.810848 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 12 01:23:05.810866 systemd[1]: Created slice user.slice - User and Session Slice. Mar 12 01:23:05.810885 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:23:05.810905 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 01:23:05.810925 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 12 01:23:05.810945 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 12 01:23:05.811015 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 12 01:23:05.811043 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 12 01:23:05.811064 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 12 01:23:05.811083 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:23:05.811103 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 12 01:23:05.811123 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 12 01:23:05.811142 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 12 01:23:05.811161 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 12 01:23:05.811187 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:23:05.811212 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 01:23:05.811233 systemd[1]: Reached target slices.target - Slice Units. Mar 12 01:23:05.811332 systemd[1]: Reached target swap.target - Swaps. Mar 12 01:23:05.811357 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 12 01:23:05.811378 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 12 01:23:05.811399 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 12 01:23:05.811418 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Mar 12 01:23:05.811437 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 01:23:05.811457 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 12 01:23:05.811483 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 12 01:23:05.811504 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 12 01:23:05.811523 systemd[1]: Mounting media.mount - External Media Directory... Mar 12 01:23:05.811542 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:23:05.811563 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 12 01:23:05.811585 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 12 01:23:05.811602 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 12 01:23:05.811622 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 12 01:23:05.811648 systemd[1]: Reached target machines.target - Containers. Mar 12 01:23:05.811668 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 12 01:23:05.811690 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:23:05.811707 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 12 01:23:05.811730 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 12 01:23:05.811750 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:23:05.811779 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 01:23:05.811798 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:23:05.811817 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 12 01:23:05.811842 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:23:05.811862 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 12 01:23:05.811883 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 12 01:23:05.811903 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 12 01:23:05.811921 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 12 01:23:05.811941 systemd[1]: Stopped systemd-fsck-usr.service. Mar 12 01:23:05.812013 kernel: fuse: init (API version 7.39) Mar 12 01:23:05.812032 kernel: loop: module loaded Mar 12 01:23:05.812051 kernel: ACPI: bus type drm_connector registered Mar 12 01:23:05.812078 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 12 01:23:05.812099 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 12 01:23:05.812119 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 12 01:23:05.812140 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 12 01:23:05.812193 systemd-journald[1144]: Collecting audit messages is disabled. 
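The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services started above are instances of systemd's modprobe@.service template; the kernel's "fuse: init", "loop: module loaded", and "drm_connector registered" lines confirm the modules arriving. Abridged from the upstream template (exact contents vary by systemd version):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i    # %i is the instance name, e.g. dm_mod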
Mar 12 01:23:05.812237 systemd-journald[1144]: Journal started Mar 12 01:23:05.812358 systemd-journald[1144]: Runtime Journal (/run/log/journal/9dee298f48b14f3286829d49ad4c51cc) is 6.0M, max 48.3M, 42.2M free. Mar 12 01:23:05.107851 systemd[1]: Queued start job for default target multi-user.target. Mar 12 01:23:05.136933 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 12 01:23:05.137751 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 12 01:23:05.138386 systemd[1]: systemd-journald.service: Consumed 2.630s CPU time. Mar 12 01:23:05.825754 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 01:23:05.833517 systemd[1]: verity-setup.service: Deactivated successfully. Mar 12 01:23:05.833572 systemd[1]: Stopped verity-setup.service. Mar 12 01:23:05.846494 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:23:05.857399 systemd[1]: Started systemd-journald.service - Journal Service. Mar 12 01:23:05.859336 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 12 01:23:05.864550 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 12 01:23:05.869555 systemd[1]: Mounted media.mount - External Media Directory. Mar 12 01:23:05.875087 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 12 01:23:05.880645 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 12 01:23:05.885571 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 12 01:23:05.890396 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 12 01:23:05.895641 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 01:23:05.901199 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 12 01:23:05.901547 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 12 01:23:05.906658 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:23:05.906918 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:23:05.912079 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 12 01:23:05.912404 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 01:23:05.917173 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:23:05.917688 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:23:05.923057 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 12 01:23:05.923393 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 12 01:23:05.928155 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:23:05.928603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:23:05.933352 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 01:23:05.938631 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 12 01:23:05.944840 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 12 01:23:05.969490 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 12 01:23:05.985557 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Mar 12 01:23:05.994113 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 12 01:23:06.000399 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 12 01:23:06.000461 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 01:23:06.007348 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 12 01:23:06.020600 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 12 01:23:06.028320 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 12 01:23:06.034246 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:23:06.042802 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 12 01:23:06.051364 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 12 01:23:06.057097 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 01:23:06.059567 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 12 01:23:06.065833 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 01:23:06.070565 systemd-journald[1144]: Time spent on flushing to /var/log/journal/9dee298f48b14f3286829d49ad4c51cc is 44.158ms for 987 entries. Mar 12 01:23:06.070565 systemd-journald[1144]: System Journal (/var/log/journal/9dee298f48b14f3286829d49ad4c51cc) is 8.0M, max 195.6M, 187.6M free. Mar 12 01:23:06.215650 systemd-journald[1144]: Received client request to flush runtime journal. Mar 12 01:23:06.090066 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 01:23:06.110629 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 12 01:23:06.123671 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 12 01:23:06.134439 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:23:06.141678 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 12 01:23:06.148835 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 12 01:23:06.156504 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 12 01:23:06.170528 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 12 01:23:06.209429 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 12 01:23:06.229067 kernel: loop0: detected capacity change from 0 to 219192 Mar 12 01:23:06.231681 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 12 01:23:06.253574 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 12 01:23:06.260225 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 12 01:23:06.291390 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 12 01:23:06.312436 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
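The journald lines above show the runtime journal under /run/log/journal being flushed to persistent storage in /var/log/journal once the root filesystem is writable (44.158ms for 987 entries here). On a running system the same figures are available via journalctl:

    journalctl --disk-usage    # total space consumed by journal files
    journalctl --header        # per-file metadata, including the machine-id path component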
Mar 12 01:23:06.314143 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 12 01:23:06.321674 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:23:06.332615 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 12 01:23:06.333825 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Mar 12 01:23:06.333844 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Mar 12 01:23:06.340396 kernel: loop1: detected capacity change from 0 to 142488 Mar 12 01:23:06.352917 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 12 01:23:06.367702 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 12 01:23:06.450341 kernel: loop2: detected capacity change from 0 to 140768 Mar 12 01:23:06.537892 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 12 01:23:06.560048 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 12 01:23:06.581416 kernel: loop3: detected capacity change from 0 to 219192 Mar 12 01:23:06.633404 kernel: loop4: detected capacity change from 0 to 142488 Mar 12 01:23:06.670093 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Mar 12 01:23:06.670628 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Mar 12 01:23:06.679714 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:23:06.688414 kernel: loop5: detected capacity change from 0 to 140768 Mar 12 01:23:06.713597 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 12 01:23:06.714536 (sd-merge)[1200]: Merged extensions into '/usr'. Mar 12 01:23:06.724889 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)... Mar 12 01:23:06.724942 systemd[1]: Reloading... Mar 12 01:23:07.295482 zram_generator::config[1225]: No configuration found. Mar 12 01:23:07.979655 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 12 01:23:08.028762 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:23:08.104445 systemd[1]: Reloading finished in 1378 ms. Mar 12 01:23:08.139626 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 12 01:23:08.148185 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 12 01:23:08.174832 systemd[1]: Starting ensure-sysext.service... Mar 12 01:23:08.180334 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 01:23:08.193778 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)... Mar 12 01:23:08.193842 systemd[1]: Reloading... Mar 12 01:23:08.295699 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 12 01:23:08.296452 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 12 01:23:08.301414 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
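The (sd-merge) lines above record systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes sysext images onto /usr, followed by the daemon reload that makes the newly merged unit files visible. The merge can be inspected or redone by hand (run as root):

    systemd-sysext status     # which hierarchies are extended, and by what
    systemd-sysext unmerge    # drop the overlays
    systemd-sysext merge      # re-scan /etc/extensions and /var/lib/extensions, overlay again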
Mar 12 01:23:08.301873 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Mar 12 01:23:08.302134 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Mar 12 01:23:08.311953 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 01:23:08.312074 systemd-tmpfiles[1268]: Skipping /boot Mar 12 01:23:08.323385 zram_generator::config[1303]: No configuration found. Mar 12 01:23:08.715118 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 01:23:08.715175 systemd-tmpfiles[1268]: Skipping /boot Mar 12 01:23:08.838826 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:23:09.010576 systemd[1]: Reloading finished in 815 ms. Mar 12 01:23:09.036550 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 12 01:23:09.063617 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 01:23:09.101847 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 12 01:23:09.110940 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 12 01:23:09.120735 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 12 01:23:09.137811 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 12 01:23:09.159574 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:23:09.183830 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 12 01:23:09.225767 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 12 01:23:09.234631 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 12 01:23:09.242753 augenrules[1355]: No rules Mar 12 01:23:09.247161 systemd-udevd[1349]: Using default interface naming scheme 'v255'. Mar 12 01:23:09.262636 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 12 01:23:09.280374 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:23:09.281066 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:23:09.290883 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:23:09.298942 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:23:09.312239 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:23:09.323767 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:23:09.332941 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 12 01:23:09.340461 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:23:09.345752 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 12 01:23:09.358359 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:23:09.367874 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Mar 12 01:23:09.376436 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:23:09.376702 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:23:09.384799 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:23:09.385195 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:23:09.404823 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:23:09.405212 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:23:09.411443 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 12 01:23:09.436843 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 12 01:23:09.463405 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1379) Mar 12 01:23:09.483631 systemd-resolved[1344]: Positive Trust Anchors: Mar 12 01:23:09.483661 systemd-resolved[1344]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 12 01:23:09.483708 systemd-resolved[1344]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 12 01:23:09.490614 systemd-resolved[1344]: Defaulting to hostname 'linux'. Mar 12 01:23:09.493490 systemd[1]: Finished ensure-sysext.service. Mar 12 01:23:09.502458 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 01:23:09.519908 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 12 01:23:09.523470 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 01:23:09.531327 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:23:09.531621 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:23:09.543155 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:23:09.621107 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 01:23:09.630090 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:23:09.646662 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:23:09.653776 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:23:09.661098 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 12 01:23:09.681633 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 12 01:23:09.687755 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
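The trust-anchor dump above is systemd-resolved loading its built-in DNSSEC root anchor (the ". IN DS 20326 8 2 ..." record is the root zone KSK-2017 DS) plus negative anchors for private and special-use zones that must never validate. Resolver state on a live system can be checked with:

    resolvectl status               # per-link DNS servers and DNSSEC mode
    resolvectl query example.com    # hypothetical lookup to exercise resolution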
Mar 12 01:23:09.687807 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:23:09.691043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:23:09.691504 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:23:09.701880 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 12 01:23:09.702477 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 01:23:09.708777 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:23:09.709802 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:23:09.717209 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:23:09.717705 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:23:09.725418 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 12 01:23:09.769548 kernel: ACPI: button: Power Button [PWRF] Mar 12 01:23:09.808862 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 12 01:23:10.280218 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 12 01:23:10.309077 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 12 01:23:10.312756 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 12 01:23:10.315609 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 12 01:23:10.315378 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 01:23:10.315462 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 01:23:10.524347 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 12 01:23:10.542919 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 12 01:23:10.576905 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:23:10.601655 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 01:23:10.602099 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:23:10.627060 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:23:10.676705 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 12 01:23:10.684140 systemd[1]: Reached target time-set.target - System Time Set. Mar 12 01:23:10.686515 systemd-networkd[1408]: lo: Link UP Mar 12 01:23:10.686524 systemd-networkd[1408]: lo: Gained carrier Mar 12 01:23:10.690056 systemd-networkd[1408]: Enumeration completed Mar 12 01:23:10.692037 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:23:10.692087 systemd-networkd[1408]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 01:23:10.695205 systemd-networkd[1408]: eth0: Link UP Mar 12 01:23:10.695220 systemd-networkd[1408]: eth0: Gained carrier Mar 12 01:23:10.695248 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 12 01:23:10.696187 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 01:23:10.706089 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 12 01:23:10.717553 systemd[1]: Reached target network.target - Network. Mar 12 01:23:10.777646 systemd-networkd[1408]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 12 01:23:10.780759 systemd-timesyncd[1409]: Network configuration changed, trying to establish connection. Mar 12 01:23:10.782927 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 12 01:23:10.801588 systemd-timesyncd[1409]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 12 01:23:10.801720 systemd-timesyncd[1409]: Initial clock synchronization to Thu 2026-03-12 01:23:10.924918 UTC. Mar 12 01:23:11.569670 kernel: mousedev: PS/2 mouse device common for all mice Mar 12 01:23:11.618030 kernel: kvm_amd: TSC scaling supported Mar 12 01:23:11.618187 kernel: kvm_amd: Nested Virtualization enabled Mar 12 01:23:11.618220 kernel: kvm_amd: Nested Paging enabled Mar 12 01:23:11.618244 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 12 01:23:11.625408 kernel: kvm_amd: PMU virtualization is disabled Mar 12 01:23:11.641649 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:23:11.762534 kernel: EDAC MC: Ver: 3.0.0 Mar 12 01:23:11.805550 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 12 01:23:11.825817 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 12 01:23:11.860422 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 12 01:23:11.964588 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 12 01:23:11.975639 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 12 01:23:11.981549 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 01:23:11.988596 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 12 01:23:11.995376 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 12 01:23:12.003010 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 12 01:23:12.009987 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 12 01:23:12.016209 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 12 01:23:12.021543 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 12 01:23:12.021613 systemd[1]: Reached target paths.target - Path Units. Mar 12 01:23:12.025677 systemd[1]: Reached target timers.target - Timer Units. Mar 12 01:23:12.030946 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 12 01:23:12.038534 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 12 01:23:12.056226 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 12 01:23:12.065823 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 12 01:23:12.072474 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 12 01:23:12.078365 systemd[1]: Reached target sockets.target - Socket Units. 
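With the DHCPv4 lease on eth0 in place, systemd-timesyncd contacts the NTP server advertised by the gateway (10.0.0.1:123 above) and performs the initial clock synchronization. Both halves can be verified at runtime:

    networkctl status eth0          # lease, gateway, and the matching .network file
    timedatectl timesync-status     # server contacted, offset, poll interval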
Mar 12 01:23:12.084091 systemd[1]: Reached target basic.target - Basic System. Mar 12 01:23:12.085743 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 12 01:23:12.090022 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 12 01:23:12.090147 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 12 01:23:12.092918 systemd[1]: Starting containerd.service - containerd container runtime... Mar 12 01:23:12.101523 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 12 01:23:12.110531 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 12 01:23:12.122761 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 12 01:23:12.127713 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 12 01:23:12.130691 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 12 01:23:12.139867 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 12 01:23:12.147616 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 12 01:23:12.158687 jq[1441]: false Mar 12 01:23:12.173719 systemd-networkd[1408]: eth0: Gained IPv6LL Mar 12 01:23:12.188041 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 12 01:23:12.201509 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 12 01:23:12.210069 extend-filesystems[1442]: Found loop3 Mar 12 01:23:12.210069 extend-filesystems[1442]: Found loop4 Mar 12 01:23:12.210069 extend-filesystems[1442]: Found loop5 Mar 12 01:23:12.210069 extend-filesystems[1442]: Found sr0 Mar 12 01:23:12.210069 extend-filesystems[1442]: Found vda Mar 12 01:23:12.210069 extend-filesystems[1442]: Found vda1 Mar 12 01:23:12.210069 extend-filesystems[1442]: Found vda2 Mar 12 01:23:12.210069 extend-filesystems[1442]: Found vda3 Mar 12 01:23:12.210069 extend-filesystems[1442]: Found usr Mar 12 01:23:12.210069 extend-filesystems[1442]: Found vda4 Mar 12 01:23:12.210069 extend-filesystems[1442]: Found vda6 Mar 12 01:23:12.210069 extend-filesystems[1442]: Found vda7 Mar 12 01:23:12.210069 extend-filesystems[1442]: Found vda9 Mar 12 01:23:12.210069 extend-filesystems[1442]: Checking size of /dev/vda9 Mar 12 01:23:12.407846 dbus-daemon[1440]: [system] SELinux support is enabled Mar 12 01:23:12.211450 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 12 01:23:12.478516 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 12 01:23:12.478576 extend-filesystems[1442]: Resized partition /dev/vda9 Mar 12 01:23:12.212020 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 12 01:23:12.505736 extend-filesystems[1480]: resize2fs 1.47.1 (20-May-2024) Mar 12 01:23:12.522028 update_engine[1457]: I20260312 01:23:12.463056 1457 main.cc:92] Flatcar Update Engine starting Mar 12 01:23:12.522028 update_engine[1457]: I20260312 01:23:12.492204 1457 update_check_scheduler.cc:74] Next update check in 3m39s Mar 12 01:23:12.215631 systemd[1]: Starting update-engine.service - Update Engine... 
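The extend-filesystems run above enumerates the block devices, finds /dev/vda9 backing /, and triggers an online ext4 grow from 553472 to 1864699 4k blocks. The manual equivalent is a single call (sketch; Flatcar's service wraps this with partition-growing logic first):

    resize2fs /dev/vda9    # ext4 grows online while mounted on /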
Mar 12 01:23:12.228712 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 12 01:23:12.531624 jq[1459]: true Mar 12 01:23:12.238984 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 12 01:23:12.532087 tar[1462]: linux-amd64/LICENSE Mar 12 01:23:12.532087 tar[1462]: linux-amd64/helm Mar 12 01:23:12.266843 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 12 01:23:12.267174 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 12 01:23:12.267905 systemd[1]: motdgen.service: Deactivated successfully. Mar 12 01:23:12.547712 jq[1473]: true Mar 12 01:23:12.269789 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 12 01:23:12.287885 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 12 01:23:12.566364 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 12 01:23:12.290409 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 12 01:23:12.308127 systemd[1]: Reached target network-online.target - Network is Online. Mar 12 01:23:12.325138 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 12 01:23:12.349599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:23:12.379893 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 12 01:23:12.408176 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 12 01:23:12.415445 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 12 01:23:12.415477 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 12 01:23:12.422489 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 12 01:23:12.422514 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 12 01:23:12.448223 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 12 01:23:12.451623 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 12 01:23:12.489528 systemd[1]: Started update-engine.service - Update Engine. Mar 12 01:23:12.525035 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 12 01:23:12.531485 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 12 01:23:12.532717 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 12 01:23:12.720245 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 12 01:23:12.705063 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 12 01:23:12.727433 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 12 01:23:12.753755 systemd-logind[1453]: Watching system buttons on /dev/input/event1 (Power Button) Mar 12 01:23:12.753851 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 12 01:23:12.754638 systemd-logind[1453]: New seat seat0. 
Mar 12 01:23:12.755906 extend-filesystems[1480]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 12 01:23:12.755906 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 12 01:23:12.755906 extend-filesystems[1480]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 12 01:23:12.789140 extend-filesystems[1442]: Resized filesystem in /dev/vda9 Mar 12 01:23:12.764693 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 12 01:23:12.766015 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 12 01:23:12.789519 systemd[1]: Started systemd-logind.service - User Login Management. Mar 12 01:23:12.816406 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1379) Mar 12 01:23:12.907555 bash[1515]: Updated "/home/core/.ssh/authorized_keys" Mar 12 01:23:12.920870 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 12 01:23:12.928048 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 12 01:23:12.944719 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 12 01:23:12.952517 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 12 01:23:13.026182 systemd[1]: issuegen.service: Deactivated successfully. Mar 12 01:23:13.026757 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 12 01:23:13.188872 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 12 01:23:13.250903 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 12 01:23:13.480072 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 12 01:23:13.506092 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 12 01:23:13.666515 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 12 01:23:13.682508 systemd[1]: Reached target getty.target - Login Prompts. Mar 12 01:23:15.153749 containerd[1475]: time="2026-03-12T01:23:15.152646519Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 12 01:23:15.244502 containerd[1475]: time="2026-03-12T01:23:15.243825276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:23:15.250998 containerd[1475]: time="2026-03-12T01:23:15.250945626Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:23:15.253415 containerd[1475]: time="2026-03-12T01:23:15.251439878Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 12 01:23:15.253415 containerd[1475]: time="2026-03-12T01:23:15.251479658Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 12 01:23:15.253415 containerd[1475]: time="2026-03-12T01:23:15.251846476Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 12 01:23:15.253415 containerd[1475]: time="2026-03-12T01:23:15.251868336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Mar 12 01:23:15.253415 containerd[1475]: time="2026-03-12T01:23:15.252037444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:23:15.253415 containerd[1475]: time="2026-03-12T01:23:15.252056634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:23:15.253415 containerd[1475]: time="2026-03-12T01:23:15.252549958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:23:15.253415 containerd[1475]: time="2026-03-12T01:23:15.252568371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 12 01:23:15.253415 containerd[1475]: time="2026-03-12T01:23:15.252583267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:23:15.253415 containerd[1475]: time="2026-03-12T01:23:15.252593366Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 12 01:23:15.253415 containerd[1475]: time="2026-03-12T01:23:15.252814147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:23:15.253732 containerd[1475]: time="2026-03-12T01:23:15.253711822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:23:15.254005 containerd[1475]: time="2026-03-12T01:23:15.253982018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:23:15.254069 containerd[1475]: time="2026-03-12T01:23:15.254055833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 12 01:23:15.254514 containerd[1475]: time="2026-03-12T01:23:15.254491326Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 12 01:23:15.254696 containerd[1475]: time="2026-03-12T01:23:15.254675814Z" level=info msg="metadata content store policy set" policy=shared Mar 12 01:23:15.264590 containerd[1475]: time="2026-03-12T01:23:15.264549683Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 12 01:23:15.264957 containerd[1475]: time="2026-03-12T01:23:15.264840257Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 12 01:23:15.265098 containerd[1475]: time="2026-03-12T01:23:15.265076972Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 12 01:23:15.265193 containerd[1475]: time="2026-03-12T01:23:15.265172386Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 12 01:23:15.265396 containerd[1475]: time="2026-03-12T01:23:15.265368544Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Mar 12 01:23:15.265838 containerd[1475]: time="2026-03-12T01:23:15.265815779Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 12 01:23:15.267623 containerd[1475]: time="2026-03-12T01:23:15.267602129Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 12 01:23:15.268005 containerd[1475]: time="2026-03-12T01:23:15.267973856Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 12 01:23:15.268086 containerd[1475]: time="2026-03-12T01:23:15.268069915Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 12 01:23:15.268138 containerd[1475]: time="2026-03-12T01:23:15.268124933Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 12 01:23:15.268187 containerd[1475]: time="2026-03-12T01:23:15.268174490Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 12 01:23:15.268339 containerd[1475]: time="2026-03-12T01:23:15.268239567Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 12 01:23:15.268445 containerd[1475]: time="2026-03-12T01:23:15.268427754Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 12 01:23:15.268557 containerd[1475]: time="2026-03-12T01:23:15.268542437Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 12 01:23:15.268608 containerd[1475]: time="2026-03-12T01:23:15.268596326Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 12 01:23:15.268655 containerd[1475]: time="2026-03-12T01:23:15.268643727Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 12 01:23:15.268777 containerd[1475]: time="2026-03-12T01:23:15.268748452Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 12 01:23:15.268849 containerd[1475]: time="2026-03-12T01:23:15.268834724Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 12 01:23:15.268923 containerd[1475]: time="2026-03-12T01:23:15.268908531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 12 01:23:15.268976 containerd[1475]: time="2026-03-12T01:23:15.268962783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 12 01:23:15.269089 containerd[1475]: time="2026-03-12T01:23:15.269058881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 12 01:23:15.269173 containerd[1475]: time="2026-03-12T01:23:15.269157168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 12 01:23:15.269237 containerd[1475]: time="2026-03-12T01:23:15.269223292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 12 01:23:15.269574 containerd[1475]: time="2026-03-12T01:23:15.269545322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Mar 12 01:23:15.269653 containerd[1475]: time="2026-03-12T01:23:15.269638136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 12 01:23:15.269706 containerd[1475]: time="2026-03-12T01:23:15.269691864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 12 01:23:15.269812 containerd[1475]: time="2026-03-12T01:23:15.269788839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 12 01:23:15.269874 containerd[1475]: time="2026-03-12T01:23:15.269860650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 12 01:23:15.269968 containerd[1475]: time="2026-03-12T01:23:15.269945753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 12 01:23:15.270116 containerd[1475]: time="2026-03-12T01:23:15.270097505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 12 01:23:15.270174 containerd[1475]: time="2026-03-12T01:23:15.270161846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 12 01:23:15.270342 containerd[1475]: time="2026-03-12T01:23:15.270325149Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 12 01:23:15.270526 containerd[1475]: time="2026-03-12T01:23:15.270507298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 12 01:23:15.270597 containerd[1475]: time="2026-03-12T01:23:15.270583382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 12 01:23:15.270644 containerd[1475]: time="2026-03-12T01:23:15.270631627Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 12 01:23:15.271377 containerd[1475]: time="2026-03-12T01:23:15.270878541Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 12 01:23:15.271468 containerd[1475]: time="2026-03-12T01:23:15.271448241Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 12 01:23:15.271586 containerd[1475]: time="2026-03-12T01:23:15.271569960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 12 01:23:15.271639 containerd[1475]: time="2026-03-12T01:23:15.271622327Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 12 01:23:15.271726 containerd[1475]: time="2026-03-12T01:23:15.271700597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 12 01:23:15.271811 containerd[1475]: time="2026-03-12T01:23:15.271789309Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 12 01:23:15.271912 containerd[1475]: time="2026-03-12T01:23:15.271889267Z" level=info msg="NRI interface is disabled by configuration." Mar 12 01:23:15.271985 containerd[1475]: time="2026-03-12T01:23:15.271967407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 12 01:23:15.273536 containerd[1475]: time="2026-03-12T01:23:15.273419874Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 12 01:23:15.276417 containerd[1475]: time="2026-03-12T01:23:15.274484799Z" level=info msg="Connect containerd service" Mar 12 01:23:15.276417 containerd[1475]: time="2026-03-12T01:23:15.274815798Z" level=info msg="using legacy CRI server" Mar 12 01:23:15.276417 containerd[1475]: time="2026-03-12T01:23:15.274831349Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 12 01:23:15.276417 containerd[1475]: time="2026-03-12T01:23:15.275636432Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 12 01:23:15.279027 containerd[1475]: time="2026-03-12T01:23:15.278709379Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 12 01:23:15.281464 
containerd[1475]: time="2026-03-12T01:23:15.280059158Z" level=info msg="Start subscribing containerd event" Mar 12 01:23:15.281896 containerd[1475]: time="2026-03-12T01:23:15.281810676Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 12 01:23:15.282152 containerd[1475]: time="2026-03-12T01:23:15.281937948Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 12 01:23:15.284098 containerd[1475]: time="2026-03-12T01:23:15.284025435Z" level=info msg="Start recovering state" Mar 12 01:23:15.284339 containerd[1475]: time="2026-03-12T01:23:15.284209690Z" level=info msg="Start event monitor" Mar 12 01:23:15.284371 containerd[1475]: time="2026-03-12T01:23:15.284339935Z" level=info msg="Start snapshots syncer" Mar 12 01:23:15.284487 containerd[1475]: time="2026-03-12T01:23:15.284385954Z" level=info msg="Start cni network conf syncer for default" Mar 12 01:23:15.284487 containerd[1475]: time="2026-03-12T01:23:15.284434432Z" level=info msg="Start streaming server" Mar 12 01:23:15.284882 systemd[1]: Started containerd.service - containerd container runtime. Mar 12 01:23:15.292694 containerd[1475]: time="2026-03-12T01:23:15.285157789Z" level=info msg="containerd successfully booted in 0.135515s" Mar 12 01:23:15.773988 tar[1462]: linux-amd64/README.md Mar 12 01:23:16.050953 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 12 01:23:17.480986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:23:17.486521 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 12 01:23:17.491501 systemd[1]: Startup finished in 4.235s (kernel) + 12.877s (initrd) + 13.666s (userspace) = 30.779s. Mar 12 01:23:17.500957 (kubelet)[1553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:23:19.765753 kubelet[1553]: E0312 01:23:19.765240 1553 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:23:19.771518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:23:19.771873 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:23:19.772819 systemd[1]: kubelet.service: Consumed 6.185s CPU time. Mar 12 01:23:21.446954 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 12 01:23:21.449544 systemd[1]: Started sshd@0-10.0.0.53:22-10.0.0.1:36902.service - OpenSSH per-connection server daemon (10.0.0.1:36902). Mar 12 01:23:21.554025 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 36902 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:23:21.557937 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:23:21.573012 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 12 01:23:21.583712 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 12 01:23:21.586186 systemd-logind[1453]: New session 1 of user core. Mar 12 01:23:21.635488 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 12 01:23:21.654841 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 12 01:23:21.659686 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 12 01:23:21.865004 systemd[1570]: Queued start job for default target default.target. Mar 12 01:23:21.877587 systemd[1570]: Created slice app.slice - User Application Slice. Mar 12 01:23:21.877656 systemd[1570]: Reached target paths.target - Paths. Mar 12 01:23:21.877672 systemd[1570]: Reached target timers.target - Timers. Mar 12 01:23:21.880421 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 12 01:23:21.899442 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 12 01:23:21.899724 systemd[1570]: Reached target sockets.target - Sockets. Mar 12 01:23:21.899802 systemd[1570]: Reached target basic.target - Basic System. Mar 12 01:23:21.899869 systemd[1570]: Reached target default.target - Main User Target. Mar 12 01:23:21.899933 systemd[1570]: Startup finished in 229ms. Mar 12 01:23:21.900181 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 12 01:23:21.914538 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 12 01:23:21.991241 systemd[1]: Started sshd@1-10.0.0.53:22-10.0.0.1:49844.service - OpenSSH per-connection server daemon (10.0.0.1:49844). Mar 12 01:23:22.115760 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 49844 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:23:22.118654 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:23:22.162213 systemd-logind[1453]: New session 2 of user core. Mar 12 01:23:22.176720 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 12 01:23:22.314771 sshd[1581]: pam_unix(sshd:session): session closed for user core Mar 12 01:23:22.420679 systemd[1]: sshd@1-10.0.0.53:22-10.0.0.1:49844.service: Deactivated successfully. Mar 12 01:23:22.424125 systemd[1]: session-2.scope: Deactivated successfully. Mar 12 01:23:22.426603 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit. Mar 12 01:23:22.439917 systemd[1]: Started sshd@2-10.0.0.53:22-10.0.0.1:49856.service - OpenSSH per-connection server daemon (10.0.0.1:49856). Mar 12 01:23:22.441117 systemd-logind[1453]: Removed session 2. Mar 12 01:23:22.596692 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 49856 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:23:22.600077 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:23:22.609012 systemd-logind[1453]: New session 3 of user core. Mar 12 01:23:22.623655 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 12 01:23:22.679844 sshd[1588]: pam_unix(sshd:session): session closed for user core Mar 12 01:23:22.692965 systemd[1]: sshd@2-10.0.0.53:22-10.0.0.1:49856.service: Deactivated successfully. Mar 12 01:23:22.695526 systemd[1]: session-3.scope: Deactivated successfully. Mar 12 01:23:22.697462 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit. Mar 12 01:23:22.733188 systemd[1]: Started sshd@3-10.0.0.53:22-10.0.0.1:49872.service - OpenSSH per-connection server daemon (10.0.0.1:49872). Mar 12 01:23:22.736730 systemd-logind[1453]: Removed session 3. 
Mar 12 01:23:22.791396 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 49872 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:23:22.795828 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:23:22.805603 systemd-logind[1453]: New session 4 of user core. Mar 12 01:23:22.816131 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 12 01:23:22.936451 sshd[1596]: pam_unix(sshd:session): session closed for user core Mar 12 01:23:22.955045 systemd[1]: sshd@3-10.0.0.53:22-10.0.0.1:49872.service: Deactivated successfully. Mar 12 01:23:22.957693 systemd[1]: session-4.scope: Deactivated successfully. Mar 12 01:23:22.960235 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit. Mar 12 01:23:22.975973 systemd[1]: Started sshd@4-10.0.0.53:22-10.0.0.1:49876.service - OpenSSH per-connection server daemon (10.0.0.1:49876). Mar 12 01:23:22.978005 systemd-logind[1453]: Removed session 4. Mar 12 01:23:23.020738 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 49876 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:23:23.023598 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:23:23.031993 systemd-logind[1453]: New session 5 of user core. Mar 12 01:23:23.041674 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 12 01:23:23.132049 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 12 01:23:23.132593 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:23:23.158957 sudo[1606]: pam_unix(sudo:session): session closed for user root Mar 12 01:23:23.163081 sshd[1603]: pam_unix(sshd:session): session closed for user core Mar 12 01:23:23.175663 systemd[1]: sshd@4-10.0.0.53:22-10.0.0.1:49876.service: Deactivated successfully. Mar 12 01:23:23.179144 systemd[1]: session-5.scope: Deactivated successfully. Mar 12 01:23:23.182015 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit. Mar 12 01:23:23.195500 systemd[1]: Started sshd@5-10.0.0.53:22-10.0.0.1:49892.service - OpenSSH per-connection server daemon (10.0.0.1:49892). Mar 12 01:23:23.197607 systemd-logind[1453]: Removed session 5. Mar 12 01:23:23.245679 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 49892 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:23:23.248463 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:23:23.255715 systemd-logind[1453]: New session 6 of user core. Mar 12 01:23:23.281694 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 12 01:23:23.348439 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 12 01:23:23.348893 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:23:23.356205 sudo[1615]: pam_unix(sudo:session): session closed for user root Mar 12 01:23:23.365779 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 12 01:23:23.366195 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:23:23.394763 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 12 01:23:23.439430 auditctl[1618]: No rules Mar 12 01:23:23.441158 systemd[1]: audit-rules.service: Deactivated successfully. 
Mar 12 01:23:23.441702 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 12 01:23:23.495486 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 12 01:23:23.564730 augenrules[1636]: No rules Mar 12 01:23:23.566896 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 12 01:23:23.568675 sudo[1614]: pam_unix(sudo:session): session closed for user root Mar 12 01:23:23.571393 sshd[1611]: pam_unix(sshd:session): session closed for user core Mar 12 01:23:23.585842 systemd[1]: sshd@5-10.0.0.53:22-10.0.0.1:49892.service: Deactivated successfully. Mar 12 01:23:23.587894 systemd[1]: session-6.scope: Deactivated successfully. Mar 12 01:23:23.589688 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit. Mar 12 01:23:23.598832 systemd[1]: Started sshd@6-10.0.0.53:22-10.0.0.1:49896.service - OpenSSH per-connection server daemon (10.0.0.1:49896). Mar 12 01:23:23.600458 systemd-logind[1453]: Removed session 6. Mar 12 01:23:23.649802 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 49896 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:23:23.651522 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:23:23.658383 systemd-logind[1453]: New session 7 of user core. Mar 12 01:23:23.668458 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 12 01:23:23.728013 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 12 01:23:23.728568 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:23:25.830880 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 12 01:23:25.831425 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 12 01:23:27.648040 dockerd[1665]: time="2026-03-12T01:23:27.647570647Z" level=info msg="Starting up" Mar 12 01:23:27.967759 systemd[1]: var-lib-docker-metacopy\x2dcheck2232654836-merged.mount: Deactivated successfully. Mar 12 01:23:28.020593 dockerd[1665]: time="2026-03-12T01:23:28.020479756Z" level=info msg="Loading containers: start." Mar 12 01:23:28.241512 kernel: Initializing XFRM netlink socket Mar 12 01:23:28.411338 systemd-networkd[1408]: docker0: Link UP Mar 12 01:23:28.441156 dockerd[1665]: time="2026-03-12T01:23:28.440752675Z" level=info msg="Loading containers: done." Mar 12 01:23:28.492402 dockerd[1665]: time="2026-03-12T01:23:28.492223051Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 12 01:23:28.492612 dockerd[1665]: time="2026-03-12T01:23:28.492441570Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 12 01:23:28.492612 dockerd[1665]: time="2026-03-12T01:23:28.492590391Z" level=info msg="Daemon has completed initialization" Mar 12 01:23:28.579482 dockerd[1665]: time="2026-03-12T01:23:28.579110963Z" level=info msg="API listen on /run/docker.sock" Mar 12 01:23:28.580878 systemd[1]: Started docker.service - Docker Application Container Engine. 
Mar 12 01:23:29.740194 containerd[1475]: time="2026-03-12T01:23:29.739652704Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 12 01:23:30.007032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 12 01:23:30.045115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:23:30.563130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2479398114.mount: Deactivated successfully. Mar 12 01:23:30.619772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:23:30.626847 (kubelet)[1831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:23:30.777310 kubelet[1831]: E0312 01:23:30.777046 1831 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:23:30.783085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:23:30.783412 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:23:32.471726 containerd[1475]: time="2026-03-12T01:23:32.471063557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:32.471726 containerd[1475]: time="2026-03-12T01:23:32.471574181Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 12 01:23:32.474223 containerd[1475]: time="2026-03-12T01:23:32.473753892Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:32.478077 containerd[1475]: time="2026-03-12T01:23:32.478019048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:32.480377 containerd[1475]: time="2026-03-12T01:23:32.480195724Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 2.740384147s" Mar 12 01:23:32.480377 containerd[1475]: time="2026-03-12T01:23:32.480332382Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 12 01:23:32.485132 containerd[1475]: time="2026-03-12T01:23:32.485089684Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 12 01:23:34.341183 containerd[1475]: time="2026-03-12T01:23:34.340431722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:34.341183 containerd[1475]: time="2026-03-12T01:23:34.341124021Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes 
read=21165823" Mar 12 01:23:34.344588 containerd[1475]: time="2026-03-12T01:23:34.343087906Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:34.348457 containerd[1475]: time="2026-03-12T01:23:34.348235672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:34.350143 containerd[1475]: time="2026-03-12T01:23:34.350065512Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.864903242s" Mar 12 01:23:34.350143 containerd[1475]: time="2026-03-12T01:23:34.350119377Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 12 01:23:34.354546 containerd[1475]: time="2026-03-12T01:23:34.354511188Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 12 01:23:35.839678 containerd[1475]: time="2026-03-12T01:23:35.839036433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:35.841548 containerd[1475]: time="2026-03-12T01:23:35.840364606Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 12 01:23:35.842223 containerd[1475]: time="2026-03-12T01:23:35.842129648Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:35.847530 containerd[1475]: time="2026-03-12T01:23:35.847414312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:35.849220 containerd[1475]: time="2026-03-12T01:23:35.848995502Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.494342373s" Mar 12 01:23:35.849220 containerd[1475]: time="2026-03-12T01:23:35.849055011Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 12 01:23:35.853371 containerd[1475]: time="2026-03-12T01:23:35.853194618Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 12 01:23:37.309570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1226951771.mount: Deactivated successfully. 
Mar 12 01:23:38.936612 containerd[1475]: time="2026-03-12T01:23:38.935077620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:38.936612 containerd[1475]: time="2026-03-12T01:23:38.936397115Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 12 01:23:38.939870 containerd[1475]: time="2026-03-12T01:23:38.938720026Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:38.942596 containerd[1475]: time="2026-03-12T01:23:38.942494263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:38.943329 containerd[1475]: time="2026-03-12T01:23:38.943233922Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 3.089995724s" Mar 12 01:23:38.943414 containerd[1475]: time="2026-03-12T01:23:38.943330032Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 12 01:23:38.949467 containerd[1475]: time="2026-03-12T01:23:38.948427319Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 12 01:23:39.428641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1840774528.mount: Deactivated successfully. Mar 12 01:23:41.017164 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 12 01:23:41.029708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:23:41.389833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:23:41.392148 (kubelet)[1964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:23:41.688482 kubelet[1964]: E0312 01:23:41.687592 1964 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:23:41.693987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:23:41.694393 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 12 01:23:41.828974 containerd[1475]: time="2026-03-12T01:23:41.828874901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:41.829739 containerd[1475]: time="2026-03-12T01:23:41.829650898Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 12 01:23:41.832610 containerd[1475]: time="2026-03-12T01:23:41.832529932Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:41.838383 containerd[1475]: time="2026-03-12T01:23:41.838231202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:41.839839 containerd[1475]: time="2026-03-12T01:23:41.839744677Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.891267804s" Mar 12 01:23:41.839839 containerd[1475]: time="2026-03-12T01:23:41.839823381Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 12 01:23:41.844372 containerd[1475]: time="2026-03-12T01:23:41.844228609Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 12 01:23:42.378754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1525718655.mount: Deactivated successfully. 
Mar 12 01:23:42.387368 containerd[1475]: time="2026-03-12T01:23:42.387088151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:42.388658 containerd[1475]: time="2026-03-12T01:23:42.388571630Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 12 01:23:42.390504 containerd[1475]: time="2026-03-12T01:23:42.390416832Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:42.395359 containerd[1475]: time="2026-03-12T01:23:42.395208718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:42.396498 containerd[1475]: time="2026-03-12T01:23:42.396398215Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 552.034585ms" Mar 12 01:23:42.396498 containerd[1475]: time="2026-03-12T01:23:42.396480625Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 12 01:23:42.400712 containerd[1475]: time="2026-03-12T01:23:42.400622578Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 12 01:23:42.872982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount347949666.mount: Deactivated successfully. Mar 12 01:23:45.708404 containerd[1475]: time="2026-03-12T01:23:45.707637899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:45.711139 containerd[1475]: time="2026-03-12T01:23:45.709311069Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 12 01:23:45.711216 containerd[1475]: time="2026-03-12T01:23:45.711145922Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:45.716363 containerd[1475]: time="2026-03-12T01:23:45.715979367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:23:45.717641 containerd[1475]: time="2026-03-12T01:23:45.717560724Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 3.316855557s" Mar 12 01:23:45.717641 containerd[1475]: time="2026-03-12T01:23:45.717590956Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 12 01:23:49.944137 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 12 01:23:49.958655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:23:49.998327 systemd[1]: Reloading requested from client PID 2068 ('systemctl') (unit session-7.scope)... Mar 12 01:23:49.998359 systemd[1]: Reloading... Mar 12 01:23:50.287032 zram_generator::config[2111]: No configuration found. Mar 12 01:23:50.702967 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:23:50.815950 systemd[1]: Reloading finished in 817 ms. Mar 12 01:23:50.902214 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 12 01:23:50.902417 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 12 01:23:50.902841 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:23:50.912741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:23:51.104522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:23:51.111912 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 01:23:51.236873 kubelet[2155]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 01:23:51.236873 kubelet[2155]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:23:51.237423 kubelet[2155]: I0312 01:23:51.236902 2155 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 01:23:51.841844 kubelet[2155]: I0312 01:23:51.841592 2155 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 12 01:23:51.841844 kubelet[2155]: I0312 01:23:51.841775 2155 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 01:23:51.842572 kubelet[2155]: I0312 01:23:51.842166 2155 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 12 01:23:51.842572 kubelet[2155]: I0312 01:23:51.842497 2155 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 12 01:23:51.842940 kubelet[2155]: I0312 01:23:51.842832 2155 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 01:23:51.883843 kubelet[2155]: E0312 01:23:51.883758 2155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 01:23:51.884594 kubelet[2155]: I0312 01:23:51.884453 2155 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:23:51.895990 kubelet[2155]: E0312 01:23:51.895904 2155 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 01:23:51.895990 kubelet[2155]: I0312 01:23:51.895977 2155 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 12 01:23:51.907642 kubelet[2155]: I0312 01:23:51.907517 2155 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 12 01:23:51.908744 kubelet[2155]: I0312 01:23:51.908625 2155 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 01:23:51.908948 kubelet[2155]: I0312 01:23:51.908684 2155 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 01:23:51.909455 kubelet[2155]: I0312 01:23:51.908966 2155 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 01:23:51.909455 kubelet[2155]: I0312 01:23:51.908979 2155 container_manager_linux.go:306] "Creating device plugin manager" Mar 12 01:23:51.909455 kubelet[2155]: I0312 01:23:51.909166 2155 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) 
manager" Mar 12 01:23:51.911372 kubelet[2155]: I0312 01:23:51.911231 2155 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:23:51.911908 kubelet[2155]: I0312 01:23:51.911854 2155 kubelet.go:475] "Attempting to sync node with API server" Mar 12 01:23:51.911968 kubelet[2155]: I0312 01:23:51.911920 2155 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 01:23:51.912010 kubelet[2155]: I0312 01:23:51.911986 2155 kubelet.go:387] "Adding apiserver pod source" Mar 12 01:23:51.912047 kubelet[2155]: I0312 01:23:51.912041 2155 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 01:23:51.913558 kubelet[2155]: E0312 01:23:51.913469 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 01:23:51.913558 kubelet[2155]: E0312 01:23:51.913494 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 01:23:51.915933 kubelet[2155]: I0312 01:23:51.915892 2155 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 01:23:51.919319 kubelet[2155]: I0312 01:23:51.917428 2155 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 01:23:51.919319 kubelet[2155]: I0312 01:23:51.917477 2155 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 12 01:23:51.919319 kubelet[2155]: W0312 01:23:51.917672 2155 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 12 01:23:51.924186 kubelet[2155]: I0312 01:23:51.924131 2155 server.go:1262] "Started kubelet" Mar 12 01:23:51.924599 kubelet[2155]: I0312 01:23:51.924495 2155 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 01:23:51.925565 kubelet[2155]: I0312 01:23:51.925500 2155 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 01:23:51.925698 kubelet[2155]: I0312 01:23:51.925628 2155 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 12 01:23:51.926118 kubelet[2155]: I0312 01:23:51.926025 2155 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 01:23:51.926883 kubelet[2155]: I0312 01:23:51.926820 2155 server.go:310] "Adding debug handlers to kubelet server" Mar 12 01:23:51.927147 kubelet[2155]: I0312 01:23:51.927078 2155 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 01:23:51.930083 kubelet[2155]: I0312 01:23:51.928830 2155 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 01:23:51.933462 kubelet[2155]: E0312 01:23:51.932986 2155 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:23:51.933462 kubelet[2155]: I0312 01:23:51.933083 2155 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 12 01:23:51.933612 kubelet[2155]: I0312 01:23:51.933493 2155 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 01:23:51.933715 kubelet[2155]: I0312 01:23:51.933658 2155 reconciler.go:29] "Reconciler: start to sync state" Mar 12 01:23:51.933864 kubelet[2155]: E0312 01:23:51.933777 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="200ms" Mar 12 01:23:51.934198 kubelet[2155]: E0312 01:23:51.934119 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 01:23:51.934421 kubelet[2155]: E0312 01:23:51.933083 2155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.53:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.53:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189bf3853f9ff85f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-12 01:23:51.924070495 +0000 UTC m=+0.792266907,LastTimestamp:2026-03-12 01:23:51.924070495 +0000 UTC m=+0.792266907,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 12 01:23:51.934775 kubelet[2155]: I0312 01:23:51.934618 2155 factory.go:223] Registration of the systemd container factory successfully Mar 12 01:23:51.934775 kubelet[2155]: I0312 01:23:51.934743 2155 factory.go:221] Registration of the 
crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 01:23:51.935709 kubelet[2155]: E0312 01:23:51.935609 2155 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 01:23:51.936527 kubelet[2155]: I0312 01:23:51.936505 2155 factory.go:223] Registration of the containerd container factory successfully Mar 12 01:23:51.961975 kubelet[2155]: I0312 01:23:51.961880 2155 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 01:23:51.961975 kubelet[2155]: I0312 01:23:51.961932 2155 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 01:23:51.961975 kubelet[2155]: I0312 01:23:51.961957 2155 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:23:51.965122 kubelet[2155]: I0312 01:23:51.965053 2155 policy_none.go:49] "None policy: Start" Mar 12 01:23:51.965196 kubelet[2155]: I0312 01:23:51.965156 2155 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 12 01:23:51.965408 kubelet[2155]: I0312 01:23:51.965220 2155 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 01:23:51.968956 kubelet[2155]: I0312 01:23:51.968876 2155 policy_none.go:47] "Start" Mar 12 01:23:51.977096 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 12 01:23:51.983163 kubelet[2155]: I0312 01:23:51.983048 2155 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 12 01:23:51.986415 kubelet[2155]: I0312 01:23:51.985818 2155 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 12 01:23:51.986415 kubelet[2155]: I0312 01:23:51.985897 2155 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 12 01:23:51.986415 kubelet[2155]: I0312 01:23:51.985997 2155 kubelet.go:2428] "Starting kubelet main sync loop" Mar 12 01:23:51.986415 kubelet[2155]: E0312 01:23:51.986084 2155 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 01:23:51.989734 kubelet[2155]: E0312 01:23:51.989664 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 01:23:51.997600 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 12 01:23:52.013215 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
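Every watch and CSR post so far fails identically with dial tcp 10.0.0.53:6443: connect: connection refused, which is expected at this point: the kube-apiserver the kubelet is dialing is itself one of the static pods it has not started yet. A quick reachability probe of that endpoint (a hypothetical diagnostic, not part of the boot flow) reproduces the error:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the apiserver endpoint the reflectors keep failing against.
        // Until the kube-apiserver static pod is up, this prints the same
        // "connect: connection refused" seen in the log.
        conn, err := net.DialTimeout("tcp", "10.0.0.53:6443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("apiserver TCP endpoint is up:", conn.RemoteAddr())
    }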
Mar 12 01:23:52.016740 kubelet[2155]: E0312 01:23:52.016012 2155 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 01:23:52.016896 kubelet[2155]: I0312 01:23:52.016809 2155 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 01:23:52.016945 kubelet[2155]: I0312 01:23:52.016870 2155 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 01:23:52.018503 kubelet[2155]: I0312 01:23:52.018435 2155 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 01:23:52.019876 kubelet[2155]: E0312 01:23:52.019818 2155 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 01:23:52.020073 kubelet[2155]: E0312 01:23:52.020033 2155 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 12 01:23:52.103123 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. Mar 12 01:23:52.120011 kubelet[2155]: I0312 01:23:52.119830 2155 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:23:52.120852 kubelet[2155]: E0312 01:23:52.120398 2155 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Mar 12 01:23:52.120852 kubelet[2155]: E0312 01:23:52.120726 2155 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:23:52.123933 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. 
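"Attempting to register node" corresponds to a plain create of a Node object against the API, retried until the apiserver answers. A client-go sketch of that call (the kubeconfig path below is the conventional kubeadm location and an assumption here; the real kubelet attaches many more labels and status fields):

    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        node := &v1.Node{ObjectMeta: metav1.ObjectMeta{Name: "localhost"}}
        _, err = client.CoreV1().Nodes().Create(context.TODO(), node, metav1.CreateOptions{})
        if err != nil {
            // While the apiserver is down this is the same "connection
            // refused" reported by kubelet_node_status.go above.
            fmt.Println("register failed:", err)
            return
        }
        fmt.Println("node registered")
    }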
Mar 12 01:23:52.134931 kubelet[2155]: E0312 01:23:52.134818 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="400ms" Mar 12 01:23:52.135105 kubelet[2155]: I0312 01:23:52.135069 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:23:52.135151 kubelet[2155]: I0312 01:23:52.135118 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:23:52.135184 kubelet[2155]: I0312 01:23:52.135145 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:23:52.135184 kubelet[2155]: I0312 01:23:52.135173 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0615b161db6e7302f4654e0a189e6aa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a0615b161db6e7302f4654e0a189e6aa\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:23:52.135353 kubelet[2155]: I0312 01:23:52.135196 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:23:52.136114 kubelet[2155]: I0312 01:23:52.135243 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 12 01:23:52.136182 kubelet[2155]: I0312 01:23:52.136134 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0615b161db6e7302f4654e0a189e6aa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0615b161db6e7302f4654e0a189e6aa\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:23:52.136269 kubelet[2155]: I0312 01:23:52.136227 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0615b161db6e7302f4654e0a189e6aa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0615b161db6e7302f4654e0a189e6aa\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:23:52.136410 kubelet[2155]: I0312 
01:23:52.136350 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:23:52.137978 kubelet[2155]: E0312 01:23:52.137813 2155 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:23:52.162049 systemd[1]: Created slice kubepods-burstable-poda0615b161db6e7302f4654e0a189e6aa.slice - libcontainer container kubepods-burstable-poda0615b161db6e7302f4654e0a189e6aa.slice. Mar 12 01:23:52.165974 kubelet[2155]: E0312 01:23:52.165789 2155 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:23:52.396149 kubelet[2155]: I0312 01:23:52.392540 2155 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:23:52.398190 kubelet[2155]: E0312 01:23:52.396855 2155 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Mar 12 01:23:52.443591 kubelet[2155]: E0312 01:23:52.442888 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:23:52.445627 kubelet[2155]: E0312 01:23:52.444598 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:23:52.450210 containerd[1475]: time="2026-03-12T01:23:52.450041105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 12 01:23:52.456238 containerd[1475]: time="2026-03-12T01:23:52.449990714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 12 01:23:52.479944 kubelet[2155]: E0312 01:23:52.479521 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:23:52.488329 containerd[1475]: time="2026-03-12T01:23:52.487528601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a0615b161db6e7302f4654e0a189e6aa,Namespace:kube-system,Attempt:0,}" Mar 12 01:23:52.537575 kubelet[2155]: E0312 01:23:52.537014 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="800ms" Mar 12 01:23:52.791541 kubelet[2155]: E0312 01:23:52.790940 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 
01:23:52.815541 kubelet[2155]: E0312 01:23:52.815012 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 01:23:52.829292 kubelet[2155]: I0312 01:23:52.829187 2155 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:23:52.839589 kubelet[2155]: E0312 01:23:52.839077 2155 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Mar 12 01:23:53.150445 kubelet[2155]: E0312 01:23:53.148099 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 01:23:53.222237 kubelet[2155]: E0312 01:23:53.220710 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 01:23:53.406770 kubelet[2155]: E0312 01:23:53.399235 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="1.6s" Mar 12 01:23:53.649884 kubelet[2155]: I0312 01:23:53.649463 2155 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:23:53.649884 kubelet[2155]: E0312 01:23:53.650213 2155 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Mar 12 01:23:53.907723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2503286460.mount: Deactivated successfully. 
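The recurring dns.go warning above means the host resolv.conf lists more nameservers than a pod's resolv.conf may carry: the kubelet keeps the first three (the classic glibc MAXNS limit) and logs the applied line, here 1.1.1.1 1.0.0.1 8.8.8.8. A minimal sketch of that truncation:

    package main

    import "fmt"

    // capNameservers keeps only the first maxNS entries, matching the
    // "some nameservers have been omitted" behaviour in the dns.go warnings.
    func capNameservers(ns []string) []string {
        const maxNS = 3 // resolv.conf limit the kubelet enforces (glibc MAXNS)
        if len(ns) <= maxNS {
            return ns
        }
        return ns[:maxNS]
    }

    func main() {
        // The fourth entry is a made-up example; the log only shows the
        // surviving three.
        host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
        fmt.Println(capNameservers(host)) // [1.1.1.1 1.0.0.1 8.8.8.8]
    }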
Mar 12 01:23:54.022574 kubelet[2155]: E0312 01:23:54.020081 2155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 01:23:54.103109 containerd[1475]: time="2026-03-12T01:23:54.101948327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:23:54.107051 containerd[1475]: time="2026-03-12T01:23:54.104113178Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 12 01:23:54.107116 containerd[1475]: time="2026-03-12T01:23:54.107055085Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:23:54.109376 containerd[1475]: time="2026-03-12T01:23:54.109210791Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:23:54.110349 containerd[1475]: time="2026-03-12T01:23:54.110177020Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:23:54.111114 containerd[1475]: time="2026-03-12T01:23:54.111021218Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 01:23:54.112814 containerd[1475]: time="2026-03-12T01:23:54.112702375Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 01:23:54.114796 containerd[1475]: time="2026-03-12T01:23:54.114676972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:23:54.116368 containerd[1475]: time="2026-03-12T01:23:54.116224582Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.665675269s" Mar 12 01:23:54.119744 containerd[1475]: time="2026-03-12T01:23:54.119606201Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.669080315s" Mar 12 01:23:54.121706 containerd[1475]: time="2026-03-12T01:23:54.121243005Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.633212994s" Mar 12 01:23:54.846699 containerd[1475]: time="2026-03-12T01:23:54.845732311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:23:54.846699 containerd[1475]: time="2026-03-12T01:23:54.846018323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:23:54.846699 containerd[1475]: time="2026-03-12T01:23:54.846040715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:23:54.846699 containerd[1475]: time="2026-03-12T01:23:54.846236736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:23:54.867600 containerd[1475]: time="2026-03-12T01:23:54.867054065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:23:54.874707 containerd[1475]: time="2026-03-12T01:23:54.874569084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:23:54.874707 containerd[1475]: time="2026-03-12T01:23:54.874674732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:23:54.875186 containerd[1475]: time="2026-03-12T01:23:54.875069901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:23:54.890372 containerd[1475]: time="2026-03-12T01:23:54.888143809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:23:54.890372 containerd[1475]: time="2026-03-12T01:23:54.889934084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:23:54.890372 containerd[1475]: time="2026-03-12T01:23:54.889950331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:23:54.890372 containerd[1475]: time="2026-03-12T01:23:54.890126765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:23:55.098718 kubelet[2155]: E0312 01:23:55.096410 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="3.2s" Mar 12 01:23:55.121538 systemd[1]: Started cri-containerd-9f0edddf356a0f8d42761bd54d5bd1d6072774c5028fa8d036dc7678e2410fa6.scope - libcontainer container 9f0edddf356a0f8d42761bd54d5bd1d6072774c5028fa8d036dc7678e2410fa6. Mar 12 01:23:55.131513 systemd[1]: Started cri-containerd-276fca2a95fb33c01ae2bf9bb6d2603d3267cb40ce3e9bd4e5d3643254190217.scope - libcontainer container 276fca2a95fb33c01ae2bf9bb6d2603d3267cb40ce3e9bd4e5d3643254190217. 
Mar 12 01:23:55.135596 systemd[1]: Started cri-containerd-b6dba6c00857e2e9b041cee8a5084495821931c533c9dedffed23c5662eda283.scope - libcontainer container b6dba6c00857e2e9b041cee8a5084495821931c533c9dedffed23c5662eda283. Mar 12 01:23:55.274738 kubelet[2155]: I0312 01:23:55.274457 2155 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:23:55.276099 kubelet[2155]: E0312 01:23:55.275880 2155 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Mar 12 01:23:55.313404 containerd[1475]: time="2026-03-12T01:23:55.313342290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6dba6c00857e2e9b041cee8a5084495821931c533c9dedffed23c5662eda283\"" Mar 12 01:23:55.321008 kubelet[2155]: E0312 01:23:55.320872 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:23:55.321361 containerd[1475]: time="2026-03-12T01:23:55.321239006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a0615b161db6e7302f4654e0a189e6aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"276fca2a95fb33c01ae2bf9bb6d2603d3267cb40ce3e9bd4e5d3643254190217\"" Mar 12 01:23:55.325932 kubelet[2155]: E0312 01:23:55.323490 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:23:55.333518 containerd[1475]: time="2026-03-12T01:23:55.333478937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f0edddf356a0f8d42761bd54d5bd1d6072774c5028fa8d036dc7678e2410fa6\"" Mar 12 01:23:55.335560 kubelet[2155]: E0312 01:23:55.335343 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:23:55.340416 kubelet[2155]: E0312 01:23:55.340388 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 01:23:55.384097 kubelet[2155]: E0312 01:23:55.383744 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 01:23:55.409378 containerd[1475]: time="2026-03-12T01:23:55.408887928Z" level=info msg="CreateContainer within sandbox \"9f0edddf356a0f8d42761bd54d5bd1d6072774c5028fa8d036dc7678e2410fa6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 12 01:23:55.409378 containerd[1475]: time="2026-03-12T01:23:55.408992046Z" level=info msg="CreateContainer within sandbox \"b6dba6c00857e2e9b041cee8a5084495821931c533c9dedffed23c5662eda283\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 12 01:23:55.410875 containerd[1475]: time="2026-03-12T01:23:55.410168957Z" level=info msg="CreateContainer within sandbox \"276fca2a95fb33c01ae2bf9bb6d2603d3267cb40ce3e9bd4e5d3643254190217\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 12 01:23:55.442066 containerd[1475]: time="2026-03-12T01:23:55.441977067Z" level=info msg="CreateContainer within sandbox \"b6dba6c00857e2e9b041cee8a5084495821931c533c9dedffed23c5662eda283\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3f21e605cbafd4f0f63bbc6852609df4d97841a5e158de10c009a9c5422cf28f\"" Mar 12 01:23:55.445247 containerd[1475]: time="2026-03-12T01:23:55.443037264Z" level=info msg="StartContainer for \"3f21e605cbafd4f0f63bbc6852609df4d97841a5e158de10c009a9c5422cf28f\"" Mar 12 01:23:55.448685 containerd[1475]: time="2026-03-12T01:23:55.448646415Z" level=info msg="CreateContainer within sandbox \"276fca2a95fb33c01ae2bf9bb6d2603d3267cb40ce3e9bd4e5d3643254190217\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"72b8e1260058ae30d1afbc098182c1a5d01c0dbb4a7d8927968b8117187f9461\"" Mar 12 01:23:55.449571 containerd[1475]: time="2026-03-12T01:23:55.449471499Z" level=info msg="StartContainer for \"72b8e1260058ae30d1afbc098182c1a5d01c0dbb4a7d8927968b8117187f9461\"" Mar 12 01:23:55.451074 containerd[1475]: time="2026-03-12T01:23:55.451010747Z" level=info msg="CreateContainer within sandbox \"9f0edddf356a0f8d42761bd54d5bd1d6072774c5028fa8d036dc7678e2410fa6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6909f1ec9329d387bab5cb7debb5bc8910905e73567c99aaf35f8db65cb50212\"" Mar 12 01:23:55.452567 containerd[1475]: time="2026-03-12T01:23:55.452481632Z" level=info msg="StartContainer for \"6909f1ec9329d387bab5cb7debb5bc8910905e73567c99aaf35f8db65cb50212\"" Mar 12 01:23:55.492571 systemd[1]: Started cri-containerd-3f21e605cbafd4f0f63bbc6852609df4d97841a5e158de10c009a9c5422cf28f.scope - libcontainer container 3f21e605cbafd4f0f63bbc6852609df4d97841a5e158de10c009a9c5422cf28f. Mar 12 01:23:55.495305 systemd[1]: Started cri-containerd-72b8e1260058ae30d1afbc098182c1a5d01c0dbb4a7d8927968b8117187f9461.scope - libcontainer container 72b8e1260058ae30d1afbc098182c1a5d01c0dbb4a7d8927968b8117187f9461. Mar 12 01:23:55.507717 systemd[1]: Started cri-containerd-6909f1ec9329d387bab5cb7debb5bc8910905e73567c99aaf35f8db65cb50212.scope - libcontainer container 6909f1ec9329d387bab5cb7debb5bc8910905e73567c99aaf35f8db65cb50212. 
Mar 12 01:23:55.542402 kubelet[2155]: E0312 01:23:55.542344 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 01:23:55.673762 containerd[1475]: time="2026-03-12T01:23:55.672954726Z" level=info msg="StartContainer for \"72b8e1260058ae30d1afbc098182c1a5d01c0dbb4a7d8927968b8117187f9461\" returns successfully" Mar 12 01:23:55.690411 containerd[1475]: time="2026-03-12T01:23:55.690334715Z" level=info msg="StartContainer for \"3f21e605cbafd4f0f63bbc6852609df4d97841a5e158de10c009a9c5422cf28f\" returns successfully" Mar 12 01:23:55.698442 containerd[1475]: time="2026-03-12T01:23:55.698360079Z" level=info msg="StartContainer for \"6909f1ec9329d387bab5cb7debb5bc8910905e73567c99aaf35f8db65cb50212\" returns successfully" Mar 12 01:23:56.140695 kubelet[2155]: E0312 01:23:56.140595 2155 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:23:56.145158 kubelet[2155]: E0312 01:23:56.140930 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:23:56.146362 kubelet[2155]: E0312 01:23:56.146236 2155 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:23:56.146591 kubelet[2155]: E0312 01:23:56.146540 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:23:56.151694 kubelet[2155]: E0312 01:23:56.151623 2155 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:23:56.151928 kubelet[2155]: E0312 01:23:56.151861 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:23:57.162646 kubelet[2155]: E0312 01:23:57.160510 2155 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:23:57.162646 kubelet[2155]: E0312 01:23:57.160751 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:23:57.162646 kubelet[2155]: E0312 01:23:57.160754 2155 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:23:57.162646 kubelet[2155]: E0312 01:23:57.160947 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:23:57.162646 kubelet[2155]: E0312 01:23:57.161245 2155 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:23:57.162646 kubelet[2155]: E0312 01:23:57.161446 2155 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:23:57.716170 update_engine[1457]: I20260312 01:23:57.715238 1457 update_attempter.cc:509] Updating boot flags... Mar 12 01:23:57.808435 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2451) Mar 12 01:23:57.985338 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2453) Mar 12 01:23:58.493524 kubelet[2155]: I0312 01:23:58.492209 2155 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:23:59.121519 kubelet[2155]: E0312 01:23:59.121093 2155 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 12 01:23:59.173921 kubelet[2155]: E0312 01:23:59.173214 2155 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:23:59.173921 kubelet[2155]: E0312 01:23:59.173569 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:23:59.352828 kubelet[2155]: I0312 01:23:59.352484 2155 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 12 01:23:59.434397 kubelet[2155]: I0312 01:23:59.433798 2155 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:23:59.440608 kubelet[2155]: E0312 01:23:59.440418 2155 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 12 01:23:59.440608 kubelet[2155]: I0312 01:23:59.440466 2155 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:23:59.442468 kubelet[2155]: E0312 01:23:59.442374 2155 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 12 01:23:59.442468 kubelet[2155]: I0312 01:23:59.442416 2155 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:23:59.444410 kubelet[2155]: E0312 01:23:59.444363 2155 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:23:59.674083 kubelet[2155]: I0312 01:23:59.673736 2155 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:23:59.676790 kubelet[2155]: E0312 01:23:59.676690 2155 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:23:59.677007 kubelet[2155]: E0312 01:23:59.676973 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:23:59.955058 kubelet[2155]: I0312 
01:23:59.954640 2155 apiserver.go:52] "Watching apiserver" Mar 12 01:24:00.034715 kubelet[2155]: I0312 01:24:00.034615 2155 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 01:24:01.619965 systemd[1]: Reloading requested from client PID 2460 ('systemctl') (unit session-7.scope)... Mar 12 01:24:01.620000 systemd[1]: Reloading... Mar 12 01:24:01.746380 zram_generator::config[2499]: No configuration found. Mar 12 01:24:01.893801 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:24:01.999154 systemd[1]: Reloading finished in 378 ms. Mar 12 01:24:02.060553 kubelet[2155]: I0312 01:24:02.060514 2155 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:24:02.060776 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:24:02.071909 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 01:24:02.072211 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:24:02.072333 systemd[1]: kubelet.service: Consumed 3.921s CPU time, 128.9M memory peak, 0B memory swap peak. Mar 12 01:24:02.083728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:24:02.369506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:24:02.379788 (kubelet)[2546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 01:24:02.463520 kubelet[2546]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 01:24:02.463520 kubelet[2546]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:24:02.463965 kubelet[2546]: I0312 01:24:02.463518 2546 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 01:24:02.471794 kubelet[2546]: I0312 01:24:02.471736 2546 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 12 01:24:02.471794 kubelet[2546]: I0312 01:24:02.471776 2546 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 01:24:02.471888 kubelet[2546]: I0312 01:24:02.471803 2546 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 12 01:24:02.471888 kubelet[2546]: I0312 01:24:02.471810 2546 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
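The earlier mirror-pod failures ("no PriorityClass with name system-node-critical was found") cleared up once the freshly started apiserver seeded its built-in priority classes. For reference, that object expressed with the scheduling/v1 Go types; 2000001000 is the well-known system-node-critical value, and since the apiserver creates this class itself the sketch is illustrative only:

    package main

    import (
        "fmt"

        schedulingv1 "k8s.io/api/scheduling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pc := schedulingv1.PriorityClass{
            ObjectMeta:  metav1.ObjectMeta{Name: "system-node-critical"},
            Value:       2000001000, // highest built-in priority, for node-critical pods
            Description: "Used for system critical pods that must not be moved from their current node.",
        }
        fmt.Printf("%s: %d\n", pc.Name, pc.Value)
    }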
Mar 12 01:24:02.471962 kubelet[2546]: I0312 01:24:02.471955 2546 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 01:24:02.473388 kubelet[2546]: I0312 01:24:02.473339 2546 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 12 01:24:02.478071 kubelet[2546]: I0312 01:24:02.477980 2546 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:24:02.484328 kubelet[2546]: E0312 01:24:02.484216 2546 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 01:24:02.484457 kubelet[2546]: I0312 01:24:02.484446 2546 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 12 01:24:02.492842 kubelet[2546]: I0312 01:24:02.492764 2546 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 12 01:24:02.493111 kubelet[2546]: I0312 01:24:02.493048 2546 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 01:24:02.493329 kubelet[2546]: I0312 01:24:02.493105 2546 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 01:24:02.493443 kubelet[2546]: I0312 01:24:02.493376 2546 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 01:24:02.493443 kubelet[2546]: I0312 01:24:02.493388 2546 container_manager_linux.go:306] "Creating device plugin manager" Mar 12 01:24:02.493501 kubelet[2546]: I0312 01:24:02.493463 2546 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 12 01:24:02.493825 kubelet[2546]: I0312 01:24:02.493776 2546 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:24:02.494056 kubelet[2546]: I0312 01:24:02.493977 2546 kubelet.go:475] "Attempting to sync node 
with API server" Mar 12 01:24:02.494056 kubelet[2546]: I0312 01:24:02.494026 2546 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 01:24:02.494229 kubelet[2546]: I0312 01:24:02.494178 2546 kubelet.go:387] "Adding apiserver pod source" Mar 12 01:24:02.494229 kubelet[2546]: I0312 01:24:02.494224 2546 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 01:24:02.495599 kubelet[2546]: I0312 01:24:02.495490 2546 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 01:24:02.496980 kubelet[2546]: I0312 01:24:02.496924 2546 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 01:24:02.497188 kubelet[2546]: I0312 01:24:02.497051 2546 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 12 01:24:02.502340 kubelet[2546]: I0312 01:24:02.502230 2546 server.go:1262] "Started kubelet" Mar 12 01:24:02.505332 kubelet[2546]: I0312 01:24:02.503064 2546 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 01:24:02.505332 kubelet[2546]: I0312 01:24:02.503178 2546 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 01:24:02.505332 kubelet[2546]: I0312 01:24:02.503227 2546 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 12 01:24:02.505332 kubelet[2546]: I0312 01:24:02.503849 2546 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 01:24:02.505332 kubelet[2546]: I0312 01:24:02.504506 2546 server.go:310] "Adding debug handlers to kubelet server" Mar 12 01:24:02.515931 kubelet[2546]: I0312 01:24:02.514590 2546 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 01:24:02.519772 kubelet[2546]: I0312 01:24:02.519654 2546 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 01:24:02.520686 kubelet[2546]: I0312 01:24:02.520620 2546 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 12 01:24:02.522136 kubelet[2546]: I0312 01:24:02.522080 2546 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 01:24:02.527597 kubelet[2546]: I0312 01:24:02.523890 2546 reconciler.go:29] "Reconciler: start to sync state" Mar 12 01:24:02.527597 kubelet[2546]: I0312 01:24:02.524573 2546 factory.go:223] Registration of the systemd container factory successfully Mar 12 01:24:02.527597 kubelet[2546]: I0312 01:24:02.524648 2546 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 01:24:02.529238 kubelet[2546]: E0312 01:24:02.529218 2546 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 01:24:02.530164 kubelet[2546]: I0312 01:24:02.529835 2546 factory.go:223] Registration of the containerd container factory successfully Mar 12 01:24:02.545035 kubelet[2546]: I0312 01:24:02.544959 2546 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 12 01:24:02.547053 kubelet[2546]: I0312 01:24:02.546993 2546 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 12 01:24:02.547053 kubelet[2546]: I0312 01:24:02.547055 2546 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 12 01:24:02.547162 kubelet[2546]: I0312 01:24:02.547081 2546 kubelet.go:2428] "Starting kubelet main sync loop" Mar 12 01:24:02.547162 kubelet[2546]: E0312 01:24:02.547135 2546 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 01:24:02.602426 kubelet[2546]: I0312 01:24:02.602354 2546 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 01:24:02.602426 kubelet[2546]: I0312 01:24:02.602402 2546 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 01:24:02.602426 kubelet[2546]: I0312 01:24:02.602429 2546 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:24:02.602834 kubelet[2546]: I0312 01:24:02.602651 2546 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 12 01:24:02.602834 kubelet[2546]: I0312 01:24:02.602765 2546 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 12 01:24:02.603629 kubelet[2546]: I0312 01:24:02.602880 2546 policy_none.go:49] "None policy: Start" Mar 12 01:24:02.603629 kubelet[2546]: I0312 01:24:02.602893 2546 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 12 01:24:02.603629 kubelet[2546]: I0312 01:24:02.602970 2546 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 01:24:02.603629 kubelet[2546]: I0312 01:24:02.603245 2546 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 12 01:24:02.603629 kubelet[2546]: I0312 01:24:02.603383 2546 policy_none.go:47] "Start" Mar 12 01:24:02.615890 kubelet[2546]: E0312 01:24:02.615812 2546 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 01:24:02.616303 kubelet[2546]: I0312 01:24:02.616134 2546 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 01:24:02.616303 kubelet[2546]: I0312 01:24:02.616173 2546 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 01:24:02.617099 kubelet[2546]: I0312 01:24:02.616578 2546 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 01:24:02.621882 kubelet[2546]: E0312 01:24:02.621615 2546 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 12 01:24:02.649157 kubelet[2546]: I0312 01:24:02.649048 2546 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:24:02.652351 kubelet[2546]: I0312 01:24:02.649589 2546 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:24:02.652351 kubelet[2546]: I0312 01:24:02.649713 2546 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:24:02.776445 kubelet[2546]: I0312 01:24:02.775959 2546 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:24:02.793335 kubelet[2546]: I0312 01:24:02.791897 2546 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 12 01:24:02.793335 kubelet[2546]: I0312 01:24:02.792227 2546 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 12 01:24:02.831838 kubelet[2546]: I0312 01:24:02.831755 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:24:02.832242 kubelet[2546]: I0312 01:24:02.832055 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0615b161db6e7302f4654e0a189e6aa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0615b161db6e7302f4654e0a189e6aa\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:24:02.832242 kubelet[2546]: I0312 01:24:02.832090 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:24:02.832242 kubelet[2546]: I0312 01:24:02.832108 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 12 01:24:02.832242 kubelet[2546]: I0312 01:24:02.832121 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0615b161db6e7302f4654e0a189e6aa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0615b161db6e7302f4654e0a189e6aa\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:24:02.832242 kubelet[2546]: I0312 01:24:02.832135 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0615b161db6e7302f4654e0a189e6aa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a0615b161db6e7302f4654e0a189e6aa\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:24:02.832542 kubelet[2546]: I0312 01:24:02.832149 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:24:02.832542 kubelet[2546]: I0312 01:24:02.832163 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:24:02.832542 kubelet[2546]: I0312 01:24:02.832176 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:24:03.014534 kubelet[2546]: E0312 01:24:03.012072 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:03.060195 kubelet[2546]: E0312 01:24:03.059577 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:03.079045 kubelet[2546]: E0312 01:24:03.078846 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:03.497690 kubelet[2546]: I0312 01:24:03.495463 2546 apiserver.go:52] "Watching apiserver" Mar 12 01:24:03.529290 kubelet[2546]: I0312 01:24:03.528522 2546 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 01:24:03.593511 kubelet[2546]: I0312 01:24:03.592000 2546 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:24:03.593511 kubelet[2546]: I0312 01:24:03.592918 2546 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:24:03.593511 kubelet[2546]: E0312 01:24:03.596454 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:03.606603 kubelet[2546]: E0312 01:24:03.606561 2546 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 12 01:24:03.607516 kubelet[2546]: E0312 01:24:03.607116 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:03.615946 kubelet[2546]: E0312 01:24:03.615805 2546 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 12 01:24:03.616913 kubelet[2546]: E0312 01:24:03.616885 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:03.771843 kubelet[2546]: I0312 01:24:03.770700 2546 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7706583839999999 podStartE2EDuration="1.770658384s" podCreationTimestamp="2026-03-12 01:24:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:24:03.752207052 +0000 UTC m=+1.366569931" watchObservedRunningTime="2026-03-12 01:24:03.770658384 +0000 UTC m=+1.385021263" Mar 12 01:24:03.771843 kubelet[2546]: I0312 01:24:03.771108 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.771098855 podStartE2EDuration="1.771098855s" podCreationTimestamp="2026-03-12 01:24:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:24:03.770371859 +0000 UTC m=+1.384734748" watchObservedRunningTime="2026-03-12 01:24:03.771098855 +0000 UTC m=+1.385461744" Mar 12 01:24:03.783618 kubelet[2546]: I0312 01:24:03.783497 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7834747439999998 podStartE2EDuration="1.783474744s" podCreationTimestamp="2026-03-12 01:24:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:24:03.783162646 +0000 UTC m=+1.397525535" watchObservedRunningTime="2026-03-12 01:24:03.783474744 +0000 UTC m=+1.397837653" Mar 12 01:24:04.594992 kubelet[2546]: E0312 01:24:04.594621 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:04.597895 kubelet[2546]: E0312 01:24:04.596557 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:05.618054 kubelet[2546]: E0312 01:24:05.617786 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:08.153395 kubelet[2546]: I0312 01:24:08.152968 2546 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 12 01:24:08.156698 kubelet[2546]: I0312 01:24:08.154198 2546 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 12 01:24:08.156791 containerd[1475]: time="2026-03-12T01:24:08.153776658Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 12 01:24:09.116699 kubelet[2546]: E0312 01:24:09.116415 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:09.229807 systemd[1]: Created slice kubepods-besteffort-pod4749ae04_b842_4185_a89d_9f3f84f5cc9a.slice - libcontainer container kubepods-besteffort-pod4749ae04_b842_4185_a89d_9f3f84f5cc9a.slice. 
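The "Created slice" entry above shows the kubelet's systemd cgroup driver naming convention: the pod's QoS class (besteffort) and UID are embedded in the slice name, and because "-" separates path components in systemd slice names, the dashes inside the pod UID are rewritten as underscores. A minimal sketch of that rule, assuming nothing beyond what the log line itself shows (the helper name is invented, not kubelet's):

package main

import (
	"fmt"
	"strings"
)

// podSliceName is a hypothetical re-implementation of the naming rule
// visible in the "Created slice" log entry: kubepods-<qos>-pod<uid>.slice,
// with dashes in the pod UID rewritten as underscores because "-" is the
// component separator in systemd slice names.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("besteffort", "4749ae04-b842-4185-a89d-9f3f84f5cc9a"))
	// kubepods-besteffort-pod4749ae04_b842_4185_a89d_9f3f84f5cc9a.slice
}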
Mar 12 01:24:09.340503 kubelet[2546]: I0312 01:24:09.340229 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4749ae04-b842-4185-a89d-9f3f84f5cc9a-kube-proxy\") pod \"kube-proxy-785fh\" (UID: \"4749ae04-b842-4185-a89d-9f3f84f5cc9a\") " pod="kube-system/kube-proxy-785fh" Mar 12 01:24:09.340503 kubelet[2546]: I0312 01:24:09.340431 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4749ae04-b842-4185-a89d-9f3f84f5cc9a-xtables-lock\") pod \"kube-proxy-785fh\" (UID: \"4749ae04-b842-4185-a89d-9f3f84f5cc9a\") " pod="kube-system/kube-proxy-785fh" Mar 12 01:24:09.340503 kubelet[2546]: I0312 01:24:09.340490 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4749ae04-b842-4185-a89d-9f3f84f5cc9a-lib-modules\") pod \"kube-proxy-785fh\" (UID: \"4749ae04-b842-4185-a89d-9f3f84f5cc9a\") " pod="kube-system/kube-proxy-785fh" Mar 12 01:24:09.340503 kubelet[2546]: I0312 01:24:09.340519 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwswk\" (UniqueName: \"kubernetes.io/projected/4749ae04-b842-4185-a89d-9f3f84f5cc9a-kube-api-access-rwswk\") pod \"kube-proxy-785fh\" (UID: \"4749ae04-b842-4185-a89d-9f3f84f5cc9a\") " pod="kube-system/kube-proxy-785fh" Mar 12 01:24:09.404150 systemd[1]: Created slice kubepods-besteffort-pod9d6ad5bc_4838_42fa_ad36_390a27b16310.slice - libcontainer container kubepods-besteffort-pod9d6ad5bc_4838_42fa_ad36_390a27b16310.slice. Mar 12 01:24:09.420216 kubelet[2546]: E0312 01:24:09.420153 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:09.441481 kubelet[2546]: I0312 01:24:09.441377 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzvt6\" (UniqueName: \"kubernetes.io/projected/9d6ad5bc-4838-42fa-ad36-390a27b16310-kube-api-access-wzvt6\") pod \"tigera-operator-5588576f44-f6l5j\" (UID: \"9d6ad5bc-4838-42fa-ad36-390a27b16310\") " pod="tigera-operator/tigera-operator-5588576f44-f6l5j" Mar 12 01:24:09.441481 kubelet[2546]: I0312 01:24:09.441412 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9d6ad5bc-4838-42fa-ad36-390a27b16310-var-lib-calico\") pod \"tigera-operator-5588576f44-f6l5j\" (UID: \"9d6ad5bc-4838-42fa-ad36-390a27b16310\") " pod="tigera-operator/tigera-operator-5588576f44-f6l5j" Mar 12 01:24:09.548409 kubelet[2546]: E0312 01:24:09.547459 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:09.550211 containerd[1475]: time="2026-03-12T01:24:09.550046638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-785fh,Uid:4749ae04-b842-4185-a89d-9f3f84f5cc9a,Namespace:kube-system,Attempt:0,}" Mar 12 01:24:09.627523 kubelet[2546]: E0312 01:24:09.627453 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:09.628235 
kubelet[2546]: E0312 01:24:09.628121 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:09.792046 containerd[1475]: time="2026-03-12T01:24:09.791748726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-f6l5j,Uid:9d6ad5bc-4838-42fa-ad36-390a27b16310,Namespace:tigera-operator,Attempt:0,}" Mar 12 01:24:09.794581 containerd[1475]: time="2026-03-12T01:24:09.794230706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:24:09.794581 containerd[1475]: time="2026-03-12T01:24:09.794541523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:24:09.794782 containerd[1475]: time="2026-03-12T01:24:09.794704207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:24:09.795538 containerd[1475]: time="2026-03-12T01:24:09.795063784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:24:09.856578 systemd[1]: Started cri-containerd-09cc6964d3bf6898ea6dc117e167fb56004e287becdd6624c07b73158471b43d.scope - libcontainer container 09cc6964d3bf6898ea6dc117e167fb56004e287becdd6624c07b73158471b43d. Mar 12 01:24:09.877600 containerd[1475]: time="2026-03-12T01:24:09.877228936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:24:09.877914 containerd[1475]: time="2026-03-12T01:24:09.877845237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:24:09.878084 containerd[1475]: time="2026-03-12T01:24:09.878030978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:24:09.878329 containerd[1475]: time="2026-03-12T01:24:09.878222792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:24:09.964924 systemd[1]: Started cri-containerd-01073576fd9681996ab8bc5359d4d0312fc1d1d59d459f0a8f4ddbf7dc9804c4.scope - libcontainer container 01073576fd9681996ab8bc5359d4d0312fc1d1d59d459f0a8f4ddbf7dc9804c4. 
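The dns.go:154 errors that recur throughout this boot are the kubelet's resolv.conf guard: the resolver limit is three nameservers (glibc's MAXNS), the host's /etc/resolv.conf evidently lists more, and the kubelet keeps the first three and logs the applied line (1.1.1.1 1.0.0.1 8.8.8.8). A simplified stand-in for that truncation; the fourth nameserver below is an invented example, since the omitted entries never appear in this log:

package main

import (
	"fmt"
	"strings"
)

// capNameservers keeps at most `limit` nameserver entries, mirroring the
// behavior behind the recurring dns.go:154 warning. The parsing here is
// deliberately minimal and is not kubelet's actual resolv.conf reader.
func capNameservers(resolvConf string, limit int) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > limit {
		servers = servers[:limit]
	}
	return servers
}

func main() {
	// 9.9.9.9 is a made-up fourth entry; only the first three survive.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(strings.Join(capNameservers(conf, 3), " ")) // 1.1.1.1 1.0.0.1 8.8.8.8
}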
Mar 12 01:24:09.974698 containerd[1475]: time="2026-03-12T01:24:09.974659065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-785fh,Uid:4749ae04-b842-4185-a89d-9f3f84f5cc9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"09cc6964d3bf6898ea6dc117e167fb56004e287becdd6624c07b73158471b43d\"" Mar 12 01:24:09.976627 kubelet[2546]: E0312 01:24:09.976503 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:09.985138 containerd[1475]: time="2026-03-12T01:24:09.984980443Z" level=info msg="CreateContainer within sandbox \"09cc6964d3bf6898ea6dc117e167fb56004e287becdd6624c07b73158471b43d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 12 01:24:10.020992 containerd[1475]: time="2026-03-12T01:24:10.020757367Z" level=info msg="CreateContainer within sandbox \"09cc6964d3bf6898ea6dc117e167fb56004e287becdd6624c07b73158471b43d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c72dff8008f27813a00b00b4ca19d5c5aa39dfdb57dc6b5bf1b83b893e33ff6d\"" Mar 12 01:24:10.024229 containerd[1475]: time="2026-03-12T01:24:10.024180703Z" level=info msg="StartContainer for \"c72dff8008f27813a00b00b4ca19d5c5aa39dfdb57dc6b5bf1b83b893e33ff6d\"" Mar 12 01:24:10.054594 containerd[1475]: time="2026-03-12T01:24:10.054235175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-f6l5j,Uid:9d6ad5bc-4838-42fa-ad36-390a27b16310,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"01073576fd9681996ab8bc5359d4d0312fc1d1d59d459f0a8f4ddbf7dc9804c4\"" Mar 12 01:24:10.058554 containerd[1475]: time="2026-03-12T01:24:10.058245100Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 12 01:24:10.120756 systemd[1]: Started cri-containerd-c72dff8008f27813a00b00b4ca19d5c5aa39dfdb57dc6b5bf1b83b893e33ff6d.scope - libcontainer container c72dff8008f27813a00b00b4ca19d5c5aa39dfdb57dc6b5bf1b83b893e33ff6d. Mar 12 01:24:10.168020 containerd[1475]: time="2026-03-12T01:24:10.167694761Z" level=info msg="StartContainer for \"c72dff8008f27813a00b00b4ca19d5c5aa39dfdb57dc6b5bf1b83b893e33ff6d\" returns successfully" Mar 12 01:24:10.655058 kubelet[2546]: E0312 01:24:10.654788 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:10.703076 kubelet[2546]: I0312 01:24:10.702825 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-785fh" podStartSLOduration=1.7027643019999998 podStartE2EDuration="1.702764302s" podCreationTimestamp="2026-03-12 01:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:24:10.701348025 +0000 UTC m=+8.315710904" watchObservedRunningTime="2026-03-12 01:24:10.702764302 +0000 UTC m=+8.317127191" Mar 12 01:24:11.060416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1819708841.mount: Deactivated successfully. 
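The pod_startup_latency_tracker record for kube-proxy-785fh above is internally consistent: podStartSLOduration (1.702764302s, logged with float64 noise as 1.7027643019999998) equals watchObservedRunningTime minus podCreationTimestamp, and the zero-value firstStartedPulling/lastFinishedPulling timestamps indicate no image pull contributed, which is why the SLO and E2E durations coincide. Re-deriving the figure from the logged timestamps (the layout string matches how they are printed here):

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	created, _ := time.Parse(layout, "2026-03-12 01:24:09 +0000 UTC")
	running, _ := time.Parse(layout, "2026-03-12 01:24:10.702764302 +0000 UTC")
	fmt.Println(running.Sub(created)) // 1.702764302s == podStartSLOduration
}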
Mar 12 01:24:18.643050 kubelet[2546]: E0312 01:24:18.642737 2546 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.179s" Mar 12 01:24:18.682014 kubelet[2546]: E0312 01:24:18.680852 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:19.466333 containerd[1475]: time="2026-03-12T01:24:19.464983984Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 12 01:24:19.466333 containerd[1475]: time="2026-03-12T01:24:19.464858190Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:24:19.472601 containerd[1475]: time="2026-03-12T01:24:19.471982026Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:24:19.487138 containerd[1475]: time="2026-03-12T01:24:19.486591410Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:24:19.489203 containerd[1475]: time="2026-03-12T01:24:19.488138832Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 9.42976509s" Mar 12 01:24:19.489203 containerd[1475]: time="2026-03-12T01:24:19.488369685Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 12 01:24:19.501702 containerd[1475]: time="2026-03-12T01:24:19.501504253Z" level=info msg="CreateContainer within sandbox \"01073576fd9681996ab8bc5359d4d0312fc1d1d59d459f0a8f4ddbf7dc9804c4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 12 01:24:19.526821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount799914936.mount: Deactivated successfully. Mar 12 01:24:19.531716 containerd[1475]: time="2026-03-12T01:24:19.531640199Z" level=info msg="CreateContainer within sandbox \"01073576fd9681996ab8bc5359d4d0312fc1d1d59d459f0a8f4ddbf7dc9804c4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9b240adfc6c0d28ed0d5a167a86db3a93fe7bb327a071f5812e9d6962d85a27b\"" Mar 12 01:24:19.534755 containerd[1475]: time="2026-03-12T01:24:19.532955584Z" level=info msg="StartContainer for \"9b240adfc6c0d28ed0d5a167a86db3a93fe7bb327a071f5812e9d6962d85a27b\"" Mar 12 01:24:19.632467 systemd[1]: Started cri-containerd-9b240adfc6c0d28ed0d5a167a86db3a93fe7bb327a071f5812e9d6962d85a27b.scope - libcontainer container 9b240adfc6c0d28ed0d5a167a86db3a93fe7bb327a071f5812e9d6962d85a27b. 
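The tigera/operator pull record above carries enough numbers for a quick sanity check: 40846156 bytes read over 9.42976509s is roughly 4.1 MiB/s from quay.io. (The size "40842151" in the Pulled message is the recorded image size, which need not equal the bytes actually transferred.) The slow pull, together with the housekeeping warning at the top of this stretch (actual 3.179s against an expected 1s), suggests the node was I/O- or network-bound during this first boot, though the log alone cannot confirm that.

package main

import "fmt"

func main() {
	const bytesRead = 40846156.0 // "stop pulling image ... bytes read=40846156"
	const seconds = 9.42976509   // "... in 9.42976509s"
	fmt.Printf("%.2f MiB/s\n", bytesRead/seconds/(1024*1024)) // ≈ 4.13 MiB/s
}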
Mar 12 01:24:19.653819 kubelet[2546]: E0312 01:24:19.653748 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:19.726878 containerd[1475]: time="2026-03-12T01:24:19.726713156Z" level=info msg="StartContainer for \"9b240adfc6c0d28ed0d5a167a86db3a93fe7bb327a071f5812e9d6962d85a27b\" returns successfully" Mar 12 01:24:20.704087 kubelet[2546]: I0312 01:24:20.703680 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-f6l5j" podStartSLOduration=2.26674525 podStartE2EDuration="11.703567158s" podCreationTimestamp="2026-03-12 01:24:09 +0000 UTC" firstStartedPulling="2026-03-12 01:24:10.057631168 +0000 UTC m=+7.671994067" lastFinishedPulling="2026-03-12 01:24:19.494453095 +0000 UTC m=+17.108815975" observedRunningTime="2026-03-12 01:24:20.702371171 +0000 UTC m=+18.316734049" watchObservedRunningTime="2026-03-12 01:24:20.703567158 +0000 UTC m=+18.317930038" Mar 12 01:24:26.821434 sudo[1647]: pam_unix(sudo:session): session closed for user root Mar 12 01:24:26.834510 sshd[1644]: pam_unix(sshd:session): session closed for user core Mar 12 01:24:26.845072 systemd[1]: sshd@6-10.0.0.53:22-10.0.0.1:49896.service: Deactivated successfully. Mar 12 01:24:26.846714 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit. Mar 12 01:24:26.849086 systemd[1]: session-7.scope: Deactivated successfully. Mar 12 01:24:26.851345 systemd[1]: session-7.scope: Consumed 17.461s CPU time, 161.5M memory peak, 0B memory swap peak. Mar 12 01:24:26.858978 systemd-logind[1453]: Removed session 7. Mar 12 01:24:29.094938 systemd[1]: Created slice kubepods-besteffort-pod85c2ed69_6ad5_43a3_ba2c_78e86eeb83b4.slice - libcontainer container kubepods-besteffort-pod85c2ed69_6ad5_43a3_ba2c_78e86eeb83b4.slice. Mar 12 01:24:29.100352 systemd[1]: Created slice kubepods-besteffort-pode9a38a06_65d1_42e0_bd2c_e7eec2f71543.slice - libcontainer container kubepods-besteffort-pode9a38a06_65d1_42e0_bd2c_e7eec2f71543.slice. 
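For tigera-operator-5588576f44-f6l5j, unlike the static pods earlier, the two startup durations finally diverge: podStartE2EDuration (11.703567158s) spans pod creation to observed running, while podStartSLOduration (2.26674525s) excludes image pulling. Subtracting the pull window (firstStartedPulling 01:24:10.057631168 to lastFinishedPulling 01:24:19.494453095, i.e. 9.436821927s) from the E2E figure gives 2.266745231s, agreeing with the logged SLO duration to within a few tens of nanoseconds of rounding. The bookkeeping, using only values from the record above:

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(v string) time.Time {
	t, err := time.Parse(layout, v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	e2e := 11703567158 * time.Nanosecond // podStartE2EDuration
	pullStart := mustParse("2026-03-12 01:24:10.057631168 +0000 UTC")
	pullEnd := mustParse("2026-03-12 01:24:19.494453095 +0000 UTC")
	fmt.Println(e2e - pullEnd.Sub(pullStart)) // 2.266745231s ≈ podStartSLOduration
}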
Mar 12 01:24:29.139790 kubelet[2546]: I0312 01:24:29.139589 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e9a38a06-65d1-42e0-bd2c-e7eec2f71543-flexvol-driver-host\") pod \"calico-node-66fjw\" (UID: \"e9a38a06-65d1-42e0-bd2c-e7eec2f71543\") " pod="calico-system/calico-node-66fjw" Mar 12 01:24:29.139790 kubelet[2546]: I0312 01:24:29.139681 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/e9a38a06-65d1-42e0-bd2c-e7eec2f71543-sys-fs\") pod \"calico-node-66fjw\" (UID: \"e9a38a06-65d1-42e0-bd2c-e7eec2f71543\") " pod="calico-system/calico-node-66fjw" Mar 12 01:24:29.139790 kubelet[2546]: I0312 01:24:29.139699 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e9a38a06-65d1-42e0-bd2c-e7eec2f71543-var-run-calico\") pod \"calico-node-66fjw\" (UID: \"e9a38a06-65d1-42e0-bd2c-e7eec2f71543\") " pod="calico-system/calico-node-66fjw" Mar 12 01:24:29.139790 kubelet[2546]: I0312 01:24:29.139715 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e9a38a06-65d1-42e0-bd2c-e7eec2f71543-cni-bin-dir\") pod \"calico-node-66fjw\" (UID: \"e9a38a06-65d1-42e0-bd2c-e7eec2f71543\") " pod="calico-system/calico-node-66fjw" Mar 12 01:24:29.139790 kubelet[2546]: I0312 01:24:29.139728 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/e9a38a06-65d1-42e0-bd2c-e7eec2f71543-nodeproc\") pod \"calico-node-66fjw\" (UID: \"e9a38a06-65d1-42e0-bd2c-e7eec2f71543\") " pod="calico-system/calico-node-66fjw" Mar 12 01:24:29.140677 kubelet[2546]: I0312 01:24:29.139742 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6kjp\" (UniqueName: \"kubernetes.io/projected/e9a38a06-65d1-42e0-bd2c-e7eec2f71543-kube-api-access-k6kjp\") pod \"calico-node-66fjw\" (UID: \"e9a38a06-65d1-42e0-bd2c-e7eec2f71543\") " pod="calico-system/calico-node-66fjw" Mar 12 01:24:29.140677 kubelet[2546]: I0312 01:24:29.139859 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9a38a06-65d1-42e0-bd2c-e7eec2f71543-lib-modules\") pod \"calico-node-66fjw\" (UID: \"e9a38a06-65d1-42e0-bd2c-e7eec2f71543\") " pod="calico-system/calico-node-66fjw" Mar 12 01:24:29.140677 kubelet[2546]: I0312 01:24:29.139933 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85c2ed69-6ad5-43a3-ba2c-78e86eeb83b4-tigera-ca-bundle\") pod \"calico-typha-6d7df569f9-rmb5q\" (UID: \"85c2ed69-6ad5-43a3-ba2c-78e86eeb83b4\") " pod="calico-system/calico-typha-6d7df569f9-rmb5q" Mar 12 01:24:29.140677 kubelet[2546]: I0312 01:24:29.139964 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qclv\" (UniqueName: \"kubernetes.io/projected/85c2ed69-6ad5-43a3-ba2c-78e86eeb83b4-kube-api-access-5qclv\") pod \"calico-typha-6d7df569f9-rmb5q\" (UID: \"85c2ed69-6ad5-43a3-ba2c-78e86eeb83b4\") " pod="calico-system/calico-typha-6d7df569f9-rmb5q" Mar 12 
01:24:29.140677 kubelet[2546]: I0312 01:24:29.139980 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e9a38a06-65d1-42e0-bd2c-e7eec2f71543-var-lib-calico\") pod \"calico-node-66fjw\" (UID: \"e9a38a06-65d1-42e0-bd2c-e7eec2f71543\") " pod="calico-system/calico-node-66fjw" Mar 12 01:24:29.140923 kubelet[2546]: I0312 01:24:29.140002 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e9a38a06-65d1-42e0-bd2c-e7eec2f71543-node-certs\") pod \"calico-node-66fjw\" (UID: \"e9a38a06-65d1-42e0-bd2c-e7eec2f71543\") " pod="calico-system/calico-node-66fjw" Mar 12 01:24:29.140923 kubelet[2546]: I0312 01:24:29.140015 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e9a38a06-65d1-42e0-bd2c-e7eec2f71543-policysync\") pod \"calico-node-66fjw\" (UID: \"e9a38a06-65d1-42e0-bd2c-e7eec2f71543\") " pod="calico-system/calico-node-66fjw" Mar 12 01:24:29.140923 kubelet[2546]: I0312 01:24:29.140033 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e9a38a06-65d1-42e0-bd2c-e7eec2f71543-cni-net-dir\") pod \"calico-node-66fjw\" (UID: \"e9a38a06-65d1-42e0-bd2c-e7eec2f71543\") " pod="calico-system/calico-node-66fjw" Mar 12 01:24:29.140923 kubelet[2546]: I0312 01:24:29.140054 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9a38a06-65d1-42e0-bd2c-e7eec2f71543-tigera-ca-bundle\") pod \"calico-node-66fjw\" (UID: \"e9a38a06-65d1-42e0-bd2c-e7eec2f71543\") " pod="calico-system/calico-node-66fjw" Mar 12 01:24:29.140923 kubelet[2546]: I0312 01:24:29.140079 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9a38a06-65d1-42e0-bd2c-e7eec2f71543-xtables-lock\") pod \"calico-node-66fjw\" (UID: \"e9a38a06-65d1-42e0-bd2c-e7eec2f71543\") " pod="calico-system/calico-node-66fjw" Mar 12 01:24:29.141068 kubelet[2546]: I0312 01:24:29.140144 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/85c2ed69-6ad5-43a3-ba2c-78e86eeb83b4-typha-certs\") pod \"calico-typha-6d7df569f9-rmb5q\" (UID: \"85c2ed69-6ad5-43a3-ba2c-78e86eeb83b4\") " pod="calico-system/calico-typha-6d7df569f9-rmb5q" Mar 12 01:24:29.141068 kubelet[2546]: I0312 01:24:29.140165 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e9a38a06-65d1-42e0-bd2c-e7eec2f71543-cni-log-dir\") pod \"calico-node-66fjw\" (UID: \"e9a38a06-65d1-42e0-bd2c-e7eec2f71543\") " pod="calico-system/calico-node-66fjw" Mar 12 01:24:29.141068 kubelet[2546]: I0312 01:24:29.140188 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/e9a38a06-65d1-42e0-bd2c-e7eec2f71543-bpffs\") pod \"calico-node-66fjw\" (UID: \"e9a38a06-65d1-42e0-bd2c-e7eec2f71543\") " pod="calico-system/calico-node-66fjw" Mar 12 01:24:29.195231 kubelet[2546]: E0312 01:24:29.194370 2546 pod_workers.go:1324] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3" Mar 12 01:24:29.241059 kubelet[2546]: I0312 01:24:29.240932 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ed55f459-74a9-4d53-811f-2b6098967bb3-socket-dir\") pod \"csi-node-driver-s5nns\" (UID: \"ed55f459-74a9-4d53-811f-2b6098967bb3\") " pod="calico-system/csi-node-driver-s5nns" Mar 12 01:24:29.241059 kubelet[2546]: I0312 01:24:29.241043 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ed55f459-74a9-4d53-811f-2b6098967bb3-varrun\") pod \"csi-node-driver-s5nns\" (UID: \"ed55f459-74a9-4d53-811f-2b6098967bb3\") " pod="calico-system/csi-node-driver-s5nns" Mar 12 01:24:29.241470 kubelet[2546]: I0312 01:24:29.241181 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ed55f459-74a9-4d53-811f-2b6098967bb3-registration-dir\") pod \"csi-node-driver-s5nns\" (UID: \"ed55f459-74a9-4d53-811f-2b6098967bb3\") " pod="calico-system/csi-node-driver-s5nns" Mar 12 01:24:29.241470 kubelet[2546]: I0312 01:24:29.241217 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npmsd\" (UniqueName: \"kubernetes.io/projected/ed55f459-74a9-4d53-811f-2b6098967bb3-kube-api-access-npmsd\") pod \"csi-node-driver-s5nns\" (UID: \"ed55f459-74a9-4d53-811f-2b6098967bb3\") " pod="calico-system/csi-node-driver-s5nns" Mar 12 01:24:29.241470 kubelet[2546]: I0312 01:24:29.241372 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed55f459-74a9-4d53-811f-2b6098967bb3-kubelet-dir\") pod \"csi-node-driver-s5nns\" (UID: \"ed55f459-74a9-4d53-811f-2b6098967bb3\") " pod="calico-system/csi-node-driver-s5nns" Mar 12 01:24:29.266359 kubelet[2546]: E0312 01:24:29.266103 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.266558 kubelet[2546]: W0312 01:24:29.266406 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.266970 kubelet[2546]: E0312 01:24:29.266858 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:24:29.273781 kubelet[2546]: E0312 01:24:29.272641 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.273781 kubelet[2546]: W0312 01:24:29.272662 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.273781 kubelet[2546]: E0312 01:24:29.272685 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.273781 kubelet[2546]: E0312 01:24:29.273084 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.273781 kubelet[2546]: W0312 01:24:29.273095 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.273781 kubelet[2546]: E0312 01:24:29.273106 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.290068 kubelet[2546]: E0312 01:24:29.289957 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.290068 kubelet[2546]: W0312 01:24:29.290028 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.290068 kubelet[2546]: E0312 01:24:29.290064 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.342782 kubelet[2546]: E0312 01:24:29.342654 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.342782 kubelet[2546]: W0312 01:24:29.342695 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.342782 kubelet[2546]: E0312 01:24:29.342716 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.343295 kubelet[2546]: E0312 01:24:29.343207 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.343345 kubelet[2546]: W0312 01:24:29.343246 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.343345 kubelet[2546]: E0312 01:24:29.343316 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:24:29.343771 kubelet[2546]: E0312 01:24:29.343673 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.343771 kubelet[2546]: W0312 01:24:29.343706 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.343771 kubelet[2546]: E0312 01:24:29.343717 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.344235 kubelet[2546]: E0312 01:24:29.344200 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.344344 kubelet[2546]: W0312 01:24:29.344238 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.344344 kubelet[2546]: E0312 01:24:29.344322 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.344832 kubelet[2546]: E0312 01:24:29.344793 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.344881 kubelet[2546]: W0312 01:24:29.344837 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.344881 kubelet[2546]: E0312 01:24:29.344853 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.345506 kubelet[2546]: E0312 01:24:29.345402 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.345506 kubelet[2546]: W0312 01:24:29.345434 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.345506 kubelet[2546]: E0312 01:24:29.345446 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.345885 kubelet[2546]: E0312 01:24:29.345853 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.345929 kubelet[2546]: W0312 01:24:29.345886 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.345929 kubelet[2546]: E0312 01:24:29.345898 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:24:29.346502 kubelet[2546]: E0312 01:24:29.346464 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.346502 kubelet[2546]: W0312 01:24:29.346500 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.347007 kubelet[2546]: E0312 01:24:29.346513 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.347132 kubelet[2546]: E0312 01:24:29.347095 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.347132 kubelet[2546]: W0312 01:24:29.347128 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.347180 kubelet[2546]: E0312 01:24:29.347141 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.347651 kubelet[2546]: E0312 01:24:29.347617 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.347651 kubelet[2546]: W0312 01:24:29.347649 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.347787 kubelet[2546]: E0312 01:24:29.347661 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.348088 kubelet[2546]: E0312 01:24:29.348055 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.348088 kubelet[2546]: W0312 01:24:29.348086 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.348142 kubelet[2546]: E0312 01:24:29.348098 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.348512 kubelet[2546]: E0312 01:24:29.348480 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.348550 kubelet[2546]: W0312 01:24:29.348516 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.348550 kubelet[2546]: E0312 01:24:29.348528 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:24:29.349089 kubelet[2546]: E0312 01:24:29.349050 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.349122 kubelet[2546]: W0312 01:24:29.349094 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.349122 kubelet[2546]: E0312 01:24:29.349114 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.349668 kubelet[2546]: E0312 01:24:29.349631 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.349701 kubelet[2546]: W0312 01:24:29.349671 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.349701 kubelet[2546]: E0312 01:24:29.349688 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.350304 kubelet[2546]: E0312 01:24:29.350212 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.350352 kubelet[2546]: W0312 01:24:29.350329 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.350352 kubelet[2546]: E0312 01:24:29.350347 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.350829 kubelet[2546]: E0312 01:24:29.350792 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.350871 kubelet[2546]: W0312 01:24:29.350834 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.350871 kubelet[2546]: E0312 01:24:29.350850 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.351360 kubelet[2546]: E0312 01:24:29.351234 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.351360 kubelet[2546]: W0312 01:24:29.351329 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.351360 kubelet[2546]: E0312 01:24:29.351340 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:24:29.351669 kubelet[2546]: E0312 01:24:29.351633 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.351701 kubelet[2546]: W0312 01:24:29.351674 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.351701 kubelet[2546]: E0312 01:24:29.351691 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.352184 kubelet[2546]: E0312 01:24:29.352132 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.352184 kubelet[2546]: W0312 01:24:29.352165 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.352184 kubelet[2546]: E0312 01:24:29.352176 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.352683 kubelet[2546]: E0312 01:24:29.352633 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.352683 kubelet[2546]: W0312 01:24:29.352664 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.352683 kubelet[2546]: E0312 01:24:29.352674 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.353185 kubelet[2546]: E0312 01:24:29.353155 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.353185 kubelet[2546]: W0312 01:24:29.353185 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.353306 kubelet[2546]: E0312 01:24:29.353195 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.353695 kubelet[2546]: E0312 01:24:29.353631 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.353695 kubelet[2546]: W0312 01:24:29.353662 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.353695 kubelet[2546]: E0312 01:24:29.353678 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:24:29.354166 kubelet[2546]: E0312 01:24:29.354137 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.354166 kubelet[2546]: W0312 01:24:29.354166 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.354374 kubelet[2546]: E0312 01:24:29.354176 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.354786 kubelet[2546]: E0312 01:24:29.354686 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.354786 kubelet[2546]: W0312 01:24:29.354721 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.354786 kubelet[2546]: E0312 01:24:29.354775 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.355141 kubelet[2546]: E0312 01:24:29.355076 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.355141 kubelet[2546]: W0312 01:24:29.355118 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.355141 kubelet[2546]: E0312 01:24:29.355133 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:24:29.367612 kubelet[2546]: E0312 01:24:29.367548 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:24:29.367612 kubelet[2546]: W0312 01:24:29.367596 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:24:29.367850 kubelet[2546]: E0312 01:24:29.367619 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:24:29.414479 containerd[1475]: time="2026-03-12T01:24:29.414404723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-66fjw,Uid:e9a38a06-65d1-42e0-bd2c-e7eec2f71543,Namespace:calico-system,Attempt:0,}" Mar 12 01:24:29.416503 kubelet[2546]: E0312 01:24:29.416458 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:29.417056 containerd[1475]: time="2026-03-12T01:24:29.416974167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d7df569f9-rmb5q,Uid:85c2ed69-6ad5-43a3-ba2c-78e86eeb83b4,Namespace:calico-system,Attempt:0,}" Mar 12 01:24:29.466770 containerd[1475]: time="2026-03-12T01:24:29.464959960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:24:29.466770 containerd[1475]: time="2026-03-12T01:24:29.465232376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:24:29.466770 containerd[1475]: time="2026-03-12T01:24:29.465246731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:24:29.466770 containerd[1475]: time="2026-03-12T01:24:29.465967733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:24:29.487877 containerd[1475]: time="2026-03-12T01:24:29.487685154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:24:29.487877 containerd[1475]: time="2026-03-12T01:24:29.487840720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:24:29.488238 containerd[1475]: time="2026-03-12T01:24:29.487855516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:24:29.488238 containerd[1475]: time="2026-03-12T01:24:29.487962849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:24:29.500474 systemd[1]: Started cri-containerd-c2e912a88b8e8301a83d7cb6b15e1479693faaede96a63df73673886f35ab382.scope - libcontainer container c2e912a88b8e8301a83d7cb6b15e1479693faaede96a63df73673886f35ab382. Mar 12 01:24:29.531450 systemd[1]: Started cri-containerd-03c21f7faf2f197161599030855fa0574d1628a2a69d6295160ea7b28ab4e509.scope - libcontainer container 03c21f7faf2f197161599030855fa0574d1628a2a69d6295160ea7b28ab4e509. 
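The burst of driver-call.go and plugins.go errors between 01:24:29.266 and 01:24:29.367 is the kubelet's FlexVolume prober: on each probe it executes every driver under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument init and unmarshals stdout as a JSON driver status. The nodeagent~uds/uds binary is not installed yet (Calico's flexvol-driver init container for calico-node-66fjw, started just below, is what installs it), so the exec fails, stdout stays empty, and json.Unmarshal reports "unexpected end of JSON input". A minimal sketch of that probe, assuming only the documented FlexVolume init contract (a JSON object with a "status" field on stdout); this is not kubelet's actual code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is the minimal shape of a FlexVolume driver reply.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// probe mimics the failing call: exec the driver with "init" and decode
// stdout. A missing binary or empty stdout both surface as errors, much
// as in the kubelet entries above.
func probe(driver string) (*driverStatus, error) {
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		return nil, fmt.Errorf("driver call failed: %w", err)
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output %q: %w", out, err)
	}
	return &st, nil
}

func main() {
	_, err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err) // before the init container runs, the binary is missing
}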
Mar 12 01:24:29.575051 containerd[1475]: time="2026-03-12T01:24:29.574825281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-66fjw,Uid:e9a38a06-65d1-42e0-bd2c-e7eec2f71543,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2e912a88b8e8301a83d7cb6b15e1479693faaede96a63df73673886f35ab382\"" Mar 12 01:24:29.579939 containerd[1475]: time="2026-03-12T01:24:29.579790094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 12 01:24:29.618594 containerd[1475]: time="2026-03-12T01:24:29.617567553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d7df569f9-rmb5q,Uid:85c2ed69-6ad5-43a3-ba2c-78e86eeb83b4,Namespace:calico-system,Attempt:0,} returns sandbox id \"03c21f7faf2f197161599030855fa0574d1628a2a69d6295160ea7b28ab4e509\"" Mar 12 01:24:29.620075 kubelet[2546]: E0312 01:24:29.619405 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:24:30.320403 containerd[1475]: time="2026-03-12T01:24:30.320320782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:24:30.322007 containerd[1475]: time="2026-03-12T01:24:30.321892223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 12 01:24:30.323777 containerd[1475]: time="2026-03-12T01:24:30.323660626Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:24:30.328216 containerd[1475]: time="2026-03-12T01:24:30.328108529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:24:30.329960 containerd[1475]: time="2026-03-12T01:24:30.329871474Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 749.994303ms" Mar 12 01:24:30.330095 containerd[1475]: time="2026-03-12T01:24:30.329964533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 12 01:24:30.332817 containerd[1475]: time="2026-03-12T01:24:30.332536525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 12 01:24:30.337326 containerd[1475]: time="2026-03-12T01:24:30.337186053Z" level=info msg="CreateContainer within sandbox \"c2e912a88b8e8301a83d7cb6b15e1479693faaede96a63df73673886f35ab382\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 12 01:24:30.379866 containerd[1475]: time="2026-03-12T01:24:30.379703062Z" level=info msg="CreateContainer within sandbox \"c2e912a88b8e8301a83d7cb6b15e1479693faaede96a63df73673886f35ab382\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"35d1351d8a0113c6aab84dc09f2b0b5dd6ac701dad90b958f74b53ca067fe52b\"" Mar 12 01:24:30.380935 
containerd[1475]: time="2026-03-12T01:24:30.380679768Z" level=info msg="StartContainer for \"35d1351d8a0113c6aab84dc09f2b0b5dd6ac701dad90b958f74b53ca067fe52b\"" Mar 12 01:24:30.446467 systemd[1]: Started cri-containerd-35d1351d8a0113c6aab84dc09f2b0b5dd6ac701dad90b958f74b53ca067fe52b.scope - libcontainer container 35d1351d8a0113c6aab84dc09f2b0b5dd6ac701dad90b958f74b53ca067fe52b. Mar 12 01:24:30.516865 containerd[1475]: time="2026-03-12T01:24:30.516824545Z" level=info msg="StartContainer for \"35d1351d8a0113c6aab84dc09f2b0b5dd6ac701dad90b958f74b53ca067fe52b\" returns successfully" Mar 12 01:24:30.526917 systemd[1]: cri-containerd-35d1351d8a0113c6aab84dc09f2b0b5dd6ac701dad90b958f74b53ca067fe52b.scope: Deactivated successfully. Mar 12 01:24:30.547985 kubelet[2546]: E0312 01:24:30.547876 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3" Mar 12 01:24:30.596125 containerd[1475]: time="2026-03-12T01:24:30.595782827Z" level=info msg="shim disconnected" id=35d1351d8a0113c6aab84dc09f2b0b5dd6ac701dad90b958f74b53ca067fe52b namespace=k8s.io Mar 12 01:24:30.596125 containerd[1475]: time="2026-03-12T01:24:30.595992496Z" level=warning msg="cleaning up after shim disconnected" id=35d1351d8a0113c6aab84dc09f2b0b5dd6ac701dad90b958f74b53ca067fe52b namespace=k8s.io Mar 12 01:24:30.596125 containerd[1475]: time="2026-03-12T01:24:30.596027586Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:24:31.258343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35d1351d8a0113c6aab84dc09f2b0b5dd6ac701dad90b958f74b53ca067fe52b-rootfs.mount: Deactivated successfully. 
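The mount and scope unit names in these entries are systemd path escaping at work: "/" becomes "-", and a literal dash inside a path component is escaped as \x2d, which is why the tmpmount units appear as var-lib-containerd-tmpmounts-containerd\x2dmount<N>.mount and the rootfs mount above encodes /run/containerd/io.containerd.runtime.v2.task/k8s.io/<id>/rootfs. systemd-escape --path --unescape performs the reverse mapping; below is a simplified hand-rolled decoder (real unit names can carry other \xXX escapes that this sketch ignores):

package main

import (
	"fmt"
	"strings"
)

// unescapePathUnit reverses systemd's path escaping for mount unit names:
// "-" is a path separator and the four literal characters \x2d stand for
// a dash. Simplified decoder, sufficient for the units in this log.
func unescapePathUnit(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	parts := strings.Split(name, "-")
	for i, p := range parts {
		parts[i] = strings.ReplaceAll(p, `\x2d`, "-")
	}
	return "/" + strings.Join(parts, "/")
}

func main() {
	fmt.Println(unescapePathUnit(`var-lib-containerd-tmpmounts-containerd\x2dmount1819708841.mount`))
	// /var/lib/containerd/tmpmounts/containerd-mount1819708841
}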
Mar 12 01:24:32.548929 kubelet[2546]: E0312 01:24:32.548745 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3" Mar 12 01:24:33.578202 containerd[1475]: time="2026-03-12T01:24:33.578038896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:24:33.579527 containerd[1475]: time="2026-03-12T01:24:33.579398252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 12 01:24:33.580849 containerd[1475]: time="2026-03-12T01:24:33.580771380Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:24:33.583649 containerd[1475]: time="2026-03-12T01:24:33.583552799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:24:33.584570 containerd[1475]: time="2026-03-12T01:24:33.584513672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.251944863s" Mar 12 01:24:33.584657 containerd[1475]: time="2026-03-12T01:24:33.584573106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 12 01:24:33.588364 containerd[1475]: time="2026-03-12T01:24:33.587644528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 12 01:24:33.607795 containerd[1475]: time="2026-03-12T01:24:33.607754338Z" level=info msg="CreateContainer within sandbox \"03c21f7faf2f197161599030855fa0574d1628a2a69d6295160ea7b28ab4e509\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 12 01:24:33.629791 containerd[1475]: time="2026-03-12T01:24:33.629703713Z" level=info msg="CreateContainer within sandbox \"03c21f7faf2f197161599030855fa0574d1628a2a69d6295160ea7b28ab4e509\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e8fa60957783f6b50ccef8c510d88bf12f8b23b6300226b083e0fc977b614a54\"" Mar 12 01:24:33.633216 containerd[1475]: time="2026-03-12T01:24:33.630636353Z" level=info msg="StartContainer for \"e8fa60957783f6b50ccef8c510d88bf12f8b23b6300226b083e0fc977b614a54\"" Mar 12 01:24:33.681635 systemd[1]: Started cri-containerd-e8fa60957783f6b50ccef8c510d88bf12f8b23b6300226b083e0fc977b614a54.scope - libcontainer container e8fa60957783f6b50ccef8c510d88bf12f8b23b6300226b083e0fc977b614a54. 
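Each successful pull above is recorded under three names: the repo tag (ghcr.io/flatcar/calico/typha:v3.31.4), the repo digest (the immutable @sha256:d9396... form), and the image id (sha256:46766...), which containerd derives from the image's configuration blob rather than from the manifest. A rough stdlib-only splitter for the first two forms; production code would normally use a dedicated reference-parsing library such as github.com/distribution/reference:

package main

import (
	"fmt"
	"strings"
)

// splitRef breaks an image reference into repository plus either a tag or
// a digest. Simplified: assumes at most one "@" and treats a ":" after the
// last "/" as the tag separator (so registry ports are not mistaken for tags).
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		return ref[:i], "", ref[i+1:]
	}
	slash := strings.LastIndex(ref, "/")
	if i := strings.LastIndex(ref, ":"); i > slash {
		return ref[:i], ref[i+1:], ""
	}
	return ref, "", ""
}

func main() {
	fmt.Println(splitRef("ghcr.io/flatcar/calico/typha:v3.31.4"))
	fmt.Println(splitRef("ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130"))
}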
Mar 12 01:24:33.830163 containerd[1475]: time="2026-03-12T01:24:33.829962206Z" level=info msg="StartContainer for \"e8fa60957783f6b50ccef8c510d88bf12f8b23b6300226b083e0fc977b614a54\" returns successfully"
Mar 12 01:24:34.548898 kubelet[2546]: E0312 01:24:34.548783 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3"
Mar 12 01:24:34.759059 kubelet[2546]: E0312 01:24:34.758876 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:24:34.778116 kubelet[2546]: I0312 01:24:34.777970 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d7df569f9-rmb5q" podStartSLOduration=2.81152125 podStartE2EDuration="6.777908221s" podCreationTimestamp="2026-03-12 01:24:28 +0000 UTC" firstStartedPulling="2026-03-12 01:24:29.620453379 +0000 UTC m=+27.234816258" lastFinishedPulling="2026-03-12 01:24:33.58684035 +0000 UTC m=+31.201203229" observedRunningTime="2026-03-12 01:24:34.776416453 +0000 UTC m=+32.390779362" watchObservedRunningTime="2026-03-12 01:24:34.777908221 +0000 UTC m=+32.392271101"
Mar 12 01:24:35.770904 kubelet[2546]: E0312 01:24:35.770225 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:24:36.552434 kubelet[2546]: E0312 01:24:36.551514 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3"
Mar 12 01:24:36.773741 kubelet[2546]: E0312 01:24:36.773535 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:24:38.550230 kubelet[2546]: E0312 01:24:38.548827 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3"
Mar 12 01:24:40.551191 kubelet[2546]: E0312 01:24:40.550699 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3"
Mar 12 01:24:42.551931 kubelet[2546]: E0312 01:24:42.551551 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3"
Mar 12 01:24:44.553867 kubelet[2546]: E0312 01:24:44.553658 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3"
Mar 12 01:24:46.549133 kubelet[2546]: E0312 01:24:46.548942 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3"
Mar 12 01:24:48.550012 kubelet[2546]: E0312 01:24:48.549790 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3"
Mar 12 01:24:49.815416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount162869806.mount: Deactivated successfully.
Mar 12 01:24:50.026873 containerd[1475]: time="2026-03-12T01:24:50.026669573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 12 01:24:50.035193 containerd[1475]: time="2026-03-12T01:24:50.035084049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 16.447406254s"
Mar 12 01:24:50.035193 containerd[1475]: time="2026-03-12T01:24:50.035180410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 12 01:24:50.041394 containerd[1475]: time="2026-03-12T01:24:50.040928507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:24:50.042846 containerd[1475]: time="2026-03-12T01:24:50.042635788Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:24:50.044156 containerd[1475]: time="2026-03-12T01:24:50.043999499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:24:50.046333 containerd[1475]: time="2026-03-12T01:24:50.046118183Z" level=info msg="CreateContainer within sandbox \"c2e912a88b8e8301a83d7cb6b15e1479693faaede96a63df73673886f35ab382\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 12 01:24:50.077992 containerd[1475]: time="2026-03-12T01:24:50.077761577Z" level=info msg="CreateContainer within sandbox \"c2e912a88b8e8301a83d7cb6b15e1479693faaede96a63df73673886f35ab382\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"e44abdd114b55d6e425aa6f741b2e4e66055053b39af7fd3a2ee7ca9a3f31171\""
Mar 12 01:24:50.079873 containerd[1475]: time="2026-03-12T01:24:50.079723521Z" level=info msg="StartContainer for \"e44abdd114b55d6e425aa6f741b2e4e66055053b39af7fd3a2ee7ca9a3f31171\""
Mar 12 01:24:50.227733 systemd[1]: Started cri-containerd-e44abdd114b55d6e425aa6f741b2e4e66055053b39af7fd3a2ee7ca9a3f31171.scope - libcontainer container e44abdd114b55d6e425aa6f741b2e4e66055053b39af7fd3a2ee7ca9a3f31171.
Mar 12 01:24:50.323816 containerd[1475]: time="2026-03-12T01:24:50.323660790Z" level=info msg="StartContainer for \"e44abdd114b55d6e425aa6f741b2e4e66055053b39af7fd3a2ee7ca9a3f31171\" returns successfully"
Mar 12 01:24:50.421511 systemd[1]: cri-containerd-e44abdd114b55d6e425aa6f741b2e4e66055053b39af7fd3a2ee7ca9a3f31171.scope: Deactivated successfully.
Mar 12 01:24:50.512205 containerd[1475]: time="2026-03-12T01:24:50.512070524Z" level=info msg="shim disconnected" id=e44abdd114b55d6e425aa6f741b2e4e66055053b39af7fd3a2ee7ca9a3f31171 namespace=k8s.io
Mar 12 01:24:50.512205 containerd[1475]: time="2026-03-12T01:24:50.512203971Z" level=warning msg="cleaning up after shim disconnected" id=e44abdd114b55d6e425aa6f741b2e4e66055053b39af7fd3a2ee7ca9a3f31171 namespace=k8s.io
Mar 12 01:24:50.512622 containerd[1475]: time="2026-03-12T01:24:50.512219238Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:24:50.549381 kubelet[2546]: E0312 01:24:50.549003 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3"
Mar 12 01:24:50.826025 systemd[1]: run-containerd-runc-k8s.io-e44abdd114b55d6e425aa6f741b2e4e66055053b39af7fd3a2ee7ca9a3f31171-runc.yU1glT.mount: Deactivated successfully.
Mar 12 01:24:50.826562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e44abdd114b55d6e425aa6f741b2e4e66055053b39af7fd3a2ee7ca9a3f31171-rootfs.mount: Deactivated successfully.
Mar 12 01:24:51.451383 containerd[1475]: time="2026-03-12T01:24:51.449235112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 12 01:24:52.551531 kubelet[2546]: E0312 01:24:52.549776 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3"
Mar 12 01:24:54.550018 kubelet[2546]: E0312 01:24:54.549791 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3"
Mar 12 01:24:56.550545 kubelet[2546]: E0312 01:24:56.548390 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3"
Mar 12 01:24:58.552675 kubelet[2546]: E0312 01:24:58.552229 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3"
Mar 12 01:24:58.616455 containerd[1475]: time="2026-03-12T01:24:58.616189272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:24:58.617824 containerd[1475]: time="2026-03-12T01:24:58.617740123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 12 01:24:58.620950 containerd[1475]: time="2026-03-12T01:24:58.620709106Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:24:58.626151 containerd[1475]: time="2026-03-12T01:24:58.626046326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:24:58.627750 containerd[1475]: time="2026-03-12T01:24:58.627547176Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 7.178145398s"
Mar 12 01:24:58.627750 containerd[1475]: time="2026-03-12T01:24:58.627602776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 12 01:24:58.638561 containerd[1475]: time="2026-03-12T01:24:58.638384071Z" level=info msg="CreateContainer within sandbox \"c2e912a88b8e8301a83d7cb6b15e1479693faaede96a63df73673886f35ab382\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 12 01:24:58.691654 containerd[1475]: time="2026-03-12T01:24:58.691444288Z" level=info msg="CreateContainer within sandbox \"c2e912a88b8e8301a83d7cb6b15e1479693faaede96a63df73673886f35ab382\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d0aa60d789ecebdc8c0700acb451542976de408d2b40a9609fcdb7031373343e\""
Mar 12 01:24:58.693956 containerd[1475]: time="2026-03-12T01:24:58.693201639Z" level=info msg="StartContainer for \"d0aa60d789ecebdc8c0700acb451542976de408d2b40a9609fcdb7031373343e\""
Mar 12 01:24:58.778741 systemd[1]: Started cri-containerd-d0aa60d789ecebdc8c0700acb451542976de408d2b40a9609fcdb7031373343e.scope - libcontainer container d0aa60d789ecebdc8c0700acb451542976de408d2b40a9609fcdb7031373343e.
Mar 12 01:24:58.856561 containerd[1475]: time="2026-03-12T01:24:58.856109234Z" level=info msg="StartContainer for \"d0aa60d789ecebdc8c0700acb451542976de408d2b40a9609fcdb7031373343e\" returns successfully"
Mar 12 01:25:00.100993 systemd[1]: cri-containerd-d0aa60d789ecebdc8c0700acb451542976de408d2b40a9609fcdb7031373343e.scope: Deactivated successfully.
Mar 12 01:25:00.101472 systemd[1]: cri-containerd-d0aa60d789ecebdc8c0700acb451542976de408d2b40a9609fcdb7031373343e.scope: Consumed 1.435s CPU time.
Mar 12 01:25:00.148727 kubelet[2546]: I0312 01:25:00.146698 2546 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 12 01:25:00.149004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0aa60d789ecebdc8c0700acb451542976de408d2b40a9609fcdb7031373343e-rootfs.mount: Deactivated successfully.
Mar 12 01:25:00.224573 containerd[1475]: time="2026-03-12T01:25:00.224492381Z" level=info msg="shim disconnected" id=d0aa60d789ecebdc8c0700acb451542976de408d2b40a9609fcdb7031373343e namespace=k8s.io
Mar 12 01:25:00.225483 containerd[1475]: time="2026-03-12T01:25:00.225458029Z" level=warning msg="cleaning up after shim disconnected" id=d0aa60d789ecebdc8c0700acb451542976de408d2b40a9609fcdb7031373343e namespace=k8s.io
Mar 12 01:25:00.225543 containerd[1475]: time="2026-03-12T01:25:00.225483455Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:25:00.321462 systemd[1]: Created slice kubepods-burstable-podce9a7bc7_7141_49c7_bada_00125a5134ff.slice - libcontainer container kubepods-burstable-podce9a7bc7_7141_49c7_bada_00125a5134ff.slice.
Mar 12 01:25:00.349088 systemd[1]: Created slice kubepods-burstable-pod86acc654_d8de_47c3_aef7_67c1f1950085.slice - libcontainer container kubepods-burstable-pod86acc654_d8de_47c3_aef7_67c1f1950085.slice.
Mar 12 01:25:00.370184 systemd[1]: Created slice kubepods-besteffort-pod84316a8e_7adb_4ccf_b643_b729c328a05c.slice - libcontainer container kubepods-besteffort-pod84316a8e_7adb_4ccf_b643_b729c328a05c.slice.
Mar 12 01:25:00.386413 systemd[1]: Created slice kubepods-besteffort-poddbe4f146_b6bc_4508_8482_6ee38f916cab.slice - libcontainer container kubepods-besteffort-poddbe4f146_b6bc_4508_8482_6ee38f916cab.slice.
Mar 12 01:25:00.399186 systemd[1]: Created slice kubepods-besteffort-poda9774dfd_98ae_4f56_ac01_bd273c8754fb.slice - libcontainer container kubepods-besteffort-poda9774dfd_98ae_4f56_ac01_bd273c8754fb.slice.
Mar 12 01:25:00.406858 kubelet[2546]: I0312 01:25:00.406811 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7474w\" (UniqueName: \"kubernetes.io/projected/86acc654-d8de-47c3-aef7-67c1f1950085-kube-api-access-7474w\") pod \"coredns-66bc5c9577-xx7s9\" (UID: \"86acc654-d8de-47c3-aef7-67c1f1950085\") " pod="kube-system/coredns-66bc5c9577-xx7s9"
Mar 12 01:25:00.407496 kubelet[2546]: I0312 01:25:00.407169 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bxg2\" (UniqueName: \"kubernetes.io/projected/ce9a7bc7-7141-49c7-bada-00125a5134ff-kube-api-access-8bxg2\") pod \"coredns-66bc5c9577-qkd8c\" (UID: \"ce9a7bc7-7141-49c7-bada-00125a5134ff\") " pod="kube-system/coredns-66bc5c9577-qkd8c"
Mar 12 01:25:00.407496 kubelet[2546]: I0312 01:25:00.407216 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86acc654-d8de-47c3-aef7-67c1f1950085-config-volume\") pod \"coredns-66bc5c9577-xx7s9\" (UID: \"86acc654-d8de-47c3-aef7-67c1f1950085\") " pod="kube-system/coredns-66bc5c9577-xx7s9"
Mar 12 01:25:00.407496 kubelet[2546]: I0312 01:25:00.407410 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce9a7bc7-7141-49c7-bada-00125a5134ff-config-volume\") pod \"coredns-66bc5c9577-qkd8c\" (UID: \"ce9a7bc7-7141-49c7-bada-00125a5134ff\") " pod="kube-system/coredns-66bc5c9577-qkd8c"
Mar 12 01:25:00.419385 systemd[1]: Created slice kubepods-besteffort-pod8eb3dd39_a138_4acf_8d82_63348c5ba938.slice - libcontainer container kubepods-besteffort-pod8eb3dd39_a138_4acf_8d82_63348c5ba938.slice.
Mar 12 01:25:00.438879 systemd[1]: Created slice kubepods-besteffort-pod43f18370_3b65_47ef_902a_559a0936b656.slice - libcontainer container kubepods-besteffort-pod43f18370_3b65_47ef_902a_559a0936b656.slice.
Mar 12 01:25:00.508163 kubelet[2546]: I0312 01:25:00.508036 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84316a8e-7adb-4ccf-b643-b729c328a05c-calico-apiserver-certs\") pod \"calico-apiserver-6cc8b6d44c-rfh26\" (UID: \"84316a8e-7adb-4ccf-b643-b729c328a05c\") " pod="calico-system/calico-apiserver-6cc8b6d44c-rfh26"
Mar 12 01:25:00.508163 kubelet[2546]: I0312 01:25:00.508178 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t555g\" (UniqueName: \"kubernetes.io/projected/43f18370-3b65-47ef-902a-559a0936b656-kube-api-access-t555g\") pod \"calico-apiserver-6cc8b6d44c-pwhfw\" (UID: \"43f18370-3b65-47ef-902a-559a0936b656\") " pod="calico-system/calico-apiserver-6cc8b6d44c-pwhfw"
Mar 12 01:25:00.508650 kubelet[2546]: I0312 01:25:00.508212 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eb3dd39-a138-4acf-8d82-63348c5ba938-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-wqn8p\" (UID: \"8eb3dd39-a138-4acf-8d82-63348c5ba938\") " pod="calico-system/goldmane-cccfbd5cf-wqn8p"
Mar 12 01:25:00.508650 kubelet[2546]: I0312 01:25:00.508401 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dbe4f146-b6bc-4508-8482-6ee38f916cab-whisker-backend-key-pair\") pod \"whisker-7b87b6b94d-6dx4w\" (UID: \"dbe4f146-b6bc-4508-8482-6ee38f916cab\") " pod="calico-system/whisker-7b87b6b94d-6dx4w"
Mar 12 01:25:00.508650 kubelet[2546]: I0312 01:25:00.508439 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8eb3dd39-a138-4acf-8d82-63348c5ba938-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-wqn8p\" (UID: \"8eb3dd39-a138-4acf-8d82-63348c5ba938\") " pod="calico-system/goldmane-cccfbd5cf-wqn8p"
Mar 12 01:25:00.509932 kubelet[2546]: I0312 01:25:00.509181 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbe4f146-b6bc-4508-8482-6ee38f916cab-whisker-ca-bundle\") pod \"whisker-7b87b6b94d-6dx4w\" (UID: \"dbe4f146-b6bc-4508-8482-6ee38f916cab\") " pod="calico-system/whisker-7b87b6b94d-6dx4w"
Mar 12 01:25:00.509932 kubelet[2546]: I0312 01:25:00.509436 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9774dfd-98ae-4f56-ac01-bd273c8754fb-tigera-ca-bundle\") pod \"calico-kube-controllers-84b6b7d8f4-gct62\" (UID: \"a9774dfd-98ae-4f56-ac01-bd273c8754fb\") " pod="calico-system/calico-kube-controllers-84b6b7d8f4-gct62"
Mar 12 01:25:00.509932 kubelet[2546]: I0312 01:25:00.509497 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/dbe4f146-b6bc-4508-8482-6ee38f916cab-nginx-config\") pod \"whisker-7b87b6b94d-6dx4w\" (UID: \"dbe4f146-b6bc-4508-8482-6ee38f916cab\") " pod="calico-system/whisker-7b87b6b94d-6dx4w"
Mar 12 01:25:00.509932 kubelet[2546]: I0312 01:25:00.509641 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5mzt\" (UniqueName: \"kubernetes.io/projected/84316a8e-7adb-4ccf-b643-b729c328a05c-kube-api-access-f5mzt\") pod \"calico-apiserver-6cc8b6d44c-rfh26\" (UID: \"84316a8e-7adb-4ccf-b643-b729c328a05c\") " pod="calico-system/calico-apiserver-6cc8b6d44c-rfh26"
Mar 12 01:25:00.509932 kubelet[2546]: I0312 01:25:00.509679 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eb3dd39-a138-4acf-8d82-63348c5ba938-config\") pod \"goldmane-cccfbd5cf-wqn8p\" (UID: \"8eb3dd39-a138-4acf-8d82-63348c5ba938\") " pod="calico-system/goldmane-cccfbd5cf-wqn8p"
Mar 12 01:25:00.510411 kubelet[2546]: I0312 01:25:00.509709 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2zhj\" (UniqueName: \"kubernetes.io/projected/8eb3dd39-a138-4acf-8d82-63348c5ba938-kube-api-access-r2zhj\") pod \"goldmane-cccfbd5cf-wqn8p\" (UID: \"8eb3dd39-a138-4acf-8d82-63348c5ba938\") " pod="calico-system/goldmane-cccfbd5cf-wqn8p"
Mar 12 01:25:00.510411 kubelet[2546]: I0312 01:25:00.509773 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lrtr\" (UniqueName: \"kubernetes.io/projected/dbe4f146-b6bc-4508-8482-6ee38f916cab-kube-api-access-2lrtr\") pod \"whisker-7b87b6b94d-6dx4w\" (UID: \"dbe4f146-b6bc-4508-8482-6ee38f916cab\") " pod="calico-system/whisker-7b87b6b94d-6dx4w"
Mar 12 01:25:00.510411 kubelet[2546]: I0312 01:25:00.509806 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lgtw\" (UniqueName: \"kubernetes.io/projected/a9774dfd-98ae-4f56-ac01-bd273c8754fb-kube-api-access-6lgtw\") pod \"calico-kube-controllers-84b6b7d8f4-gct62\" (UID: \"a9774dfd-98ae-4f56-ac01-bd273c8754fb\") " pod="calico-system/calico-kube-controllers-84b6b7d8f4-gct62"
Mar 12 01:25:00.510411 kubelet[2546]: I0312 01:25:00.509839 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/43f18370-3b65-47ef-902a-559a0936b656-calico-apiserver-certs\") pod \"calico-apiserver-6cc8b6d44c-pwhfw\" (UID: \"43f18370-3b65-47ef-902a-559a0936b656\") " pod="calico-system/calico-apiserver-6cc8b6d44c-pwhfw"
Mar 12 01:25:00.575805 systemd[1]: Created slice kubepods-besteffort-poded55f459_74a9_4d53_811f_2b6098967bb3.slice - libcontainer container kubepods-besteffort-poded55f459_74a9_4d53_811f_2b6098967bb3.slice.
Mar 12 01:25:00.594146 containerd[1475]: time="2026-03-12T01:25:00.593650489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5nns,Uid:ed55f459-74a9-4d53-811f-2b6098967bb3,Namespace:calico-system,Attempt:0,}"
Mar 12 01:25:00.597551 containerd[1475]: time="2026-03-12T01:25:00.597100380Z" level=info msg="CreateContainer within sandbox \"c2e912a88b8e8301a83d7cb6b15e1479693faaede96a63df73673886f35ab382\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 12 01:25:00.665553 kubelet[2546]: E0312 01:25:00.663596 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:25:00.666515 containerd[1475]: time="2026-03-12T01:25:00.665795809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qkd8c,Uid:ce9a7bc7-7141-49c7-bada-00125a5134ff,Namespace:kube-system,Attempt:0,}"
Mar 12 01:25:00.681947 kubelet[2546]: E0312 01:25:00.681577 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:25:00.683649 containerd[1475]: time="2026-03-12T01:25:00.683401520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xx7s9,Uid:86acc654-d8de-47c3-aef7-67c1f1950085,Namespace:kube-system,Attempt:0,}"
Mar 12 01:25:00.697990 containerd[1475]: time="2026-03-12T01:25:00.697158335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8b6d44c-rfh26,Uid:84316a8e-7adb-4ccf-b643-b729c328a05c,Namespace:calico-system,Attempt:0,}"
Mar 12 01:25:00.707626 containerd[1475]: time="2026-03-12T01:25:00.707129223Z" level=info msg="CreateContainer within sandbox \"c2e912a88b8e8301a83d7cb6b15e1479693faaede96a63df73673886f35ab382\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"21350bb8d2808f0c40913d7aba625ec9281b0541170038cb57858d03120c8f3c\""
Mar 12 01:25:00.711146 containerd[1475]: time="2026-03-12T01:25:00.708669038Z" level=info msg="StartContainer for \"21350bb8d2808f0c40913d7aba625ec9281b0541170038cb57858d03120c8f3c\""
Mar 12 01:25:00.714896 containerd[1475]: time="2026-03-12T01:25:00.714783410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b87b6b94d-6dx4w,Uid:dbe4f146-b6bc-4508-8482-6ee38f916cab,Namespace:calico-system,Attempt:0,}"
Mar 12 01:25:00.736775 containerd[1475]: time="2026-03-12T01:25:00.733774485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84b6b7d8f4-gct62,Uid:a9774dfd-98ae-4f56-ac01-bd273c8754fb,Namespace:calico-system,Attempt:0,}"
Mar 12 01:25:00.751096 containerd[1475]: time="2026-03-12T01:25:00.750906144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-wqn8p,Uid:8eb3dd39-a138-4acf-8d82-63348c5ba938,Namespace:calico-system,Attempt:0,}"
Mar 12 01:25:00.757933 containerd[1475]: time="2026-03-12T01:25:00.757795880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8b6d44c-pwhfw,Uid:43f18370-3b65-47ef-902a-559a0936b656,Namespace:calico-system,Attempt:0,}"
Mar 12 01:25:00.890685 systemd[1]: Started cri-containerd-21350bb8d2808f0c40913d7aba625ec9281b0541170038cb57858d03120c8f3c.scope - libcontainer container 21350bb8d2808f0c40913d7aba625ec9281b0541170038cb57858d03120c8f3c.
Mar 12 01:25:01.243573 containerd[1475]: time="2026-03-12T01:25:01.242651281Z" level=info msg="StartContainer for \"21350bb8d2808f0c40913d7aba625ec9281b0541170038cb57858d03120c8f3c\" returns successfully"
Mar 12 01:25:01.457439 containerd[1475]: time="2026-03-12T01:25:01.454213838Z" level=error msg="Failed to destroy network for sandbox \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:01.475397 containerd[1475]: time="2026-03-12T01:25:01.463570146Z" level=error msg="encountered an error cleaning up failed sandbox \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:01.475397 containerd[1475]: time="2026-03-12T01:25:01.465476680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5nns,Uid:ed55f459-74a9-4d53-811f-2b6098967bb3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:01.473785 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22-shm.mount: Deactivated successfully.
Mar 12 01:25:01.554047 kubelet[2546]: E0312 01:25:01.531729 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:01.554047 kubelet[2546]: E0312 01:25:01.531887 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s5nns"
Mar 12 01:25:01.554047 kubelet[2546]: E0312 01:25:01.531931 2546 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s5nns"
Mar 12 01:25:01.555742 kubelet[2546]: E0312 01:25:01.532064 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s5nns_calico-system(ed55f459-74a9-4d53-811f-2b6098967bb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s5nns_calico-system(ed55f459-74a9-4d53-811f-2b6098967bb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3"
Mar 12 01:25:01.674464 kubelet[2546]: I0312 01:25:01.648232 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22"
Mar 12 01:25:01.725824 kubelet[2546]: I0312 01:25:01.725654 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-66fjw" podStartSLOduration=3.675863063 podStartE2EDuration="32.725628108s" podCreationTimestamp="2026-03-12 01:24:29 +0000 UTC" firstStartedPulling="2026-03-12 01:24:29.578994153 +0000 UTC m=+27.193357031" lastFinishedPulling="2026-03-12 01:24:58.628759197 +0000 UTC m=+56.243122076" observedRunningTime="2026-03-12 01:25:01.70976869 +0000 UTC m=+59.324131639" watchObservedRunningTime="2026-03-12 01:25:01.725628108 +0000 UTC m=+59.339990987"
Mar 12 01:25:01.748731 containerd[1475]: time="2026-03-12T01:25:01.743158117Z" level=info msg="StopPodSandbox for \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\""
Mar 12 01:25:01.748731 containerd[1475]: time="2026-03-12T01:25:01.745650794Z" level=info msg="Ensure that sandbox 0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22 in task-service has been cleanup successfully"
Mar 12 01:25:01.811709 containerd[1475]: time="2026-03-12T01:25:01.810989137Z" level=error msg="Failed to destroy network for sandbox \"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:01.823813 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa-shm.mount: Deactivated successfully.
Mar 12 01:25:01.837353 containerd[1475]: time="2026-03-12T01:25:01.833855834Z" level=error msg="encountered an error cleaning up failed sandbox \"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:01.837353 containerd[1475]: time="2026-03-12T01:25:01.834026839Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xx7s9,Uid:86acc654-d8de-47c3-aef7-67c1f1950085,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:01.837541 kubelet[2546]: E0312 01:25:01.834583 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:01.837541 kubelet[2546]: E0312 01:25:01.834676 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xx7s9"
Mar 12 01:25:01.837541 kubelet[2546]: E0312 01:25:01.834713 2546 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xx7s9"
Mar 12 01:25:01.837683 kubelet[2546]: E0312 01:25:01.834798 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xx7s9_kube-system(86acc654-d8de-47c3-aef7-67c1f1950085)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xx7s9_kube-system(86acc654-d8de-47c3-aef7-67c1f1950085)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xx7s9" podUID="86acc654-d8de-47c3-aef7-67c1f1950085"
Mar 12 01:25:01.871472 containerd[1475]: time="2026-03-12T01:25:01.870397297Z" level=error msg="Failed to destroy network for sandbox \"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:01.891416 containerd[1475]: time="2026-03-12T01:25:01.890028941Z" level=error msg="encountered an error cleaning up failed sandbox \"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:01.891416 containerd[1475]: time="2026-03-12T01:25:01.890139890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84b6b7d8f4-gct62,Uid:a9774dfd-98ae-4f56-ac01-bd273c8754fb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:01.894151 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31-shm.mount: Deactivated successfully.
Mar 12 01:25:01.928321 kubelet[2546]: E0312 01:25:01.928098 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:01.932457 kubelet[2546]: E0312 01:25:01.929436 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84b6b7d8f4-gct62"
Mar 12 01:25:01.932457 kubelet[2546]: E0312 01:25:01.929528 2546 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84b6b7d8f4-gct62"
Mar 12 01:25:01.932457 kubelet[2546]: E0312 01:25:01.929624 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84b6b7d8f4-gct62_calico-system(a9774dfd-98ae-4f56-ac01-bd273c8754fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84b6b7d8f4-gct62_calico-system(a9774dfd-98ae-4f56-ac01-bd273c8754fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84b6b7d8f4-gct62" podUID="a9774dfd-98ae-4f56-ac01-bd273c8754fb"
Mar 12 01:25:01.936522 containerd[1475]: time="2026-03-12T01:25:01.936371674Z" level=error msg="Failed to destroy network for sandbox \"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:01.961382 containerd[1475]: time="2026-03-12T01:25:01.959347796Z" level=error msg="encountered an error cleaning up failed sandbox \"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:01.961382 containerd[1475]: time="2026-03-12T01:25:01.959462212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qkd8c,Uid:ce9a7bc7-7141-49c7-bada-00125a5134ff,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:01.961617 kubelet[2546]: E0312 01:25:01.959789 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:01.961617 kubelet[2546]: E0312 01:25:01.959878 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qkd8c"
Mar 12 01:25:01.961617 kubelet[2546]: E0312 01:25:01.959919 2546 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qkd8c"
Mar 12 01:25:01.961831 kubelet[2546]: E0312 01:25:01.960008 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-qkd8c_kube-system(ce9a7bc7-7141-49c7-bada-00125a5134ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-qkd8c_kube-system(ce9a7bc7-7141-49c7-bada-00125a5134ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qkd8c" podUID="ce9a7bc7-7141-49c7-bada-00125a5134ff"
Mar 12 01:25:02.006148 containerd[1475]: time="2026-03-12T01:25:02.006074309Z" level=error msg="Failed to destroy network for sandbox \"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:02.016654 containerd[1475]: time="2026-03-12T01:25:02.016507730Z" level=error msg="Failed to destroy network for sandbox \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:02.019930 containerd[1475]: time="2026-03-12T01:25:02.019562963Z" level=error msg="encountered an error cleaning up failed sandbox \"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:02.024972 containerd[1475]: time="2026-03-12T01:25:02.024078419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8b6d44c-rfh26,Uid:84316a8e-7adb-4ccf-b643-b729c328a05c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:02.032901 kubelet[2546]: E0312 01:25:02.032839 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:02.033145 kubelet[2546]: E0312 01:25:02.032928 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6cc8b6d44c-rfh26"
Mar 12 01:25:02.033145 kubelet[2546]: E0312 01:25:02.032962 2546 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6cc8b6d44c-rfh26"
Mar 12 01:25:02.033145 kubelet[2546]: E0312 01:25:02.033044 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cc8b6d44c-rfh26_calico-system(84316a8e-7adb-4ccf-b643-b729c328a05c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cc8b6d44c-rfh26_calico-system(84316a8e-7adb-4ccf-b643-b729c328a05c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6cc8b6d44c-rfh26" podUID="84316a8e-7adb-4ccf-b643-b729c328a05c"
Mar 12 01:25:02.033831 containerd[1475]: time="2026-03-12T01:25:02.030036527Z" level=error msg="encountered an error cleaning up failed sandbox \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:02.038619 containerd[1475]: time="2026-03-12T01:25:02.034882987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b87b6b94d-6dx4w,Uid:dbe4f146-b6bc-4508-8482-6ee38f916cab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:02.038889 kubelet[2546]: E0312 01:25:02.035425 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:02.038889 kubelet[2546]: E0312 01:25:02.035493 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b87b6b94d-6dx4w"
Mar 12 01:25:02.038889 kubelet[2546]: E0312 01:25:02.035525 2546 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b87b6b94d-6dx4w"
Mar 12 01:25:02.039069 kubelet[2546]: E0312 01:25:02.035597 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b87b6b94d-6dx4w_calico-system(dbe4f146-b6bc-4508-8482-6ee38f916cab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b87b6b94d-6dx4w_calico-system(dbe4f146-b6bc-4508-8482-6ee38f916cab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b87b6b94d-6dx4w" podUID="dbe4f146-b6bc-4508-8482-6ee38f916cab"
Mar 12 01:25:02.046678 containerd[1475]: time="2026-03-12T01:25:02.046432984Z" level=error msg="StopPodSandbox for \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\" failed" error="failed to destroy network for sandbox \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:02.051840 kubelet[2546]: E0312 01:25:02.048163 2546 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22"
Mar 12 01:25:02.051968 kubelet[2546]: E0312 01:25:02.051409 2546 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22"}
Mar 12 01:25:02.052669 kubelet[2546]: E0312 01:25:02.052181 2546 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ed55f459-74a9-4d53-811f-2b6098967bb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Mar 12 01:25:02.053057 kubelet[2546]: E0312 01:25:02.052744 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ed55f459-74a9-4d53-811f-2b6098967bb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s5nns" podUID="ed55f459-74a9-4d53-811f-2b6098967bb3"
Mar 12 01:25:02.094864 containerd[1475]: time="2026-03-12T01:25:02.092722058Z" level=error msg="Failed to destroy network for sandbox \"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:02.102343 containerd[1475]: time="2026-03-12T01:25:02.097179800Z" level=error msg="encountered an error cleaning up failed sandbox \"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:02.102343 containerd[1475]: time="2026-03-12T01:25:02.098379265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-wqn8p,Uid:8eb3dd39-a138-4acf-8d82-63348c5ba938,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:02.102612 kubelet[2546]: E0312 01:25:02.099408 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:25:02.102612 kubelet[2546]: E0312 01:25:02.099500 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-wqn8p"
Mar 12 01:25:02.102612 kubelet[2546]: E0312 01:25:02.099534 2546 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-wqn8p"
Mar 12 01:25:02.102787 kubelet[2546]: E0312 01:25:02.099618 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-wqn8p_calico-system(8eb3dd39-a138-4acf-8d82-63348c5ba938)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-wqn8p_calico-system(8eb3dd39-a138-4acf-8d82-63348c5ba938)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-wqn8p" podUID="8eb3dd39-a138-4acf-8d82-63348c5ba938"
Mar 12 01:25:02.166808 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f-shm.mount: Deactivated successfully.
Mar 12 01:25:02.167006 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c-shm.mount: Deactivated successfully.
Mar 12 01:25:02.167132 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4-shm.mount: Deactivated successfully.
Mar 12 01:25:02.167375 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3-shm.mount: Deactivated successfully.
Mar 12 01:25:02.183403 containerd[1475]: time="2026-03-12T01:25:02.181449921Z" level=error msg="Failed to destroy network for sandbox \"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:25:02.186418 containerd[1475]: time="2026-03-12T01:25:02.184889202Z" level=error msg="encountered an error cleaning up failed sandbox \"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:25:02.186418 containerd[1475]: time="2026-03-12T01:25:02.184962304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8b6d44c-pwhfw,Uid:43f18370-3b65-47ef-902a-559a0936b656,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:25:02.186682 kubelet[2546]: E0312 01:25:02.185672 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:25:02.186682 kubelet[2546]: E0312 01:25:02.185749 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6cc8b6d44c-pwhfw" Mar 12 01:25:02.186682 kubelet[2546]: E0312 01:25:02.185778 2546 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6cc8b6d44c-pwhfw" Mar 12 01:25:02.186834 kubelet[2546]: E0312 01:25:02.185893 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cc8b6d44c-pwhfw_calico-system(43f18370-3b65-47ef-902a-559a0936b656)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cc8b6d44c-pwhfw_calico-system(43f18370-3b65-47ef-902a-559a0936b656)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-apiserver-6cc8b6d44c-pwhfw" podUID="43f18370-3b65-47ef-902a-559a0936b656" Mar 12 01:25:02.189914 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4-shm.mount: Deactivated successfully. Mar 12 01:25:02.684451 kubelet[2546]: I0312 01:25:02.684080 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Mar 12 01:25:02.695549 containerd[1475]: time="2026-03-12T01:25:02.685641354Z" level=info msg="StopPodSandbox for \"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\"" Mar 12 01:25:02.695549 containerd[1475]: time="2026-03-12T01:25:02.685908874Z" level=info msg="Ensure that sandbox e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4 in task-service has been cleanup successfully" Mar 12 01:25:02.728996 kubelet[2546]: I0312 01:25:02.727513 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Mar 12 01:25:02.729342 containerd[1475]: time="2026-03-12T01:25:02.728540149Z" level=info msg="StopPodSandbox for \"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\"" Mar 12 01:25:02.729342 containerd[1475]: time="2026-03-12T01:25:02.728896258Z" level=info msg="Ensure that sandbox a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f in task-service has been cleanup successfully" Mar 12 01:25:02.739373 kubelet[2546]: I0312 01:25:02.732414 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Mar 12 01:25:02.739552 containerd[1475]: time="2026-03-12T01:25:02.733015172Z" level=info msg="StopPodSandbox for \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\"" Mar 12 01:25:02.746556 kubelet[2546]: I0312 01:25:02.745958 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Mar 12 01:25:02.752408 containerd[1475]: time="2026-03-12T01:25:02.733176211Z" level=info msg="Ensure that sandbox 916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c in task-service has been cleanup successfully" Mar 12 01:25:02.753537 containerd[1475]: time="2026-03-12T01:25:02.753425052Z" level=info msg="StopPodSandbox for \"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\"" Mar 12 01:25:02.754106 containerd[1475]: time="2026-03-12T01:25:02.754011546Z" level=info msg="Ensure that sandbox 851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31 in task-service has been cleanup successfully" Mar 12 01:25:02.810109 kubelet[2546]: I0312 01:25:02.809348 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Mar 12 01:25:02.811224 containerd[1475]: time="2026-03-12T01:25:02.811122285Z" level=info msg="StopPodSandbox for \"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\"" Mar 12 01:25:02.816534 containerd[1475]: time="2026-03-12T01:25:02.816484482Z" level=info msg="Ensure that sandbox 5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4 in task-service has been cleanup successfully" Mar 12 01:25:02.835164 kubelet[2546]: I0312 01:25:02.833741 2546 pod_container_deletor.go:80] "Container not found 
in pod's containers" containerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Mar 12 01:25:02.840728 containerd[1475]: time="2026-03-12T01:25:02.839856512Z" level=info msg="StopPodSandbox for \"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\"" Mar 12 01:25:02.854154 kubelet[2546]: I0312 01:25:02.854121 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Mar 12 01:25:02.856865 containerd[1475]: time="2026-03-12T01:25:02.855349007Z" level=info msg="Ensure that sandbox 9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa in task-service has been cleanup successfully" Mar 12 01:25:02.864248 containerd[1475]: time="2026-03-12T01:25:02.861939894Z" level=info msg="StopPodSandbox for \"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\"" Mar 12 01:25:02.864248 containerd[1475]: time="2026-03-12T01:25:02.862392046Z" level=info msg="Ensure that sandbox 8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3 in task-service has been cleanup successfully" Mar 12 01:25:03.132139 systemd[1]: run-containerd-runc-k8s.io-21350bb8d2808f0c40913d7aba625ec9281b0541170038cb57858d03120c8f3c-runc.FDKvUH.mount: Deactivated successfully. Mar 12 01:25:03.774872 containerd[1475]: 2026-03-12 01:25:03.261 [INFO][3756] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Mar 12 01:25:03.774872 containerd[1475]: 2026-03-12 01:25:03.262 [INFO][3756] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" iface="eth0" netns="/var/run/netns/cni-93c4629d-b466-ece9-afc9-79c97f1170b2" Mar 12 01:25:03.774872 containerd[1475]: 2026-03-12 01:25:03.262 [INFO][3756] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" iface="eth0" netns="/var/run/netns/cni-93c4629d-b466-ece9-afc9-79c97f1170b2" Mar 12 01:25:03.774872 containerd[1475]: 2026-03-12 01:25:03.264 [INFO][3756] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" iface="eth0" netns="/var/run/netns/cni-93c4629d-b466-ece9-afc9-79c97f1170b2" Mar 12 01:25:03.774872 containerd[1475]: 2026-03-12 01:25:03.264 [INFO][3756] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Mar 12 01:25:03.774872 containerd[1475]: 2026-03-12 01:25:03.264 [INFO][3756] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Mar 12 01:25:03.774872 containerd[1475]: 2026-03-12 01:25:03.572 [INFO][3865] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" HandleID="k8s-pod-network.e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" Mar 12 01:25:03.774872 containerd[1475]: 2026-03-12 01:25:03.580 [INFO][3865] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:03.774872 containerd[1475]: 2026-03-12 01:25:03.581 [INFO][3865] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
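With calico/node now up, the queued StopPodSandbox calls replay CNI DEL for each stale sandbox, and the teardown is deliberately idempotent: a veth that is already gone is "Nothing to do", and an IPAM release for an address that was never committed (the WARNING lines below) is logged and ignored. A sketch of that DEL shape; deleteVeth and releaseIP here are hypothetical stand-ins for the dataplane and IPAM calls shown in the log:

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    var errNotFound = errors.New("not found")

    // deleteVeth is a stand-in for entering the netns and removing the
    // workload's interface; a missing netns counts as already done.
    func deleteVeth(netnsPath string) error {
        if _, err := os.Stat(netnsPath); os.IsNotExist(err) {
            return errNotFound
        }
        // ... enter netns and delete the interface ...
        return nil
    }

    // releaseIP is a stand-in for the IPAM release; here it pretends no
    // allocation was ever committed, as in the failed-ADD case above.
    func releaseIP(handleID string) error {
        return errNotFound
    }

    // cniDel mirrors CNI DEL semantics: DEL must succeed even when an
    // earlier, failed ADD left nothing behind to clean up.
    func cniDel(netnsPath, handleID string) error {
        if err := deleteVeth(netnsPath); err != nil {
            if !errors.Is(err, errNotFound) {
                return err
            }
            fmt.Println("Workload's veth was already gone. Nothing to do.")
        }
        if err := releaseIP(handleID); err != nil {
            if !errors.Is(err, errNotFound) {
                return err
            }
            fmt.Println("Asked to release address but it doesn't exist. Ignoring")
        }
        return nil
    }

    func main() {
        if err := cniDel("/var/run/netns/cni-example", "k8s-pod-network.example"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
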
Mar 12 01:25:03.774872 containerd[1475]: 2026-03-12 01:25:03.639 [WARNING][3865] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" HandleID="k8s-pod-network.e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" Mar 12 01:25:03.774872 containerd[1475]: 2026-03-12 01:25:03.639 [INFO][3865] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" HandleID="k8s-pod-network.e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" Mar 12 01:25:03.774872 containerd[1475]: 2026-03-12 01:25:03.662 [INFO][3865] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:25:03.774872 containerd[1475]: 2026-03-12 01:25:03.714 [INFO][3756] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Mar 12 01:25:03.784213 containerd[1475]: time="2026-03-12T01:25:03.780372294Z" level=info msg="TearDown network for sandbox \"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\" successfully" Mar 12 01:25:03.784213 containerd[1475]: time="2026-03-12T01:25:03.780424098Z" level=info msg="StopPodSandbox for \"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\" returns successfully" Mar 12 01:25:03.782742 systemd[1]: run-netns-cni\x2d93c4629d\x2db466\x2dece9\x2dafc9\x2d79c97f1170b2.mount: Deactivated successfully. Mar 12 01:25:03.808730 containerd[1475]: time="2026-03-12T01:25:03.808603844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8b6d44c-pwhfw,Uid:43f18370-3b65-47ef-902a-559a0936b656,Namespace:calico-system,Attempt:1,}" Mar 12 01:25:03.814787 containerd[1475]: 2026-03-12 01:25:03.287 [INFO][3768] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Mar 12 01:25:03.814787 containerd[1475]: 2026-03-12 01:25:03.290 [INFO][3768] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" iface="eth0" netns="/var/run/netns/cni-84a9ee54-3749-d448-5d52-f19abdcd6007" Mar 12 01:25:03.814787 containerd[1475]: 2026-03-12 01:25:03.290 [INFO][3768] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" iface="eth0" netns="/var/run/netns/cni-84a9ee54-3749-d448-5d52-f19abdcd6007" Mar 12 01:25:03.814787 containerd[1475]: 2026-03-12 01:25:03.295 [INFO][3768] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" iface="eth0" netns="/var/run/netns/cni-84a9ee54-3749-d448-5d52-f19abdcd6007" Mar 12 01:25:03.814787 containerd[1475]: 2026-03-12 01:25:03.295 [INFO][3768] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Mar 12 01:25:03.814787 containerd[1475]: 2026-03-12 01:25:03.295 [INFO][3768] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Mar 12 01:25:03.814787 containerd[1475]: 2026-03-12 01:25:03.643 [INFO][3875] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" HandleID="k8s-pod-network.851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Workload="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" Mar 12 01:25:03.814787 containerd[1475]: 2026-03-12 01:25:03.644 [INFO][3875] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:03.814787 containerd[1475]: 2026-03-12 01:25:03.662 [INFO][3875] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:25:03.814787 containerd[1475]: 2026-03-12 01:25:03.742 [WARNING][3875] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" HandleID="k8s-pod-network.851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Workload="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" Mar 12 01:25:03.814787 containerd[1475]: 2026-03-12 01:25:03.743 [INFO][3875] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" HandleID="k8s-pod-network.851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Workload="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" Mar 12 01:25:03.814787 containerd[1475]: 2026-03-12 01:25:03.753 [INFO][3875] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:25:03.814787 containerd[1475]: 2026-03-12 01:25:03.794 [INFO][3768] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Mar 12 01:25:03.816046 containerd[1475]: time="2026-03-12T01:25:03.815878499Z" level=info msg="TearDown network for sandbox \"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\" successfully" Mar 12 01:25:03.816046 containerd[1475]: time="2026-03-12T01:25:03.815926366Z" level=info msg="StopPodSandbox for \"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\" returns successfully" Mar 12 01:25:03.829056 systemd[1]: run-netns-cni\x2d84a9ee54\x2d3749\x2dd448\x2d5d52\x2df19abdcd6007.mount: Deactivated successfully. Mar 12 01:25:03.836204 containerd[1475]: time="2026-03-12T01:25:03.836023840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84b6b7d8f4-gct62,Uid:a9774dfd-98ae-4f56-ac01-bd273c8754fb,Namespace:calico-system,Attempt:1,}" Mar 12 01:25:03.932355 containerd[1475]: 2026-03-12 01:25:03.286 [INFO][3803] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Mar 12 01:25:03.932355 containerd[1475]: 2026-03-12 01:25:03.289 [INFO][3803] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" iface="eth0" netns="/var/run/netns/cni-001918ac-06c7-cf92-c9ab-f782f1acb093" Mar 12 01:25:03.932355 containerd[1475]: 2026-03-12 01:25:03.290 [INFO][3803] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" iface="eth0" netns="/var/run/netns/cni-001918ac-06c7-cf92-c9ab-f782f1acb093" Mar 12 01:25:03.932355 containerd[1475]: 2026-03-12 01:25:03.299 [INFO][3803] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" iface="eth0" netns="/var/run/netns/cni-001918ac-06c7-cf92-c9ab-f782f1acb093" Mar 12 01:25:03.932355 containerd[1475]: 2026-03-12 01:25:03.301 [INFO][3803] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Mar 12 01:25:03.932355 containerd[1475]: 2026-03-12 01:25:03.302 [INFO][3803] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Mar 12 01:25:03.932355 containerd[1475]: 2026-03-12 01:25:03.838 [INFO][3876] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" HandleID="k8s-pod-network.9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Workload="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" Mar 12 01:25:03.932355 containerd[1475]: 2026-03-12 01:25:03.853 [INFO][3876] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:03.932355 containerd[1475]: 2026-03-12 01:25:03.853 [INFO][3876] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:25:03.932355 containerd[1475]: 2026-03-12 01:25:03.880 [WARNING][3876] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" HandleID="k8s-pod-network.9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Workload="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" Mar 12 01:25:03.932355 containerd[1475]: 2026-03-12 01:25:03.880 [INFO][3876] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" HandleID="k8s-pod-network.9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Workload="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" Mar 12 01:25:03.932355 containerd[1475]: 2026-03-12 01:25:03.895 [INFO][3876] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:25:03.932355 containerd[1475]: 2026-03-12 01:25:03.918 [INFO][3803] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Mar 12 01:25:03.932355 containerd[1475]: time="2026-03-12T01:25:03.929232389Z" level=info msg="TearDown network for sandbox \"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\" successfully" Mar 12 01:25:03.932355 containerd[1475]: time="2026-03-12T01:25:03.929400230Z" level=info msg="StopPodSandbox for \"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\" returns successfully" Mar 12 01:25:03.944444 kubelet[2546]: E0312 01:25:03.942039 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:25:03.943676 systemd[1]: run-netns-cni\x2d001918ac\x2d06c7\x2dcf92\x2dc9ab\x2df782f1acb093.mount: Deactivated successfully. Mar 12 01:25:03.955771 containerd[1475]: time="2026-03-12T01:25:03.951538672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xx7s9,Uid:86acc654-d8de-47c3-aef7-67c1f1950085,Namespace:kube-system,Attempt:1,}" Mar 12 01:25:03.997844 containerd[1475]: 2026-03-12 01:25:03.376 [INFO][3750] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Mar 12 01:25:03.997844 containerd[1475]: 2026-03-12 01:25:03.380 [INFO][3750] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" iface="eth0" netns="/var/run/netns/cni-1f867b96-a7e3-6fcd-2617-a890b9775292" Mar 12 01:25:03.997844 containerd[1475]: 2026-03-12 01:25:03.382 [INFO][3750] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" iface="eth0" netns="/var/run/netns/cni-1f867b96-a7e3-6fcd-2617-a890b9775292" Mar 12 01:25:03.997844 containerd[1475]: 2026-03-12 01:25:03.384 [INFO][3750] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" iface="eth0" netns="/var/run/netns/cni-1f867b96-a7e3-6fcd-2617-a890b9775292" Mar 12 01:25:03.997844 containerd[1475]: 2026-03-12 01:25:03.389 [INFO][3750] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Mar 12 01:25:03.997844 containerd[1475]: 2026-03-12 01:25:03.389 [INFO][3750] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Mar 12 01:25:03.997844 containerd[1475]: 2026-03-12 01:25:03.897 [INFO][3894] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" HandleID="k8s-pod-network.a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Workload="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" Mar 12 01:25:03.997844 containerd[1475]: 2026-03-12 01:25:03.901 [INFO][3894] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:03.997844 containerd[1475]: 2026-03-12 01:25:03.901 [INFO][3894] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:25:03.997844 containerd[1475]: 2026-03-12 01:25:03.923 [WARNING][3894] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" HandleID="k8s-pod-network.a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Workload="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" Mar 12 01:25:03.997844 containerd[1475]: 2026-03-12 01:25:03.925 [INFO][3894] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" HandleID="k8s-pod-network.a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Workload="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" Mar 12 01:25:03.997844 containerd[1475]: 2026-03-12 01:25:03.934 [INFO][3894] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:25:03.997844 containerd[1475]: 2026-03-12 01:25:03.986 [INFO][3750] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Mar 12 01:25:04.001242 containerd[1475]: time="2026-03-12T01:25:03.999247427Z" level=info msg="TearDown network for sandbox \"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\" successfully" Mar 12 01:25:04.001242 containerd[1475]: time="2026-03-12T01:25:03.999393920Z" level=info msg="StopPodSandbox for \"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\" returns successfully" Mar 12 01:25:04.011372 containerd[1475]: time="2026-03-12T01:25:04.011201907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-wqn8p,Uid:8eb3dd39-a138-4acf-8d82-63348c5ba938,Namespace:calico-system,Attempt:1,}" Mar 12 01:25:04.105436 containerd[1475]: 2026-03-12 01:25:03.259 [INFO][3757] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Mar 12 01:25:04.105436 containerd[1475]: 2026-03-12 01:25:03.283 [INFO][3757] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" iface="eth0" netns="/var/run/netns/cni-6796b716-67f1-884a-8cfc-f6694a5b11f6" Mar 12 01:25:04.105436 containerd[1475]: 2026-03-12 01:25:03.290 [INFO][3757] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" iface="eth0" netns="/var/run/netns/cni-6796b716-67f1-884a-8cfc-f6694a5b11f6" Mar 12 01:25:04.105436 containerd[1475]: 2026-03-12 01:25:03.296 [INFO][3757] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" iface="eth0" netns="/var/run/netns/cni-6796b716-67f1-884a-8cfc-f6694a5b11f6" Mar 12 01:25:04.105436 containerd[1475]: 2026-03-12 01:25:03.296 [INFO][3757] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Mar 12 01:25:04.105436 containerd[1475]: 2026-03-12 01:25:03.296 [INFO][3757] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Mar 12 01:25:04.105436 containerd[1475]: 2026-03-12 01:25:03.876 [INFO][3870] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" HandleID="k8s-pod-network.916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:25:04.105436 containerd[1475]: 2026-03-12 01:25:03.880 [INFO][3870] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:04.105436 containerd[1475]: 2026-03-12 01:25:03.953 [INFO][3870] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:25:04.105436 containerd[1475]: 2026-03-12 01:25:04.000 [WARNING][3870] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" HandleID="k8s-pod-network.916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:25:04.105436 containerd[1475]: 2026-03-12 01:25:04.001 [INFO][3870] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" HandleID="k8s-pod-network.916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:25:04.105436 containerd[1475]: 2026-03-12 01:25:04.032 [INFO][3870] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:25:04.105436 containerd[1475]: 2026-03-12 01:25:04.054 [INFO][3757] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Mar 12 01:25:04.105436 containerd[1475]: time="2026-03-12T01:25:04.103016211Z" level=info msg="TearDown network for sandbox \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\" successfully" Mar 12 01:25:04.105436 containerd[1475]: time="2026-03-12T01:25:04.103059369Z" level=info msg="StopPodSandbox for \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\" returns successfully" Mar 12 01:25:04.114543 containerd[1475]: time="2026-03-12T01:25:04.114432534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b87b6b94d-6dx4w,Uid:dbe4f146-b6bc-4508-8482-6ee38f916cab,Namespace:calico-system,Attempt:1,}" Mar 12 01:25:04.177195 containerd[1475]: 2026-03-12 01:25:03.469 [INFO][3823] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Mar 12 01:25:04.177195 containerd[1475]: 2026-03-12 01:25:03.469 [INFO][3823] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" iface="eth0" netns="/var/run/netns/cni-4a4c224a-4992-c217-4823-365d49735ba2" Mar 12 01:25:04.177195 containerd[1475]: 2026-03-12 01:25:03.473 [INFO][3823] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" iface="eth0" netns="/var/run/netns/cni-4a4c224a-4992-c217-4823-365d49735ba2" Mar 12 01:25:04.177195 containerd[1475]: 2026-03-12 01:25:03.481 [INFO][3823] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" iface="eth0" netns="/var/run/netns/cni-4a4c224a-4992-c217-4823-365d49735ba2" Mar 12 01:25:04.177195 containerd[1475]: 2026-03-12 01:25:03.482 [INFO][3823] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Mar 12 01:25:04.177195 containerd[1475]: 2026-03-12 01:25:03.483 [INFO][3823] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Mar 12 01:25:04.177195 containerd[1475]: 2026-03-12 01:25:03.989 [INFO][3910] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" HandleID="k8s-pod-network.8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Workload="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" Mar 12 01:25:04.177195 containerd[1475]: 2026-03-12 01:25:03.990 [INFO][3910] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:04.177195 containerd[1475]: 2026-03-12 01:25:04.033 [INFO][3910] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:25:04.177195 containerd[1475]: 2026-03-12 01:25:04.124 [WARNING][3910] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" HandleID="k8s-pod-network.8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Workload="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" Mar 12 01:25:04.177195 containerd[1475]: 2026-03-12 01:25:04.124 [INFO][3910] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" HandleID="k8s-pod-network.8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Workload="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" Mar 12 01:25:04.177195 containerd[1475]: 2026-03-12 01:25:04.146 [INFO][3910] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:25:04.177195 containerd[1475]: 2026-03-12 01:25:04.156 [INFO][3823] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Mar 12 01:25:04.178749 containerd[1475]: time="2026-03-12T01:25:04.177548652Z" level=info msg="TearDown network for sandbox \"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\" successfully" Mar 12 01:25:04.178749 containerd[1475]: time="2026-03-12T01:25:04.177654292Z" level=info msg="StopPodSandbox for \"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\" returns successfully" Mar 12 01:25:04.186447 kubelet[2546]: E0312 01:25:04.184244 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:25:04.189725 containerd[1475]: time="2026-03-12T01:25:04.189638791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qkd8c,Uid:ce9a7bc7-7141-49c7-bada-00125a5134ff,Namespace:kube-system,Attempt:1,}" Mar 12 01:25:04.248794 containerd[1475]: 2026-03-12 01:25:03.392 [INFO][3802] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Mar 12 01:25:04.248794 containerd[1475]: 2026-03-12 01:25:03.392 [INFO][3802] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" iface="eth0" netns="/var/run/netns/cni-996e68ec-4c25-d453-2c3e-1ceb5b0dfc86" Mar 12 01:25:04.248794 containerd[1475]: 2026-03-12 01:25:03.398 [INFO][3802] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" iface="eth0" netns="/var/run/netns/cni-996e68ec-4c25-d453-2c3e-1ceb5b0dfc86" Mar 12 01:25:04.248794 containerd[1475]: 2026-03-12 01:25:03.407 [INFO][3802] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" iface="eth0" netns="/var/run/netns/cni-996e68ec-4c25-d453-2c3e-1ceb5b0dfc86" Mar 12 01:25:04.248794 containerd[1475]: 2026-03-12 01:25:03.407 [INFO][3802] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Mar 12 01:25:04.248794 containerd[1475]: 2026-03-12 01:25:03.407 [INFO][3802] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Mar 12 01:25:04.248794 containerd[1475]: 2026-03-12 01:25:04.008 [INFO][3899] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" HandleID="k8s-pod-network.5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" Mar 12 01:25:04.248794 containerd[1475]: 2026-03-12 01:25:04.011 [INFO][3899] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:04.248794 containerd[1475]: 2026-03-12 01:25:04.148 [INFO][3899] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:25:04.248794 containerd[1475]: 2026-03-12 01:25:04.165 [WARNING][3899] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" HandleID="k8s-pod-network.5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" Mar 12 01:25:04.248794 containerd[1475]: 2026-03-12 01:25:04.165 [INFO][3899] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" HandleID="k8s-pod-network.5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" Mar 12 01:25:04.248794 containerd[1475]: 2026-03-12 01:25:04.185 [INFO][3899] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:25:04.248794 containerd[1475]: 2026-03-12 01:25:04.227 [INFO][3802] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Mar 12 01:25:04.249802 containerd[1475]: time="2026-03-12T01:25:04.249757003Z" level=info msg="TearDown network for sandbox \"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\" successfully" Mar 12 01:25:04.250703 containerd[1475]: time="2026-03-12T01:25:04.250540044Z" level=info msg="StopPodSandbox for \"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\" returns successfully" Mar 12 01:25:04.262129 containerd[1475]: time="2026-03-12T01:25:04.261796563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8b6d44c-rfh26,Uid:84316a8e-7adb-4ccf-b643-b729c328a05c,Namespace:calico-system,Attempt:1,}" Mar 12 01:25:04.799676 systemd[1]: run-netns-cni\x2d1f867b96\x2da7e3\x2d6fcd\x2d2617\x2da890b9775292.mount: Deactivated successfully. Mar 12 01:25:04.802848 systemd[1]: run-netns-cni\x2d6796b716\x2d67f1\x2d884a\x2d8cfc\x2df6694a5b11f6.mount: Deactivated successfully. Mar 12 01:25:04.803006 systemd[1]: run-netns-cni\x2d996e68ec\x2d4c25\x2dd453\x2d2c3e\x2d1ceb5b0dfc86.mount: Deactivated successfully. Mar 12 01:25:04.803228 systemd[1]: run-netns-cni\x2d4a4c224a\x2d4992\x2dc217\x2d4823\x2d365d49735ba2.mount: Deactivated successfully. 
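The run-netns mount unit names above use standard systemd path escaping: "/" becomes "-" and a literal "-" becomes \x2d, so the first unit decodes to /run/netns/cni-1f867b96-a7e3-6fcd-2617-a890b9775292. A small Go example that reverses the escaping for these names (only the dash and \xNN rules are handled, which is enough here; systemd-escape does the general case):

    package main

    import (
        "fmt"
        "regexp"
        "strconv"
        "strings"
    )

    var hexEscape = regexp.MustCompile(`\\x([0-9a-fA-F]{2})`)

    // unescapeUnitPath reverses systemd's path escaping for .mount unit
    // names. Order matters: turn "-" back into "/" first, then decode
    // the \xNN escapes, so decoded dashes are not mistaken for slashes.
    func unescapeUnitPath(unit string) string {
        name := strings.TrimSuffix(unit, ".mount")
        path := strings.ReplaceAll(name, "-", "/")
        path = hexEscape.ReplaceAllStringFunc(path, func(m string) string {
            n, _ := strconv.ParseUint(m[2:], 16, 8)
            return string(rune(n))
        })
        return "/" + path
    }

    func main() {
        unit := `run-netns-cni\x2d1f867b96\x2da7e3\x2d6fcd\x2d2617\x2da890b9775292.mount`
        fmt.Println(unescapeUnitPath(unit))
        // Output: /run/netns/cni-1f867b96-a7e3-6fcd-2617-a890b9775292
    }
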
Mar 12 01:25:04.827152 systemd-networkd[1408]: calideb95b4acce: Link UP Mar 12 01:25:04.827988 systemd-networkd[1408]: calideb95b4acce: Gained carrier Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.198 [ERROR][3947] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.270 [INFO][3947] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0 calico-kube-controllers-84b6b7d8f4- calico-system a9774dfd-98ae-4f56-ac01-bd273c8754fb 989 0 2026-03-12 01:24:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84b6b7d8f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-84b6b7d8f4-gct62 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calideb95b4acce [] [] }} ContainerID="69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" Namespace="calico-system" Pod="calico-kube-controllers-84b6b7d8f4-gct62" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-" Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.270 [INFO][3947] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" Namespace="calico-system" Pod="calico-kube-controllers-84b6b7d8f4-gct62" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.514 [INFO][4021] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" HandleID="k8s-pod-network.69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" Workload="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.553 [INFO][4021] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" HandleID="k8s-pod-network.69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" Workload="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048af30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-84b6b7d8f4-gct62", "timestamp":"2026-03-12 01:25:04.514701037 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002102c0)} Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.553 [INFO][4021] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.553 [INFO][4021] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
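Each ADD above starts with a recoverable ERROR: /var/lib/calico/mtu is another file published by calico/node, but unlike the nodename file it is optional ("RequireMTUFile is false"), so the plugin logs the miss and proceeds with its configured MTU. A sketch of that optional-file read; the 1500 fallback is an assumption for illustration, not Calico's actual default resolution:

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    const mtuFile = "/var/lib/calico/mtu"

    // readMTU treats the MTU file the way the log shows: a missing file
    // is not fatal, so fall back to the configured value. The fallback
    // passed in below is a stand-in, not Calico's real default logic.
    func readMTU(fallback int) int {
        data, err := os.ReadFile(mtuFile)
        if err != nil {
            fmt.Fprintf(os.Stderr,
                "File does not exist, skipping the error since RequireMTUFile is false: %v\n", err)
            return fallback
        }
        mtu, err := strconv.Atoi(strings.TrimSpace(string(data)))
        if err != nil {
            return fallback
        }
        return mtu
    }

    func main() {
        fmt.Println("using MTU:", readMTU(1500))
    }
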
Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.553 [INFO][4021] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.569 [INFO][4021] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" host="localhost" Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.596 [INFO][4021] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.616 [INFO][4021] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.632 [INFO][4021] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.653 [INFO][4021] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.653 [INFO][4021] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" host="localhost" Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.667 [INFO][4021] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1 Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.680 [INFO][4021] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" host="localhost" Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.725 [INFO][4021] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" host="localhost" Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.726 [INFO][4021] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" host="localhost" Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.727 [INFO][4021] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
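The assignment itself is block-based: this node holds an affinity for the /26 block 192.168.88.128/26 (64 ordinals, .128 through .191), takes the host-wide IPAM lock, and hands out the next free ordinal. The first workload gets .129 and the next one (below) gets .130, which suggests ordinal 0 (.128) was already in use, commonly by the node's own tunnel address, though that is an assumption here and not shown in the log. A simplified bitmap model of the block:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block models a Calico-style affine IPAM block: a /26 gives 64
    // ordinals, tracked here as a simple allocation bitmap.
    type block struct {
        cidr  netip.Prefix
        inUse [64]bool
    }

    // assign claims the lowest free ordinal and returns its address.
    func (b *block) assign() (netip.Addr, bool) {
        for ord := 0; ord < 64; ord++ {
            if !b.inUse[ord] {
                b.inUse[ord] = true
                addr := b.cidr.Addr()
                for i := 0; i < ord; i++ {
                    addr = addr.Next()
                }
                return addr, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26")}
        b.inUse[0] = true // assumption: .128 already taken in this boot
        for i := 0; i < 2; i++ {
            addr, _ := b.assign()
            fmt.Println("assigned:", addr) // .129, then .130, matching the log
        }
    }
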
Mar 12 01:25:04.875568 containerd[1475]: 2026-03-12 01:25:04.727 [INFO][4021] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" HandleID="k8s-pod-network.69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" Workload="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" Mar 12 01:25:04.881703 containerd[1475]: 2026-03-12 01:25:04.742 [INFO][3947] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" Namespace="calico-system" Pod="calico-kube-controllers-84b6b7d8f4-gct62" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0", GenerateName:"calico-kube-controllers-84b6b7d8f4-", Namespace:"calico-system", SelfLink:"", UID:"a9774dfd-98ae-4f56-ac01-bd273c8754fb", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84b6b7d8f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-84b6b7d8f4-gct62", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calideb95b4acce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:04.881703 containerd[1475]: 2026-03-12 01:25:04.745 [INFO][3947] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" Namespace="calico-system" Pod="calico-kube-controllers-84b6b7d8f4-gct62" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" Mar 12 01:25:04.881703 containerd[1475]: 2026-03-12 01:25:04.747 [INFO][3947] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calideb95b4acce ContainerID="69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" Namespace="calico-system" Pod="calico-kube-controllers-84b6b7d8f4-gct62" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" Mar 12 01:25:04.881703 containerd[1475]: 2026-03-12 01:25:04.830 [INFO][3947] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" Namespace="calico-system" Pod="calico-kube-controllers-84b6b7d8f4-gct62" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" Mar 12 01:25:04.881703 containerd[1475]: 2026-03-12 01:25:04.831 [INFO][3947] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" Namespace="calico-system" Pod="calico-kube-controllers-84b6b7d8f4-gct62" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0", GenerateName:"calico-kube-controllers-84b6b7d8f4-", Namespace:"calico-system", SelfLink:"", UID:"a9774dfd-98ae-4f56-ac01-bd273c8754fb", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84b6b7d8f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1", Pod:"calico-kube-controllers-84b6b7d8f4-gct62", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calideb95b4acce", MAC:"ba:34:7f:7b:51:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:04.881703 containerd[1475]: 2026-03-12 01:25:04.870 [INFO][3947] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1" Namespace="calico-system" Pod="calico-kube-controllers-84b6b7d8f4-gct62" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" Mar 12 01:25:04.920425 systemd-networkd[1408]: cali4525debb3a0: Link UP Mar 12 01:25:04.923663 systemd-networkd[1408]: cali4525debb3a0: Gained carrier Mar 12 01:25:04.962608 containerd[1475]: time="2026-03-12T01:25:04.962391552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:25:04.963097 containerd[1475]: time="2026-03-12T01:25:04.962833978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:25:04.963097 containerd[1475]: time="2026-03-12T01:25:04.962907641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:04.963948 containerd[1475]: time="2026-03-12T01:25:04.963152852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.280 [ERROR][3958] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.384 [INFO][3958] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--xx7s9-eth0 coredns-66bc5c9577- kube-system 86acc654-d8de-47c3-aef7-67c1f1950085 990 0 2026-03-12 01:24:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-xx7s9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4525debb3a0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" Namespace="kube-system" Pod="coredns-66bc5c9577-xx7s9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xx7s9-" Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.387 [INFO][3958] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" Namespace="kube-system" Pod="coredns-66bc5c9577-xx7s9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.621 [INFO][4058] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" HandleID="k8s-pod-network.17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" Workload="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.645 [INFO][4058] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" HandleID="k8s-pod-network.17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" Workload="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bf1f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-xx7s9", "timestamp":"2026-03-12 01:25:04.621592051 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00028e580)} Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.647 [INFO][4058] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.727 [INFO][4058] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.727 [INFO][4058] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.739 [INFO][4058] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" host="localhost" Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.761 [INFO][4058] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.799 [INFO][4058] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.820 [INFO][4058] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.831 [INFO][4058] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.832 [INFO][4058] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" host="localhost" Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.849 [INFO][4058] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0 Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.869 [INFO][4058] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" host="localhost" Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.898 [INFO][4058] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" host="localhost" Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.898 [INFO][4058] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" host="localhost" Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.902 [INFO][4058] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
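The two dns.go "Nameserver limits exceeded" warnings earlier in this sequence come from kubelet assembling pod resolv.conf for these coredns pods: the classic glibc resolver honors at most three nameserver entries, so kubelet keeps the first three (1.1.1.1 1.0.0.1 8.8.8.8) and drops the rest. A sketch of that trim; the fourth server in the sample input is invented purely to trigger the warning:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // maxNameservers mirrors the classic glibc MAXNS limit that the
    // kubelet "Nameserver limits exceeded" warnings refer to.
    const maxNameservers = 3

    // applyNameserverLimit keeps the first three nameserver lines from
    // a resolv.conf, logging the applied line the way kubelet does.
    func applyNameserverLimit(resolvConf string) []string {
        var servers []string
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, "+
                "the applied nameserver line is: %s\n",
                strings.Join(servers[:maxNameservers], " "))
            servers = servers[:maxNameservers]
        }
        return servers
    }

    func main() {
        conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
        fmt.Println(applyNameserverLimit(conf))
    }
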
Mar 12 01:25:05.014493 containerd[1475]: 2026-03-12 01:25:04.903 [INFO][4058] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" HandleID="k8s-pod-network.17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" Workload="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" Mar 12 01:25:05.018605 containerd[1475]: 2026-03-12 01:25:04.912 [INFO][3958] cni-plugin/k8s.go 418: Populated endpoint ContainerID="17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" Namespace="kube-system" Pod="coredns-66bc5c9577-xx7s9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xx7s9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"86acc654-d8de-47c3-aef7-67c1f1950085", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-xx7s9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4525debb3a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:05.018605 containerd[1475]: 2026-03-12 01:25:04.913 [INFO][3958] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" Namespace="kube-system" Pod="coredns-66bc5c9577-xx7s9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" Mar 12 01:25:05.018605 containerd[1475]: 2026-03-12 01:25:04.913 [INFO][3958] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4525debb3a0 ContainerID="17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" Namespace="kube-system" Pod="coredns-66bc5c9577-xx7s9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" Mar 12 01:25:05.018605 containerd[1475]: 2026-03-12 01:25:04.926 
[INFO][3958] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" Namespace="kube-system" Pod="coredns-66bc5c9577-xx7s9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" Mar 12 01:25:05.018605 containerd[1475]: 2026-03-12 01:25:04.929 [INFO][3958] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" Namespace="kube-system" Pod="coredns-66bc5c9577-xx7s9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xx7s9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"86acc654-d8de-47c3-aef7-67c1f1950085", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0", Pod:"coredns-66bc5c9577-xx7s9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4525debb3a0", MAC:"aa:b9:6c:2c:4b:cd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:05.018605 containerd[1475]: 2026-03-12 01:25:05.005 [INFO][3958] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0" Namespace="kube-system" Pod="coredns-66bc5c9577-xx7s9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" Mar 12 01:25:05.059568 systemd[1]: Started cri-containerd-69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1.scope - libcontainer container 69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1. 
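[Annotation] The host-side interface names in these records (cali4525debb3a0 here, cali3e726c343b6 and calib45a42650f8 below) follow Calico's "cali" + 11-hex-characters pattern. A sketch of that derivation, assuming the documented truncated-SHA-1 scheme; exactly which endpoint identifier string Calico hashes is an assumption here:

// Sketch of the "cali" + 11 hex chars naming pattern seen in this log.
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName derives a stable host-side interface name from an endpoint ID.
func vethName(endpointID string) string {
	sum := sha1.Sum([]byte(endpointID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	// Hypothetical endpoint identifier for illustration only.
	fmt.Println(vethName("kube-system.coredns-66bc5c9577-xx7s9"))
}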
Mar 12 01:25:05.103501 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:25:05.115406 containerd[1475]: time="2026-03-12T01:25:05.115130007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:25:05.118932 containerd[1475]: time="2026-03-12T01:25:05.115382141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:25:05.118932 containerd[1475]: time="2026-03-12T01:25:05.115420611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:05.118932 containerd[1475]: time="2026-03-12T01:25:05.115820000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:05.144445 systemd-networkd[1408]: cali3e726c343b6: Link UP Mar 12 01:25:05.154228 systemd-networkd[1408]: cali3e726c343b6: Gained carrier Mar 12 01:25:05.198887 systemd[1]: Started cri-containerd-17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0.scope - libcontainer container 17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0. Mar 12 01:25:05.248799 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:04.325 [ERROR][3998] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:04.463 [INFO][3998] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0 whisker-7b87b6b94d- calico-system dbe4f146-b6bc-4508-8482-6ee38f916cab 992 0 2026-03-12 01:24:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b87b6b94d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7b87b6b94d-6dx4w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3e726c343b6 [] [] }} ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Namespace="calico-system" Pod="whisker-7b87b6b94d-6dx4w" WorkloadEndpoint="localhost-k8s-whisker--7b87b6b94d--6dx4w-" Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:04.466 [INFO][3998] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Namespace="calico-system" Pod="whisker-7b87b6b94d-6dx4w" WorkloadEndpoint="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:04.627 [INFO][4068] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" HandleID="k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:04.666 [INFO][4068] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" 
HandleID="k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a2cb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7b87b6b94d-6dx4w", "timestamp":"2026-03-12 01:25:04.627944277 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000369ce0)} Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:04.666 [INFO][4068] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:04.898 [INFO][4068] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:04.898 [INFO][4068] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:04.925 [INFO][4068] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" host="localhost" Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:04.956 [INFO][4068] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:05.003 [INFO][4068] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:05.021 [INFO][4068] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:05.030 [INFO][4068] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:05.030 [INFO][4068] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" host="localhost" Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:05.039 [INFO][4068] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680 Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:05.066 [INFO][4068] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" host="localhost" Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:05.097 [INFO][4068] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" host="localhost" Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:05.098 [INFO][4068] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" host="localhost" Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:05.098 [INFO][4068] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
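[Annotation] Every request in this excerpt confirms affinity to the same 192.168.88.128/26 block before assigning, so all of the pod addresses land in one 64-address range on this host. A quick stdlib check of that arithmetic:

// Verify the claimed addresses sit inside the affine /26 block.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, block, err := net.ParseCIDR("192.168.88.128/26")
	if err != nil {
		panic(err)
	}
	for _, a := range []string{"192.168.88.130", "192.168.88.131", "192.168.88.135"} {
		fmt.Printf("%s in %s: %v\n", a, block, block.Contains(net.ParseIP(a)))
	}
	ones, bits := block.Mask.Size()
	fmt.Printf("block size: %d addresses\n", 1<<(bits-ones)) // 64 for a /26
}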
Mar 12 01:25:05.268689 containerd[1475]: 2026-03-12 01:25:05.099 [INFO][4068] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" HandleID="k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:25:05.270137 containerd[1475]: 2026-03-12 01:25:05.119 [INFO][3998] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Namespace="calico-system" Pod="whisker-7b87b6b94d-6dx4w" WorkloadEndpoint="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0", GenerateName:"whisker-7b87b6b94d-", Namespace:"calico-system", SelfLink:"", UID:"dbe4f146-b6bc-4508-8482-6ee38f916cab", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b87b6b94d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7b87b6b94d-6dx4w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3e726c343b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:05.270137 containerd[1475]: 2026-03-12 01:25:05.119 [INFO][3998] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Namespace="calico-system" Pod="whisker-7b87b6b94d-6dx4w" WorkloadEndpoint="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:25:05.270137 containerd[1475]: 2026-03-12 01:25:05.120 [INFO][3998] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e726c343b6 ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Namespace="calico-system" Pod="whisker-7b87b6b94d-6dx4w" WorkloadEndpoint="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:25:05.270137 containerd[1475]: 2026-03-12 01:25:05.157 [INFO][3998] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Namespace="calico-system" Pod="whisker-7b87b6b94d-6dx4w" WorkloadEndpoint="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:25:05.270137 containerd[1475]: 2026-03-12 01:25:05.162 [INFO][3998] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Namespace="calico-system" Pod="whisker-7b87b6b94d-6dx4w" WorkloadEndpoint="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0", GenerateName:"whisker-7b87b6b94d-", Namespace:"calico-system", SelfLink:"", UID:"dbe4f146-b6bc-4508-8482-6ee38f916cab", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b87b6b94d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680", Pod:"whisker-7b87b6b94d-6dx4w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3e726c343b6", MAC:"7a:b3:02:06:09:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:05.270137 containerd[1475]: 2026-03-12 01:25:05.229 [INFO][3998] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Namespace="calico-system" Pod="whisker-7b87b6b94d-6dx4w" WorkloadEndpoint="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:25:05.315723 containerd[1475]: time="2026-03-12T01:25:05.312914648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84b6b7d8f4-gct62,Uid:a9774dfd-98ae-4f56-ac01-bd273c8754fb,Namespace:calico-system,Attempt:1,} returns sandbox id \"69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1\"" Mar 12 01:25:05.331355 containerd[1475]: time="2026-03-12T01:25:05.329505404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 12 01:25:05.447876 systemd-networkd[1408]: calib45a42650f8: Link UP Mar 12 01:25:05.459869 systemd-networkd[1408]: calib45a42650f8: Gained carrier Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:04.210 [ERROR][3969] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:04.266 [INFO][3969] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0 goldmane-cccfbd5cf- calico-system 8eb3dd39-a138-4acf-8d82-63348c5ba938 994 0 2026-03-12 01:24:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-wqn8p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib45a42650f8 [] [] }} ContainerID="96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" Namespace="calico-system" Pod="goldmane-cccfbd5cf-wqn8p" 
WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--wqn8p-" Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:04.266 [INFO][3969] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" Namespace="calico-system" Pod="goldmane-cccfbd5cf-wqn8p" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:04.646 [INFO][4019] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" HandleID="k8s-pod-network.96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" Workload="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:04.682 [INFO][4019] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" HandleID="k8s-pod-network.96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" Workload="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135e90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-wqn8p", "timestamp":"2026-03-12 01:25:04.64660276 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000238000)} Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:04.682 [INFO][4019] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:05.099 [INFO][4019] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:05.101 [INFO][4019] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:05.110 [INFO][4019] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" host="localhost" Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:05.143 [INFO][4019] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:05.164 [INFO][4019] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:05.197 [INFO][4019] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:05.232 [INFO][4019] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:05.232 [INFO][4019] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" host="localhost" Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:05.255 [INFO][4019] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:05.275 [INFO][4019] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" host="localhost" Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:05.303 [INFO][4019] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" host="localhost" Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:05.311 [INFO][4019] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" host="localhost" Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:05.318 [INFO][4019] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
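[Annotation] Note the queueing these timestamps reveal: request [4019] logged "About to acquire host-wide IPAM lock" at 01:25:04.682 but "Acquired" only at 01:25:05.099, the moment [4068] released it. Concurrent CNI ADDs serialize on this one lock. The sketch below illustrates host-wide serialization with flock(2) on Linux; this is an illustration of the pattern, not necessarily Calico's own locking mechanism:

// Serialize critical sections across processes on one host via flock(2).
package main

import (
	"fmt"
	"os"
	"syscall"
)

func withHostLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	// Blocks until every other holder has released the lock.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	return fn()
}

func main() {
	// Hypothetical lock path for illustration.
	err := withHostLock("/tmp/ipam.lock", func() error {
		fmt.Println("assigning from block while holding the lock")
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}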
Mar 12 01:25:05.560142 containerd[1475]: 2026-03-12 01:25:05.319 [INFO][4019] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" HandleID="k8s-pod-network.96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" Workload="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" Mar 12 01:25:05.563334 containerd[1475]: 2026-03-12 01:25:05.402 [INFO][3969] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" Namespace="calico-system" Pod="goldmane-cccfbd5cf-wqn8p" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8eb3dd39-a138-4acf-8d82-63348c5ba938", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-wqn8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib45a42650f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:05.563334 containerd[1475]: 2026-03-12 01:25:05.403 [INFO][3969] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" Namespace="calico-system" Pod="goldmane-cccfbd5cf-wqn8p" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" Mar 12 01:25:05.563334 containerd[1475]: 2026-03-12 01:25:05.411 [INFO][3969] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib45a42650f8 ContainerID="96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" Namespace="calico-system" Pod="goldmane-cccfbd5cf-wqn8p" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" Mar 12 01:25:05.563334 containerd[1475]: 2026-03-12 01:25:05.485 [INFO][3969] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" Namespace="calico-system" Pod="goldmane-cccfbd5cf-wqn8p" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" Mar 12 01:25:05.563334 containerd[1475]: 2026-03-12 01:25:05.486 [INFO][3969] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" Namespace="calico-system" Pod="goldmane-cccfbd5cf-wqn8p" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8eb3dd39-a138-4acf-8d82-63348c5ba938", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c", Pod:"goldmane-cccfbd5cf-wqn8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib45a42650f8", MAC:"f2:c1:49:ae:d7:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:05.563334 containerd[1475]: 2026-03-12 01:25:05.507 [INFO][3969] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c" Namespace="calico-system" Pod="goldmane-cccfbd5cf-wqn8p" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" Mar 12 01:25:05.615567 containerd[1475]: time="2026-03-12T01:25:05.611942975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xx7s9,Uid:86acc654-d8de-47c3-aef7-67c1f1950085,Namespace:kube-system,Attempt:1,} returns sandbox id \"17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0\"" Mar 12 01:25:05.622670 kubelet[2546]: E0312 01:25:05.621234 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:25:05.641730 systemd-networkd[1408]: calib0c15b02321: Link UP Mar 12 01:25:05.676978 containerd[1475]: time="2026-03-12T01:25:05.675510458Z" level=info msg="CreateContainer within sandbox \"17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 01:25:05.681062 systemd-networkd[1408]: calib0c15b02321: Gained carrier Mar 12 01:25:05.698491 containerd[1475]: time="2026-03-12T01:25:05.695079398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:25:05.698491 containerd[1475]: time="2026-03-12T01:25:05.696202902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:25:05.698491 containerd[1475]: time="2026-03-12T01:25:05.696237704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:05.703579 containerd[1475]: time="2026-03-12T01:25:05.699783593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:04.143 [ERROR][3936] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:04.202 [INFO][3936] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0 calico-apiserver-6cc8b6d44c- calico-system 43f18370-3b65-47ef-902a-559a0936b656 991 0 2026-03-12 01:24:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cc8b6d44c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cc8b6d44c-pwhfw eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib0c15b02321 [] [] }} ContainerID="789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" Namespace="calico-system" Pod="calico-apiserver-6cc8b6d44c-pwhfw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-" Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:04.202 [INFO][3936] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" Namespace="calico-system" Pod="calico-apiserver-6cc8b6d44c-pwhfw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:04.654 [INFO][4010] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" HandleID="k8s-pod-network.789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:04.699 [INFO][4010] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" HandleID="k8s-pod-network.789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ed50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6cc8b6d44c-pwhfw", "timestamp":"2026-03-12 01:25:04.654897235 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00038cf20)} Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:04.699 [INFO][4010] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:05.320 [INFO][4010] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
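[Annotation] The kubelet "Nameserver limits exceeded" warning above fires because glibc's resolver honours at most three nameserver entries (MAXNS); kubelet applies the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and omits the rest. A stdlib check for the same condition:

// Count nameserver entries in resolv.conf and flag the glibc limit.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > 3 {
		fmt.Printf("%d nameservers configured; only the first 3 apply: %v\n",
			len(servers), servers[:3])
	}
}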
Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:05.320 [INFO][4010] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:05.335 [INFO][4010] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" host="localhost" Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:05.388 [INFO][4010] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:05.421 [INFO][4010] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:05.468 [INFO][4010] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:05.495 [INFO][4010] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:05.495 [INFO][4010] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" host="localhost" Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:05.511 [INFO][4010] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:05.560 [INFO][4010] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" host="localhost" Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:05.579 [INFO][4010] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" host="localhost" Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:05.579 [INFO][4010] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" host="localhost" Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:05.579 [INFO][4010] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
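[Annotation] The repeated [ERROR] entries from cni-plugin/utils.go above are benign: the plugin looks for an MTU override in /var/lib/calico/mtu and, because RequireMTUFile is false, continues without it when the file is missing. A sketch of that read-with-fallback behaviour; the default of 1500 is an assumption for illustration, not Calico's configured value:

// Read an MTU override file, falling back to a default when absent.
package main

import (
	"errors"
	"fmt"
	"os"
	"strconv"
	"strings"
)

func mtuFromFile(path string, def int) (int, error) {
	b, err := os.ReadFile(path)
	if errors.Is(err, os.ErrNotExist) {
		return def, nil // missing file is not an error unless required
	}
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(b)))
}

func main() {
	mtu, err := mtuFromFile("/var/lib/calico/mtu", 1500)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("mtu:", mtu)
}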
Mar 12 01:25:05.744642 containerd[1475]: 2026-03-12 01:25:05.579 [INFO][4010] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" HandleID="k8s-pod-network.789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" Mar 12 01:25:05.745695 containerd[1475]: 2026-03-12 01:25:05.589 [INFO][3936] cni-plugin/k8s.go 418: Populated endpoint ContainerID="789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" Namespace="calico-system" Pod="calico-apiserver-6cc8b6d44c-pwhfw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0", GenerateName:"calico-apiserver-6cc8b6d44c-", Namespace:"calico-system", SelfLink:"", UID:"43f18370-3b65-47ef-902a-559a0936b656", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8b6d44c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cc8b6d44c-pwhfw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib0c15b02321", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:05.745695 containerd[1475]: 2026-03-12 01:25:05.589 [INFO][3936] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" Namespace="calico-system" Pod="calico-apiserver-6cc8b6d44c-pwhfw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" Mar 12 01:25:05.745695 containerd[1475]: 2026-03-12 01:25:05.589 [INFO][3936] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0c15b02321 ContainerID="789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" Namespace="calico-system" Pod="calico-apiserver-6cc8b6d44c-pwhfw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" Mar 12 01:25:05.745695 containerd[1475]: 2026-03-12 01:25:05.680 [INFO][3936] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" Namespace="calico-system" Pod="calico-apiserver-6cc8b6d44c-pwhfw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" Mar 12 01:25:05.745695 containerd[1475]: 2026-03-12 01:25:05.691 [INFO][3936] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" Namespace="calico-system" Pod="calico-apiserver-6cc8b6d44c-pwhfw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0", GenerateName:"calico-apiserver-6cc8b6d44c-", Namespace:"calico-system", SelfLink:"", UID:"43f18370-3b65-47ef-902a-559a0936b656", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8b6d44c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c", Pod:"calico-apiserver-6cc8b6d44c-pwhfw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib0c15b02321", MAC:"72:b3:f0:ac:70:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:05.745695 containerd[1475]: 2026-03-12 01:25:05.735 [INFO][3936] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c" Namespace="calico-system" Pod="calico-apiserver-6cc8b6d44c-pwhfw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" Mar 12 01:25:05.803964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3100756103.mount: Deactivated successfully. Mar 12 01:25:05.820537 containerd[1475]: time="2026-03-12T01:25:05.820209270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:25:05.820818 containerd[1475]: time="2026-03-12T01:25:05.820732463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:25:05.821396 containerd[1475]: time="2026-03-12T01:25:05.820800667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:05.821523 containerd[1475]: time="2026-03-12T01:25:05.821370262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:05.839879 systemd[1]: Started cri-containerd-90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680.scope - libcontainer container 90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680. 
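[Annotation] Each allocation in this log is keyed by a HandleID of the form "k8s-pod-network." plus the container ID (visible in every ipam_plugin.go record above), which is what ties the claimed addresses to this sandbox so a later CNI DEL can release exactly them. Trivial, but it makes the coupling explicit:

// Reconstruct the IPAM handle ID used in the records above.
package main

import "fmt"

func handleID(containerID string) string {
	return "k8s-pod-network." + containerID
}

func main() {
	fmt.Println(handleID("789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c"))
}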
Mar 12 01:25:05.893859 containerd[1475]: time="2026-03-12T01:25:05.893712557Z" level=info msg="CreateContainer within sandbox \"17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cd94484ad1052016bbf24f41cd27aa412c79617c2d1418065c6c2b068a138545\"" Mar 12 01:25:05.928544 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:25:05.931380 containerd[1475]: time="2026-03-12T01:25:05.931037256Z" level=info msg="StartContainer for \"cd94484ad1052016bbf24f41cd27aa412c79617c2d1418065c6c2b068a138545\"" Mar 12 01:25:05.932435 containerd[1475]: time="2026-03-12T01:25:05.928414939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:25:05.932435 containerd[1475]: time="2026-03-12T01:25:05.930441880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:25:05.932435 containerd[1475]: time="2026-03-12T01:25:05.930467798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:05.932435 containerd[1475]: time="2026-03-12T01:25:05.930655736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:05.952564 systemd-networkd[1408]: cali0d461b4b25c: Link UP Mar 12 01:25:05.956515 systemd-networkd[1408]: cali0d461b4b25c: Gained carrier Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:04.479 [ERROR][4026] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:04.567 [INFO][4026] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--qkd8c-eth0 coredns-66bc5c9577- kube-system ce9a7bc7-7141-49c7-bada-00125a5134ff 997 0 2026-03-12 01:24:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-qkd8c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0d461b4b25c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" Namespace="kube-system" Pod="coredns-66bc5c9577-qkd8c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qkd8c-" Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:04.578 [INFO][4026] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" Namespace="kube-system" Pod="coredns-66bc5c9577-qkd8c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:04.770 [INFO][4084] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" HandleID="k8s-pod-network.2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" 
Workload="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:04.816 [INFO][4084] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" HandleID="k8s-pod-network.2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" Workload="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00059e0d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-qkd8c", "timestamp":"2026-03-12 01:25:04.77074291 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005e6000)} Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:04.817 [INFO][4084] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:05.581 [INFO][4084] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:05.583 [INFO][4084] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:05.602 [INFO][4084] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" host="localhost" Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:05.653 [INFO][4084] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:05.701 [INFO][4084] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:05.740 [INFO][4084] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:05.753 [INFO][4084] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:05.753 [INFO][4084] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" host="localhost" Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:05.760 [INFO][4084] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:05.787 [INFO][4084] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" host="localhost" Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:05.813 [INFO][4084] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" host="localhost" Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:05.820 [INFO][4084] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" host="localhost" Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:05.820 
[INFO][4084] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:25:06.025617 containerd[1475]: 2026-03-12 01:25:05.820 [INFO][4084] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" HandleID="k8s-pod-network.2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" Workload="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" Mar 12 01:25:06.027884 containerd[1475]: 2026-03-12 01:25:05.906 [INFO][4026] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" Namespace="kube-system" Pod="coredns-66bc5c9577-qkd8c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--qkd8c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ce9a7bc7-7141-49c7-bada-00125a5134ff", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-qkd8c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d461b4b25c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:06.027884 containerd[1475]: 2026-03-12 01:25:05.906 [INFO][4026] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" Namespace="kube-system" Pod="coredns-66bc5c9577-qkd8c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" Mar 12 01:25:06.027884 containerd[1475]: 2026-03-12 01:25:05.906 [INFO][4026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d461b4b25c ContainerID="2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" Namespace="kube-system" Pod="coredns-66bc5c9577-qkd8c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" 
Mar 12 01:25:06.027884 containerd[1475]: 2026-03-12 01:25:05.960 [INFO][4026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" Namespace="kube-system" Pod="coredns-66bc5c9577-qkd8c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" Mar 12 01:25:06.027884 containerd[1475]: 2026-03-12 01:25:05.964 [INFO][4026] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" Namespace="kube-system" Pod="coredns-66bc5c9577-qkd8c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--qkd8c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ce9a7bc7-7141-49c7-bada-00125a5134ff", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a", Pod:"coredns-66bc5c9577-qkd8c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d461b4b25c", MAC:"7a:a9:46:4d:30:ac", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:06.027884 containerd[1475]: 2026-03-12 01:25:06.011 [INFO][4026] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a" Namespace="kube-system" Pod="coredns-66bc5c9577-qkd8c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" Mar 12 01:25:06.131396 systemd[1]: Started cri-containerd-96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c.scope - libcontainer container 96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c. 
Mar 12 01:25:06.156104 systemd-networkd[1408]: cali601e02c4d1b: Link UP Mar 12 01:25:06.159640 systemd-networkd[1408]: cali601e02c4d1b: Gained carrier Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:04.574 [ERROR][4044] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:04.662 [INFO][4044] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0 calico-apiserver-6cc8b6d44c- calico-system 84316a8e-7adb-4ccf-b643-b729c328a05c 995 0 2026-03-12 01:24:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cc8b6d44c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cc8b6d44c-rfh26 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali601e02c4d1b [] [] }} ContainerID="8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" Namespace="calico-system" Pod="calico-apiserver-6cc8b6d44c-rfh26" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-" Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:04.662 [INFO][4044] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" Namespace="calico-system" Pod="calico-apiserver-6cc8b6d44c-rfh26" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:04.792 [INFO][4093] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" HandleID="k8s-pod-network.8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:04.823 [INFO][4093] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" HandleID="k8s-pod-network.8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004840b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6cc8b6d44c-rfh26", "timestamp":"2026-03-12 01:25:04.792607404 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002c8580)} Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:04.823 [INFO][4093] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:05.834 [INFO][4093] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:05.835 [INFO][4093] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:05.848 [INFO][4093] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" host="localhost" Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:05.883 [INFO][4093] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:05.981 [INFO][4093] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:05.989 [INFO][4093] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:06.000 [INFO][4093] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:06.002 [INFO][4093] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" host="localhost" Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:06.030 [INFO][4093] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671 Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:06.068 [INFO][4093] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" host="localhost" Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:06.110 [INFO][4093] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" host="localhost" Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:06.112 [INFO][4093] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" host="localhost" Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:06.112 [INFO][4093] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:25:06.255029 containerd[1475]: 2026-03-12 01:25:06.112 [INFO][4093] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" HandleID="k8s-pod-network.8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" Mar 12 01:25:06.257134 containerd[1475]: 2026-03-12 01:25:06.135 [INFO][4044] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" Namespace="calico-system" Pod="calico-apiserver-6cc8b6d44c-rfh26" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0", GenerateName:"calico-apiserver-6cc8b6d44c-", Namespace:"calico-system", SelfLink:"", UID:"84316a8e-7adb-4ccf-b643-b729c328a05c", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8b6d44c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cc8b6d44c-rfh26", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali601e02c4d1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:06.257134 containerd[1475]: 2026-03-12 01:25:06.135 [INFO][4044] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" Namespace="calico-system" Pod="calico-apiserver-6cc8b6d44c-rfh26" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" Mar 12 01:25:06.257134 containerd[1475]: 2026-03-12 01:25:06.138 [INFO][4044] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali601e02c4d1b ContainerID="8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" Namespace="calico-system" Pod="calico-apiserver-6cc8b6d44c-rfh26" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" Mar 12 01:25:06.257134 containerd[1475]: 2026-03-12 01:25:06.161 [INFO][4044] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" Namespace="calico-system" Pod="calico-apiserver-6cc8b6d44c-rfh26" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" Mar 12 01:25:06.257134 containerd[1475]: 2026-03-12 01:25:06.163 [INFO][4044] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" Namespace="calico-system" Pod="calico-apiserver-6cc8b6d44c-rfh26" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0", GenerateName:"calico-apiserver-6cc8b6d44c-", Namespace:"calico-system", SelfLink:"", UID:"84316a8e-7adb-4ccf-b643-b729c328a05c", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8b6d44c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671", Pod:"calico-apiserver-6cc8b6d44c-rfh26", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali601e02c4d1b", MAC:"6e:87:c7:aa:ad:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:06.257134 containerd[1475]: 2026-03-12 01:25:06.209 [INFO][4044] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671" Namespace="calico-system" Pod="calico-apiserver-6cc8b6d44c-rfh26" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" Mar 12 01:25:06.263654 containerd[1475]: time="2026-03-12T01:25:06.259890442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b87b6b94d-6dx4w,Uid:dbe4f146-b6bc-4508-8482-6ee38f916cab,Namespace:calico-system,Attempt:1,} returns sandbox id \"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680\"" Mar 12 01:25:06.277093 systemd[1]: Started cri-containerd-789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c.scope - libcontainer container 789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c. Mar 12 01:25:06.296590 systemd[1]: Started cri-containerd-cd94484ad1052016bbf24f41cd27aa412c79617c2d1418065c6c2b068a138545.scope - libcontainer container cd94484ad1052016bbf24f41cd27aa412c79617c2d1418065c6c2b068a138545. Mar 12 01:25:06.315552 containerd[1475]: time="2026-03-12T01:25:06.313722813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:25:06.315552 containerd[1475]: time="2026-03-12T01:25:06.313810220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:25:06.315552 containerd[1475]: time="2026-03-12T01:25:06.313831810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:06.322424 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:25:06.334078 containerd[1475]: time="2026-03-12T01:25:06.321108999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:06.438114 systemd[1]: Started cri-containerd-2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a.scope - libcontainer container 2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a. Mar 12 01:25:06.531356 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:25:06.536346 containerd[1475]: time="2026-03-12T01:25:06.534606611Z" level=info msg="StartContainer for \"cd94484ad1052016bbf24f41cd27aa412c79617c2d1418065c6c2b068a138545\" returns successfully" Mar 12 01:25:06.610797 systemd-networkd[1408]: cali4525debb3a0: Gained IPv6LL Mar 12 01:25:06.639693 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:25:06.657391 containerd[1475]: time="2026-03-12T01:25:06.656397491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:25:06.657391 containerd[1475]: time="2026-03-12T01:25:06.656481903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:25:06.657391 containerd[1475]: time="2026-03-12T01:25:06.656505276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:06.657391 containerd[1475]: time="2026-03-12T01:25:06.656642202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:06.678583 systemd-networkd[1408]: calideb95b4acce: Gained IPv6LL Mar 12 01:25:06.736982 systemd-networkd[1408]: cali3e726c343b6: Gained IPv6LL Mar 12 01:25:06.819783 containerd[1475]: time="2026-03-12T01:25:06.819598015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qkd8c,Uid:ce9a7bc7-7141-49c7-bada-00125a5134ff,Namespace:kube-system,Attempt:1,} returns sandbox id \"2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a\"" Mar 12 01:25:06.826164 kubelet[2546]: E0312 01:25:06.825806 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:25:06.892772 systemd[1]: Started cri-containerd-8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671.scope - libcontainer container 8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671. 
Mar 12 01:25:06.899405 containerd[1475]: time="2026-03-12T01:25:06.898718250Z" level=info msg="CreateContainer within sandbox \"2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 01:25:06.936358 containerd[1475]: time="2026-03-12T01:25:06.935858400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-wqn8p,Uid:8eb3dd39-a138-4acf-8d82-63348c5ba938,Namespace:calico-system,Attempt:1,} returns sandbox id \"96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c\"" Mar 12 01:25:06.945688 containerd[1475]: time="2026-03-12T01:25:06.945227763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8b6d44c-pwhfw,Uid:43f18370-3b65-47ef-902a-559a0936b656,Namespace:calico-system,Attempt:1,} returns sandbox id \"789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c\"" Mar 12 01:25:07.027573 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:25:07.055227 systemd-networkd[1408]: calib45a42650f8: Gained IPv6LL Mar 12 01:25:07.081706 kubelet[2546]: E0312 01:25:07.081604 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:25:07.085555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount423022194.mount: Deactivated successfully. Mar 12 01:25:07.098104 containerd[1475]: time="2026-03-12T01:25:07.097981071Z" level=info msg="CreateContainer within sandbox \"2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"85dd536ca0e49c34107ee87735ce97b8026c6a62931bb8ea7ad20bf970a97d08\"" Mar 12 01:25:07.102331 containerd[1475]: time="2026-03-12T01:25:07.101567968Z" level=info msg="StartContainer for \"85dd536ca0e49c34107ee87735ce97b8026c6a62931bb8ea7ad20bf970a97d08\"" Mar 12 01:25:07.189442 kernel: calico-node[4299]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 12 01:25:07.209345 containerd[1475]: time="2026-03-12T01:25:07.208445424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8b6d44c-rfh26,Uid:84316a8e-7adb-4ccf-b643-b729c328a05c,Namespace:calico-system,Attempt:1,} returns sandbox id \"8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671\"" Mar 12 01:25:07.245619 systemd-networkd[1408]: cali601e02c4d1b: Gained IPv6LL Mar 12 01:25:07.288013 systemd[1]: Started cri-containerd-85dd536ca0e49c34107ee87735ce97b8026c6a62931bb8ea7ad20bf970a97d08.scope - libcontainer container 85dd536ca0e49c34107ee87735ce97b8026c6a62931bb8ea7ad20bf970a97d08. 
Mar 12 01:25:07.423863 containerd[1475]: time="2026-03-12T01:25:07.423061492Z" level=info msg="StartContainer for \"85dd536ca0e49c34107ee87735ce97b8026c6a62931bb8ea7ad20bf970a97d08\" returns successfully" Mar 12 01:25:07.436707 systemd-networkd[1408]: calib0c15b02321: Gained IPv6LL Mar 12 01:25:07.949879 systemd-networkd[1408]: cali0d461b4b25c: Gained IPv6LL Mar 12 01:25:08.178361 kubelet[2546]: E0312 01:25:08.178194 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:25:08.179026 kubelet[2546]: E0312 01:25:08.179007 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:25:08.210685 kubelet[2546]: I0312 01:25:08.210459 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xx7s9" podStartSLOduration=59.210152641 podStartE2EDuration="59.210152641s" podCreationTimestamp="2026-03-12 01:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:25:07.14605089 +0000 UTC m=+64.760413798" watchObservedRunningTime="2026-03-12 01:25:08.210152641 +0000 UTC m=+65.824515520" Mar 12 01:25:08.216401 kubelet[2546]: I0312 01:25:08.215572 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qkd8c" podStartSLOduration=59.215466085 podStartE2EDuration="59.215466085s" podCreationTimestamp="2026-03-12 01:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:25:08.202883498 +0000 UTC m=+65.817246387" watchObservedRunningTime="2026-03-12 01:25:08.215466085 +0000 UTC m=+65.829828994" Mar 12 01:25:08.504066 systemd-networkd[1408]: vxlan.calico: Link UP Mar 12 01:25:08.505198 systemd-networkd[1408]: vxlan.calico: Gained carrier Mar 12 01:25:09.194510 kubelet[2546]: E0312 01:25:09.194091 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:25:09.198909 kubelet[2546]: E0312 01:25:09.198508 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:25:09.608367 containerd[1475]: time="2026-03-12T01:25:09.607927574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:09.610665 containerd[1475]: time="2026-03-12T01:25:09.610420698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 12 01:25:09.612728 containerd[1475]: time="2026-03-12T01:25:09.612562931Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:09.635114 containerd[1475]: time="2026-03-12T01:25:09.634887011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 12 01:25:09.637115 containerd[1475]: time="2026-03-12T01:25:09.636997112Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.30716034s" Mar 12 01:25:09.637115 containerd[1475]: time="2026-03-12T01:25:09.637090270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 12 01:25:09.639914 containerd[1475]: time="2026-03-12T01:25:09.639207919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 12 01:25:09.667951 containerd[1475]: time="2026-03-12T01:25:09.667848938Z" level=info msg="CreateContainer within sandbox \"69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 12 01:25:09.704721 containerd[1475]: time="2026-03-12T01:25:09.704498317Z" level=info msg="CreateContainer within sandbox \"69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d0bfbd41660e3928e0ba791bb24b3c4cc9e0e76cd7b8cb5c0fee164414d32592\"" Mar 12 01:25:09.706909 containerd[1475]: time="2026-03-12T01:25:09.706807213Z" level=info msg="StartContainer for \"d0bfbd41660e3928e0ba791bb24b3c4cc9e0e76cd7b8cb5c0fee164414d32592\"" Mar 12 01:25:09.775589 systemd[1]: Started cri-containerd-d0bfbd41660e3928e0ba791bb24b3c4cc9e0e76cd7b8cb5c0fee164414d32592.scope - libcontainer container d0bfbd41660e3928e0ba791bb24b3c4cc9e0e76cd7b8cb5c0fee164414d32592. 
Mar 12 01:25:09.871595 systemd-networkd[1408]: vxlan.calico: Gained IPv6LL Mar 12 01:25:09.887100 containerd[1475]: time="2026-03-12T01:25:09.885073415Z" level=info msg="StartContainer for \"d0bfbd41660e3928e0ba791bb24b3c4cc9e0e76cd7b8cb5c0fee164414d32592\" returns successfully" Mar 12 01:25:10.203332 kubelet[2546]: E0312 01:25:10.202688 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:25:10.226585 kubelet[2546]: I0312 01:25:10.226151 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84b6b7d8f4-gct62" podStartSLOduration=36.912045674 podStartE2EDuration="41.226125056s" podCreationTimestamp="2026-03-12 01:24:29 +0000 UTC" firstStartedPulling="2026-03-12 01:25:05.324826244 +0000 UTC m=+62.939189133" lastFinishedPulling="2026-03-12 01:25:09.638905626 +0000 UTC m=+67.253268515" observedRunningTime="2026-03-12 01:25:10.225378857 +0000 UTC m=+67.839741766" watchObservedRunningTime="2026-03-12 01:25:10.226125056 +0000 UTC m=+67.840487935" Mar 12 01:25:10.405474 containerd[1475]: time="2026-03-12T01:25:10.405155021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:10.406918 containerd[1475]: time="2026-03-12T01:25:10.406816716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 12 01:25:10.408466 containerd[1475]: time="2026-03-12T01:25:10.408373435Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:10.412041 containerd[1475]: time="2026-03-12T01:25:10.411948619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:10.413229 containerd[1475]: time="2026-03-12T01:25:10.413153264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 773.79347ms" Mar 12 01:25:10.413229 containerd[1475]: time="2026-03-12T01:25:10.413217900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 12 01:25:10.416751 containerd[1475]: time="2026-03-12T01:25:10.416548858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 12 01:25:10.422600 containerd[1475]: time="2026-03-12T01:25:10.422443563Z" level=info msg="CreateContainer within sandbox \"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 12 01:25:10.444194 containerd[1475]: time="2026-03-12T01:25:10.444106905Z" level=info msg="CreateContainer within sandbox \"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"fbcff8d6c1f13dc3c7a66e398659af19ec77248059b70d05df3270bd53b95874\"" Mar 12 
01:25:10.446391 containerd[1475]: time="2026-03-12T01:25:10.446160107Z" level=info msg="StartContainer for \"fbcff8d6c1f13dc3c7a66e398659af19ec77248059b70d05df3270bd53b95874\"" Mar 12 01:25:10.498753 systemd[1]: Started cri-containerd-fbcff8d6c1f13dc3c7a66e398659af19ec77248059b70d05df3270bd53b95874.scope - libcontainer container fbcff8d6c1f13dc3c7a66e398659af19ec77248059b70d05df3270bd53b95874. Mar 12 01:25:10.581185 containerd[1475]: time="2026-03-12T01:25:10.581026985Z" level=info msg="StartContainer for \"fbcff8d6c1f13dc3c7a66e398659af19ec77248059b70d05df3270bd53b95874\" returns successfully" Mar 12 01:25:11.922038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3616574347.mount: Deactivated successfully. Mar 12 01:25:12.912065 containerd[1475]: time="2026-03-12T01:25:12.911484284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:12.915957 containerd[1475]: time="2026-03-12T01:25:12.913208174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 12 01:25:12.916518 containerd[1475]: time="2026-03-12T01:25:12.916443372Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:12.922213 containerd[1475]: time="2026-03-12T01:25:12.922099617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:12.923814 containerd[1475]: time="2026-03-12T01:25:12.923356392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.506713482s" Mar 12 01:25:12.923814 containerd[1475]: time="2026-03-12T01:25:12.923404709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 12 01:25:12.928222 containerd[1475]: time="2026-03-12T01:25:12.928101531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 01:25:12.935684 containerd[1475]: time="2026-03-12T01:25:12.935478404Z" level=info msg="CreateContainer within sandbox \"96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 12 01:25:12.970802 containerd[1475]: time="2026-03-12T01:25:12.970690147Z" level=info msg="CreateContainer within sandbox \"96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f7932bac2266facf24d46161b16bdb9c958866314c1a517ea223e92c9b007c89\"" Mar 12 01:25:12.972854 containerd[1475]: time="2026-03-12T01:25:12.972732142Z" level=info msg="StartContainer for \"f7932bac2266facf24d46161b16bdb9c958866314c1a517ea223e92c9b007c89\"" Mar 12 01:25:13.055659 systemd[1]: Started cri-containerd-f7932bac2266facf24d46161b16bdb9c958866314c1a517ea223e92c9b007c89.scope - libcontainer container f7932bac2266facf24d46161b16bdb9c958866314c1a517ea223e92c9b007c89. 
Mar 12 01:25:13.232962 containerd[1475]: time="2026-03-12T01:25:13.232341950Z" level=info msg="StartContainer for \"f7932bac2266facf24d46161b16bdb9c958866314c1a517ea223e92c9b007c89\" returns successfully" Mar 12 01:25:13.549773 containerd[1475]: time="2026-03-12T01:25:13.549710559Z" level=info msg="StopPodSandbox for \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\"" Mar 12 01:25:13.813239 kubelet[2546]: I0312 01:25:13.811030 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-wqn8p" podStartSLOduration=39.839146332 podStartE2EDuration="45.811006579s" podCreationTimestamp="2026-03-12 01:24:28 +0000 UTC" firstStartedPulling="2026-03-12 01:25:06.955804723 +0000 UTC m=+64.570167602" lastFinishedPulling="2026-03-12 01:25:12.92766497 +0000 UTC m=+70.542027849" observedRunningTime="2026-03-12 01:25:13.31891319 +0000 UTC m=+70.933276089" watchObservedRunningTime="2026-03-12 01:25:13.811006579 +0000 UTC m=+71.425369459" Mar 12 01:25:14.028945 containerd[1475]: 2026-03-12 01:25:13.812 [INFO][4971] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Mar 12 01:25:14.028945 containerd[1475]: 2026-03-12 01:25:13.812 [INFO][4971] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" iface="eth0" netns="/var/run/netns/cni-3e72aaba-3a82-f80b-3493-2fc3ca2649f4" Mar 12 01:25:14.028945 containerd[1475]: 2026-03-12 01:25:13.813 [INFO][4971] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" iface="eth0" netns="/var/run/netns/cni-3e72aaba-3a82-f80b-3493-2fc3ca2649f4" Mar 12 01:25:14.028945 containerd[1475]: 2026-03-12 01:25:13.815 [INFO][4971] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" iface="eth0" netns="/var/run/netns/cni-3e72aaba-3a82-f80b-3493-2fc3ca2649f4" Mar 12 01:25:14.028945 containerd[1475]: 2026-03-12 01:25:13.815 [INFO][4971] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Mar 12 01:25:14.028945 containerd[1475]: 2026-03-12 01:25:13.815 [INFO][4971] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Mar 12 01:25:14.028945 containerd[1475]: 2026-03-12 01:25:13.920 [INFO][4980] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" HandleID="k8s-pod-network.0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Workload="localhost-k8s-csi--node--driver--s5nns-eth0" Mar 12 01:25:14.028945 containerd[1475]: 2026-03-12 01:25:13.921 [INFO][4980] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:14.028945 containerd[1475]: 2026-03-12 01:25:13.921 [INFO][4980] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:25:14.028945 containerd[1475]: 2026-03-12 01:25:14.001 [WARNING][4980] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" HandleID="k8s-pod-network.0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Workload="localhost-k8s-csi--node--driver--s5nns-eth0" Mar 12 01:25:14.028945 containerd[1475]: 2026-03-12 01:25:14.002 [INFO][4980] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" HandleID="k8s-pod-network.0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Workload="localhost-k8s-csi--node--driver--s5nns-eth0" Mar 12 01:25:14.028945 containerd[1475]: 2026-03-12 01:25:14.010 [INFO][4980] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:25:14.028945 containerd[1475]: 2026-03-12 01:25:14.023 [INFO][4971] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Mar 12 01:25:14.032131 containerd[1475]: time="2026-03-12T01:25:14.031569281Z" level=info msg="TearDown network for sandbox \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\" successfully" Mar 12 01:25:14.032131 containerd[1475]: time="2026-03-12T01:25:14.031696922Z" level=info msg="StopPodSandbox for \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\" returns successfully" Mar 12 01:25:14.037885 systemd[1]: run-netns-cni\x2d3e72aaba\x2d3a82\x2df80b\x2d3493\x2d2fc3ca2649f4.mount: Deactivated successfully. Mar 12 01:25:14.139718 containerd[1475]: time="2026-03-12T01:25:14.139332788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5nns,Uid:ed55f459-74a9-4d53-811f-2b6098967bb3,Namespace:calico-system,Attempt:1,}" Mar 12 01:25:14.617190 systemd-networkd[1408]: calibdd79494091: Link UP Mar 12 01:25:14.617649 systemd-networkd[1408]: calibdd79494091: Gained carrier Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.343 [INFO][4989] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--s5nns-eth0 csi-node-driver- calico-system ed55f459-74a9-4d53-811f-2b6098967bb3 1108 0 2026-03-12 01:24:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-s5nns eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibdd79494091 [] [] }} ContainerID="57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" Namespace="calico-system" Pod="csi-node-driver-s5nns" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5nns-" Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.343 [INFO][4989] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" Namespace="calico-system" Pod="csi-node-driver-s5nns" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5nns-eth0" Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.441 [INFO][5020] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" HandleID="k8s-pod-network.57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" Workload="localhost-k8s-csi--node--driver--s5nns-eth0" Mar 12 
01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.472 [INFO][5020] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" HandleID="k8s-pod-network.57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" Workload="localhost-k8s-csi--node--driver--s5nns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e8e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-s5nns", "timestamp":"2026-03-12 01:25:14.441239141 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00068af20)} Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.472 [INFO][5020] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.472 [INFO][5020] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.472 [INFO][5020] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.499 [INFO][5020] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" host="localhost" Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.509 [INFO][5020] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.527 [INFO][5020] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.532 [INFO][5020] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.539 [INFO][5020] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.539 [INFO][5020] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" host="localhost" Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.542 [INFO][5020] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3 Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.549 [INFO][5020] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" host="localhost" Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.598 [INFO][5020] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" host="localhost" Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.600 [INFO][5020] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" host="localhost" Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.600 [INFO][5020] ipam/ipam_plugin.go 459: Released host-wide IPAM 
lock. Mar 12 01:25:14.646843 containerd[1475]: 2026-03-12 01:25:14.600 [INFO][5020] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" HandleID="k8s-pod-network.57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" Workload="localhost-k8s-csi--node--driver--s5nns-eth0" Mar 12 01:25:14.647640 containerd[1475]: 2026-03-12 01:25:14.607 [INFO][4989] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" Namespace="calico-system" Pod="csi-node-driver-s5nns" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5nns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s5nns-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ed55f459-74a9-4d53-811f-2b6098967bb3", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-s5nns", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibdd79494091", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:14.647640 containerd[1475]: 2026-03-12 01:25:14.608 [INFO][4989] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" Namespace="calico-system" Pod="csi-node-driver-s5nns" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5nns-eth0" Mar 12 01:25:14.647640 containerd[1475]: 2026-03-12 01:25:14.608 [INFO][4989] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibdd79494091 ContainerID="57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" Namespace="calico-system" Pod="csi-node-driver-s5nns" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5nns-eth0" Mar 12 01:25:14.647640 containerd[1475]: 2026-03-12 01:25:14.622 [INFO][4989] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" Namespace="calico-system" Pod="csi-node-driver-s5nns" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5nns-eth0" Mar 12 01:25:14.647640 containerd[1475]: 2026-03-12 01:25:14.623 [INFO][4989] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" Namespace="calico-system" Pod="csi-node-driver-s5nns" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--s5nns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s5nns-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ed55f459-74a9-4d53-811f-2b6098967bb3", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3", Pod:"csi-node-driver-s5nns", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibdd79494091", MAC:"ca:0d:0d:90:23:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:14.647640 containerd[1475]: 2026-03-12 01:25:14.638 [INFO][4989] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3" Namespace="calico-system" Pod="csi-node-driver-s5nns" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5nns-eth0" Mar 12 01:25:14.724030 containerd[1475]: time="2026-03-12T01:25:14.723869991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:25:14.724363 containerd[1475]: time="2026-03-12T01:25:14.724089579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:25:14.724722 containerd[1475]: time="2026-03-12T01:25:14.724213704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:14.724814 containerd[1475]: time="2026-03-12T01:25:14.724659121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:14.789745 systemd[1]: Started cri-containerd-57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3.scope - libcontainer container 57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3. 
Mar 12 01:25:14.811699 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:25:14.844118 containerd[1475]: time="2026-03-12T01:25:14.844047175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5nns,Uid:ed55f459-74a9-4d53-811f-2b6098967bb3,Namespace:calico-system,Attempt:1,} returns sandbox id \"57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3\"" Mar 12 01:25:15.453790 systemd[1]: Started sshd@7-10.0.0.53:22-10.0.0.1:40636.service - OpenSSH per-connection server daemon (10.0.0.1:40636). Mar 12 01:25:15.570972 sshd[5135]: Accepted publickey for core from 10.0.0.1 port 40636 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:25:15.574736 sshd[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:25:15.594756 systemd-logind[1453]: New session 8 of user core. Mar 12 01:25:15.599724 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 12 01:25:16.141538 sshd[5135]: pam_unix(sshd:session): session closed for user core Mar 12 01:25:16.146792 systemd[1]: sshd@7-10.0.0.53:22-10.0.0.1:40636.service: Deactivated successfully. Mar 12 01:25:16.150686 systemd[1]: session-8.scope: Deactivated successfully. Mar 12 01:25:16.154776 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit. Mar 12 01:25:16.157485 systemd-logind[1453]: Removed session 8. Mar 12 01:25:16.268958 systemd-networkd[1408]: calibdd79494091: Gained IPv6LL Mar 12 01:25:16.663435 containerd[1475]: time="2026-03-12T01:25:16.662406619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:16.664140 containerd[1475]: time="2026-03-12T01:25:16.664054954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 12 01:25:16.666564 containerd[1475]: time="2026-03-12T01:25:16.666395819Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:16.670372 containerd[1475]: time="2026-03-12T01:25:16.670216187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:16.671736 containerd[1475]: time="2026-03-12T01:25:16.671654487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.743459355s" Mar 12 01:25:16.671794 containerd[1475]: time="2026-03-12T01:25:16.671745772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 12 01:25:16.673865 containerd[1475]: time="2026-03-12T01:25:16.673465431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 01:25:16.680952 containerd[1475]: time="2026-03-12T01:25:16.680875117Z" level=info msg="CreateContainer within sandbox 
\"789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 01:25:16.704546 containerd[1475]: time="2026-03-12T01:25:16.704242308Z" level=info msg="CreateContainer within sandbox \"789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"155d3ee5d6fff08104772e71bbf60d9c0bb306e7966fc21825df6cf90f0b5f79\"" Mar 12 01:25:16.705752 containerd[1475]: time="2026-03-12T01:25:16.705243422Z" level=info msg="StartContainer for \"155d3ee5d6fff08104772e71bbf60d9c0bb306e7966fc21825df6cf90f0b5f79\"" Mar 12 01:25:16.816428 containerd[1475]: time="2026-03-12T01:25:16.813903566Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:16.816428 containerd[1475]: time="2026-03-12T01:25:16.815384914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 12 01:25:16.822199 containerd[1475]: time="2026-03-12T01:25:16.822156400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 148.653859ms" Mar 12 01:25:16.822462 containerd[1475]: time="2026-03-12T01:25:16.822441357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 12 01:25:16.825495 containerd[1475]: time="2026-03-12T01:25:16.825415900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 12 01:25:16.835579 containerd[1475]: time="2026-03-12T01:25:16.835526343Z" level=info msg="CreateContainer within sandbox \"8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 01:25:16.845532 systemd[1]: Started cri-containerd-155d3ee5d6fff08104772e71bbf60d9c0bb306e7966fc21825df6cf90f0b5f79.scope - libcontainer container 155d3ee5d6fff08104772e71bbf60d9c0bb306e7966fc21825df6cf90f0b5f79. Mar 12 01:25:16.869783 containerd[1475]: time="2026-03-12T01:25:16.869543382Z" level=info msg="CreateContainer within sandbox \"8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"13924d0f9a47a030878a1630320117aa0fda63299afeac3040044308bbb72b1e\"" Mar 12 01:25:16.870810 containerd[1475]: time="2026-03-12T01:25:16.870728320Z" level=info msg="StartContainer for \"13924d0f9a47a030878a1630320117aa0fda63299afeac3040044308bbb72b1e\"" Mar 12 01:25:16.942586 systemd[1]: Started cri-containerd-13924d0f9a47a030878a1630320117aa0fda63299afeac3040044308bbb72b1e.scope - libcontainer container 13924d0f9a47a030878a1630320117aa0fda63299afeac3040044308bbb72b1e. 
Mar 12 01:25:16.967245 containerd[1475]: time="2026-03-12T01:25:16.967114888Z" level=info msg="StartContainer for \"155d3ee5d6fff08104772e71bbf60d9c0bb306e7966fc21825df6cf90f0b5f79\" returns successfully" Mar 12 01:25:17.092692 containerd[1475]: time="2026-03-12T01:25:17.092631289Z" level=info msg="StartContainer for \"13924d0f9a47a030878a1630320117aa0fda63299afeac3040044308bbb72b1e\" returns successfully" Mar 12 01:25:17.335180 kubelet[2546]: I0312 01:25:17.335071 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6cc8b6d44c-rfh26" podStartSLOduration=39.725027321 podStartE2EDuration="49.335048673s" podCreationTimestamp="2026-03-12 01:24:28 +0000 UTC" firstStartedPulling="2026-03-12 01:25:07.214001156 +0000 UTC m=+64.828364035" lastFinishedPulling="2026-03-12 01:25:16.824022508 +0000 UTC m=+74.438385387" observedRunningTime="2026-03-12 01:25:17.334678672 +0000 UTC m=+74.949041581" watchObservedRunningTime="2026-03-12 01:25:17.335048673 +0000 UTC m=+74.949411562" Mar 12 01:25:18.329202 kubelet[2546]: I0312 01:25:18.328985 2546 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:25:18.330094 kubelet[2546]: I0312 01:25:18.329475 2546 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:25:18.956558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3475204963.mount: Deactivated successfully. Mar 12 01:25:18.992897 containerd[1475]: time="2026-03-12T01:25:18.992638136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:18.995350 containerd[1475]: time="2026-03-12T01:25:18.994948914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 12 01:25:18.996846 containerd[1475]: time="2026-03-12T01:25:18.996733494Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:19.001078 containerd[1475]: time="2026-03-12T01:25:19.000977183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:19.001857 containerd[1475]: time="2026-03-12T01:25:19.001802782Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.176326922s" Mar 12 01:25:19.001938 containerd[1475]: time="2026-03-12T01:25:19.001858292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 12 01:25:19.007049 containerd[1475]: time="2026-03-12T01:25:19.006886623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 12 01:25:19.013830 containerd[1475]: time="2026-03-12T01:25:19.013571169Z" level=info msg="CreateContainer within sandbox \"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680\" for container 
&ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 12 01:25:19.046417 containerd[1475]: time="2026-03-12T01:25:19.046191989Z" level=info msg="CreateContainer within sandbox \"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b58bac7b7a9aacaa17ae3a1b0ac81bdb5285ba33cf58dff4917143157cf641d1\"" Mar 12 01:25:19.048584 containerd[1475]: time="2026-03-12T01:25:19.048128578Z" level=info msg="StartContainer for \"b58bac7b7a9aacaa17ae3a1b0ac81bdb5285ba33cf58dff4917143157cf641d1\"" Mar 12 01:25:19.199628 systemd[1]: Started cri-containerd-b58bac7b7a9aacaa17ae3a1b0ac81bdb5285ba33cf58dff4917143157cf641d1.scope - libcontainer container b58bac7b7a9aacaa17ae3a1b0ac81bdb5285ba33cf58dff4917143157cf641d1. Mar 12 01:25:19.341371 containerd[1475]: time="2026-03-12T01:25:19.341150530Z" level=info msg="StartContainer for \"b58bac7b7a9aacaa17ae3a1b0ac81bdb5285ba33cf58dff4917143157cf641d1\" returns successfully" Mar 12 01:25:19.548489 kubelet[2546]: E0312 01:25:19.548120 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:25:19.884418 containerd[1475]: time="2026-03-12T01:25:19.884350934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:19.885525 containerd[1475]: time="2026-03-12T01:25:19.885435455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 12 01:25:19.888354 containerd[1475]: time="2026-03-12T01:25:19.887711458Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:19.892417 containerd[1475]: time="2026-03-12T01:25:19.892362912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:19.893702 containerd[1475]: time="2026-03-12T01:25:19.893541794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 886.571529ms" Mar 12 01:25:19.893702 containerd[1475]: time="2026-03-12T01:25:19.893611772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 12 01:25:19.901081 containerd[1475]: time="2026-03-12T01:25:19.900894292Z" level=info msg="CreateContainer within sandbox \"57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 12 01:25:19.922777 containerd[1475]: time="2026-03-12T01:25:19.922685067Z" level=info msg="CreateContainer within sandbox \"57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2ab904b486158081516f334918909124bfb58a93f5cdb34bc44c7f7fae191b6b\"" Mar 12 01:25:19.924473 containerd[1475]: time="2026-03-12T01:25:19.924442230Z" level=info 
msg="StartContainer for \"2ab904b486158081516f334918909124bfb58a93f5cdb34bc44c7f7fae191b6b\"" Mar 12 01:25:19.984570 systemd[1]: Started cri-containerd-2ab904b486158081516f334918909124bfb58a93f5cdb34bc44c7f7fae191b6b.scope - libcontainer container 2ab904b486158081516f334918909124bfb58a93f5cdb34bc44c7f7fae191b6b. Mar 12 01:25:20.056552 kubelet[2546]: I0312 01:25:20.056396 2546 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:25:20.058046 containerd[1475]: time="2026-03-12T01:25:20.057878041Z" level=info msg="StartContainer for \"2ab904b486158081516f334918909124bfb58a93f5cdb34bc44c7f7fae191b6b\" returns successfully" Mar 12 01:25:20.061905 containerd[1475]: time="2026-03-12T01:25:20.061797689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 12 01:25:20.106146 kubelet[2546]: I0312 01:25:20.105686 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6cc8b6d44c-pwhfw" podStartSLOduration=42.402172752 podStartE2EDuration="52.105658414s" podCreationTimestamp="2026-03-12 01:24:28 +0000 UTC" firstStartedPulling="2026-03-12 01:25:06.969737162 +0000 UTC m=+64.584100051" lastFinishedPulling="2026-03-12 01:25:16.673222824 +0000 UTC m=+74.287585713" observedRunningTime="2026-03-12 01:25:17.376856848 +0000 UTC m=+74.991219747" watchObservedRunningTime="2026-03-12 01:25:20.105658414 +0000 UTC m=+77.720021473" Mar 12 01:25:20.494929 containerd[1475]: time="2026-03-12T01:25:20.494798141Z" level=info msg="StopContainer for \"fbcff8d6c1f13dc3c7a66e398659af19ec77248059b70d05df3270bd53b95874\" with timeout 30 (s)" Mar 12 01:25:20.495202 containerd[1475]: time="2026-03-12T01:25:20.494963934Z" level=info msg="StopContainer for \"b58bac7b7a9aacaa17ae3a1b0ac81bdb5285ba33cf58dff4917143157cf641d1\" with timeout 30 (s)" Mar 12 01:25:20.499289 containerd[1475]: time="2026-03-12T01:25:20.499115500Z" level=info msg="Stop container \"b58bac7b7a9aacaa17ae3a1b0ac81bdb5285ba33cf58dff4917143157cf641d1\" with signal terminated" Mar 12 01:25:20.500688 containerd[1475]: time="2026-03-12T01:25:20.500660194Z" level=info msg="Stop container \"fbcff8d6c1f13dc3c7a66e398659af19ec77248059b70d05df3270bd53b95874\" with signal terminated" Mar 12 01:25:20.519736 systemd[1]: cri-containerd-b58bac7b7a9aacaa17ae3a1b0ac81bdb5285ba33cf58dff4917143157cf641d1.scope: Deactivated successfully. Mar 12 01:25:20.551242 systemd[1]: cri-containerd-fbcff8d6c1f13dc3c7a66e398659af19ec77248059b70d05df3270bd53b95874.scope: Deactivated successfully. Mar 12 01:25:20.558578 kubelet[2546]: E0312 01:25:20.558421 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:25:20.586018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b58bac7b7a9aacaa17ae3a1b0ac81bdb5285ba33cf58dff4917143157cf641d1-rootfs.mount: Deactivated successfully. Mar 12 01:25:20.606069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbcff8d6c1f13dc3c7a66e398659af19ec77248059b70d05df3270bd53b95874-rootfs.mount: Deactivated successfully. 
Mar 12 01:25:20.608392 containerd[1475]: time="2026-03-12T01:25:20.586450450Z" level=info msg="shim disconnected" id=b58bac7b7a9aacaa17ae3a1b0ac81bdb5285ba33cf58dff4917143157cf641d1 namespace=k8s.io Mar 12 01:25:20.608503 containerd[1475]: time="2026-03-12T01:25:20.608405594Z" level=warning msg="cleaning up after shim disconnected" id=b58bac7b7a9aacaa17ae3a1b0ac81bdb5285ba33cf58dff4917143157cf641d1 namespace=k8s.io Mar 12 01:25:20.608503 containerd[1475]: time="2026-03-12T01:25:20.608439877Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:25:20.608876 containerd[1475]: time="2026-03-12T01:25:20.603522770Z" level=info msg="shim disconnected" id=fbcff8d6c1f13dc3c7a66e398659af19ec77248059b70d05df3270bd53b95874 namespace=k8s.io Mar 12 01:25:20.608936 containerd[1475]: time="2026-03-12T01:25:20.608866373Z" level=warning msg="cleaning up after shim disconnected" id=fbcff8d6c1f13dc3c7a66e398659af19ec77248059b70d05df3270bd53b95874 namespace=k8s.io Mar 12 01:25:20.608936 containerd[1475]: time="2026-03-12T01:25:20.608894525Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:25:20.754928 containerd[1475]: time="2026-03-12T01:25:20.754746902Z" level=info msg="StopContainer for \"fbcff8d6c1f13dc3c7a66e398659af19ec77248059b70d05df3270bd53b95874\" returns successfully" Mar 12 01:25:20.763805 containerd[1475]: time="2026-03-12T01:25:20.763680036Z" level=info msg="StopContainer for \"b58bac7b7a9aacaa17ae3a1b0ac81bdb5285ba33cf58dff4917143157cf641d1\" returns successfully" Mar 12 01:25:20.768482 containerd[1475]: time="2026-03-12T01:25:20.768109690Z" level=info msg="StopPodSandbox for \"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680\"" Mar 12 01:25:20.768482 containerd[1475]: time="2026-03-12T01:25:20.768242330Z" level=info msg="Container to stop \"fbcff8d6c1f13dc3c7a66e398659af19ec77248059b70d05df3270bd53b95874\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 12 01:25:20.768482 containerd[1475]: time="2026-03-12T01:25:20.768351760Z" level=info msg="Container to stop \"b58bac7b7a9aacaa17ae3a1b0ac81bdb5285ba33cf58dff4917143157cf641d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 12 01:25:20.773717 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680-shm.mount: Deactivated successfully. Mar 12 01:25:20.798562 systemd[1]: cri-containerd-90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680.scope: Deactivated successfully. Mar 12 01:25:20.838745 containerd[1475]: time="2026-03-12T01:25:20.838416279Z" level=info msg="shim disconnected" id=90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680 namespace=k8s.io Mar 12 01:25:20.838745 containerd[1475]: time="2026-03-12T01:25:20.838498419Z" level=warning msg="cleaning up after shim disconnected" id=90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680 namespace=k8s.io Mar 12 01:25:20.838745 containerd[1475]: time="2026-03-12T01:25:20.838513436Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:25:20.956869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680-rootfs.mount: Deactivated successfully. 
Mar 12 01:25:20.998771 kubelet[2546]: I0312 01:25:20.998455 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7b87b6b94d-6dx4w" podStartSLOduration=35.273216236 podStartE2EDuration="47.998431323s" podCreationTimestamp="2026-03-12 01:24:33 +0000 UTC" firstStartedPulling="2026-03-12 01:25:06.278678908 +0000 UTC m=+63.893041787" lastFinishedPulling="2026-03-12 01:25:19.003893985 +0000 UTC m=+76.618256874" observedRunningTime="2026-03-12 01:25:20.389919379 +0000 UTC m=+78.004282278" watchObservedRunningTime="2026-03-12 01:25:20.998431323 +0000 UTC m=+78.612794201" Mar 12 01:25:20.999453 systemd-networkd[1408]: cali3e726c343b6: Link DOWN Mar 12 01:25:20.999502 systemd-networkd[1408]: cali3e726c343b6: Lost carrier Mar 12 01:25:21.174094 systemd[1]: Started sshd@8-10.0.0.53:22-10.0.0.1:40652.service - OpenSSH per-connection server daemon (10.0.0.1:40652). Mar 12 01:25:21.263241 containerd[1475]: 2026-03-12 01:25:20.995 [INFO][5476] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Mar 12 01:25:21.263241 containerd[1475]: 2026-03-12 01:25:20.997 [INFO][5476] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" iface="eth0" netns="/var/run/netns/cni-93698463-e7f8-1be8-2c7e-6fdf57ae09dc" Mar 12 01:25:21.263241 containerd[1475]: 2026-03-12 01:25:20.997 [INFO][5476] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" iface="eth0" netns="/var/run/netns/cni-93698463-e7f8-1be8-2c7e-6fdf57ae09dc" Mar 12 01:25:21.263241 containerd[1475]: 2026-03-12 01:25:21.016 [INFO][5476] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" after=19.220643ms iface="eth0" netns="/var/run/netns/cni-93698463-e7f8-1be8-2c7e-6fdf57ae09dc" Mar 12 01:25:21.263241 containerd[1475]: 2026-03-12 01:25:21.016 [INFO][5476] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Mar 12 01:25:21.263241 containerd[1475]: 2026-03-12 01:25:21.016 [INFO][5476] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Mar 12 01:25:21.263241 containerd[1475]: 2026-03-12 01:25:21.120 [INFO][5489] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" HandleID="k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:25:21.263241 containerd[1475]: 2026-03-12 01:25:21.122 [INFO][5489] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:21.263241 containerd[1475]: 2026-03-12 01:25:21.122 [INFO][5489] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:25:21.263241 containerd[1475]: 2026-03-12 01:25:21.244 [INFO][5489] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" HandleID="k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:25:21.263241 containerd[1475]: 2026-03-12 01:25:21.244 [INFO][5489] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" HandleID="k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:25:21.263241 containerd[1475]: 2026-03-12 01:25:21.250 [INFO][5489] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:25:21.263241 containerd[1475]: 2026-03-12 01:25:21.256 [INFO][5476] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Mar 12 01:25:21.272330 containerd[1475]: time="2026-03-12T01:25:21.271964997Z" level=info msg="TearDown network for sandbox \"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680\" successfully" Mar 12 01:25:21.272330 containerd[1475]: time="2026-03-12T01:25:21.272041767Z" level=info msg="StopPodSandbox for \"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680\" returns successfully" Mar 12 01:25:21.273393 containerd[1475]: time="2026-03-12T01:25:21.273234541Z" level=info msg="StopPodSandbox for \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\"" Mar 12 01:25:21.277593 systemd[1]: run-netns-cni\x2d93698463\x2de7f8\x2d1be8\x2d2c7e\x2d6fdf57ae09dc.mount: Deactivated successfully. Mar 12 01:25:21.297566 sshd[5510]: Accepted publickey for core from 10.0.0.1 port 40652 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:25:21.299801 sshd[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:25:21.313028 systemd-logind[1453]: New session 9 of user core. Mar 12 01:25:21.317803 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 12 01:25:21.384722 kubelet[2546]: I0312 01:25:21.376362 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Mar 12 01:25:21.448537 containerd[1475]: 2026-03-12 01:25:21.371 [WARNING][5528] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0", GenerateName:"whisker-7b87b6b94d-", Namespace:"calico-system", SelfLink:"", UID:"dbe4f146-b6bc-4508-8482-6ee38f916cab", ResourceVersion:"1214", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b87b6b94d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680", Pod:"whisker-7b87b6b94d-6dx4w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3e726c343b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:21.448537 containerd[1475]: 2026-03-12 01:25:21.372 [INFO][5528] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Mar 12 01:25:21.448537 containerd[1475]: 2026-03-12 01:25:21.372 [INFO][5528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" iface="eth0" netns="" Mar 12 01:25:21.448537 containerd[1475]: 2026-03-12 01:25:21.372 [INFO][5528] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Mar 12 01:25:21.448537 containerd[1475]: 2026-03-12 01:25:21.372 [INFO][5528] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Mar 12 01:25:21.448537 containerd[1475]: 2026-03-12 01:25:21.422 [INFO][5537] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" HandleID="k8s-pod-network.916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:25:21.448537 containerd[1475]: 2026-03-12 01:25:21.422 [INFO][5537] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:21.448537 containerd[1475]: 2026-03-12 01:25:21.422 [INFO][5537] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:25:21.448537 containerd[1475]: 2026-03-12 01:25:21.433 [WARNING][5537] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" HandleID="k8s-pod-network.916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:25:21.448537 containerd[1475]: 2026-03-12 01:25:21.433 [INFO][5537] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" HandleID="k8s-pod-network.916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:25:21.448537 containerd[1475]: 2026-03-12 01:25:21.437 [INFO][5537] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:25:21.448537 containerd[1475]: 2026-03-12 01:25:21.442 [INFO][5528] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Mar 12 01:25:21.448537 containerd[1475]: time="2026-03-12T01:25:21.447793187Z" level=info msg="TearDown network for sandbox \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\" successfully" Mar 12 01:25:21.448537 containerd[1475]: time="2026-03-12T01:25:21.447832058Z" level=info msg="StopPodSandbox for \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\" returns successfully" Mar 12 01:25:21.559698 kubelet[2546]: I0312 01:25:21.558726 2546 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dbe4f146-b6bc-4508-8482-6ee38f916cab-whisker-backend-key-pair\") pod \"dbe4f146-b6bc-4508-8482-6ee38f916cab\" (UID: \"dbe4f146-b6bc-4508-8482-6ee38f916cab\") " Mar 12 01:25:21.559698 kubelet[2546]: I0312 01:25:21.558836 2546 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbe4f146-b6bc-4508-8482-6ee38f916cab-whisker-ca-bundle\") pod \"dbe4f146-b6bc-4508-8482-6ee38f916cab\" (UID: \"dbe4f146-b6bc-4508-8482-6ee38f916cab\") " Mar 12 01:25:21.559698 kubelet[2546]: I0312 01:25:21.558866 2546 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lrtr\" (UniqueName: \"kubernetes.io/projected/dbe4f146-b6bc-4508-8482-6ee38f916cab-kube-api-access-2lrtr\") pod \"dbe4f146-b6bc-4508-8482-6ee38f916cab\" (UID: \"dbe4f146-b6bc-4508-8482-6ee38f916cab\") " Mar 12 01:25:21.559698 kubelet[2546]: I0312 01:25:21.558895 2546 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/dbe4f146-b6bc-4508-8482-6ee38f916cab-nginx-config\") pod \"dbe4f146-b6bc-4508-8482-6ee38f916cab\" (UID: \"dbe4f146-b6bc-4508-8482-6ee38f916cab\") " Mar 12 01:25:21.577612 kubelet[2546]: I0312 01:25:21.569445 2546 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbe4f146-b6bc-4508-8482-6ee38f916cab-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "dbe4f146-b6bc-4508-8482-6ee38f916cab" (UID: "dbe4f146-b6bc-4508-8482-6ee38f916cab"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 01:25:21.577781 systemd[1]: var-lib-kubelet-pods-dbe4f146\x2db6bc\x2d4508\x2d8482\x2d6ee38f916cab-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 12 01:25:21.583824 kubelet[2546]: I0312 01:25:21.577033 2546 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbe4f146-b6bc-4508-8482-6ee38f916cab-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "dbe4f146-b6bc-4508-8482-6ee38f916cab" (UID: "dbe4f146-b6bc-4508-8482-6ee38f916cab"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 01:25:21.583824 kubelet[2546]: I0312 01:25:21.580052 2546 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe4f146-b6bc-4508-8482-6ee38f916cab-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "dbe4f146-b6bc-4508-8482-6ee38f916cab" (UID: "dbe4f146-b6bc-4508-8482-6ee38f916cab"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 12 01:25:21.583824 kubelet[2546]: I0312 01:25:21.582197 2546 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe4f146-b6bc-4508-8482-6ee38f916cab-kube-api-access-2lrtr" (OuterVolumeSpecName: "kube-api-access-2lrtr") pod "dbe4f146-b6bc-4508-8482-6ee38f916cab" (UID: "dbe4f146-b6bc-4508-8482-6ee38f916cab"). InnerVolumeSpecName "kube-api-access-2lrtr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 01:25:21.587774 systemd[1]: var-lib-kubelet-pods-dbe4f146\x2db6bc\x2d4508\x2d8482\x2d6ee38f916cab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2lrtr.mount: Deactivated successfully. Mar 12 01:25:21.660055 kubelet[2546]: I0312 01:25:21.659997 2546 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dbe4f146-b6bc-4508-8482-6ee38f916cab-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 12 01:25:21.660055 kubelet[2546]: I0312 01:25:21.660040 2546 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbe4f146-b6bc-4508-8482-6ee38f916cab-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 12 01:25:21.660055 kubelet[2546]: I0312 01:25:21.660050 2546 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2lrtr\" (UniqueName: \"kubernetes.io/projected/dbe4f146-b6bc-4508-8482-6ee38f916cab-kube-api-access-2lrtr\") on node \"localhost\" DevicePath \"\"" Mar 12 01:25:21.660055 kubelet[2546]: I0312 01:25:21.660061 2546 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/dbe4f146-b6bc-4508-8482-6ee38f916cab-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 12 01:25:21.716833 sshd[5510]: pam_unix(sshd:session): session closed for user core Mar 12 01:25:21.725368 systemd[1]: sshd@8-10.0.0.53:22-10.0.0.1:40652.service: Deactivated successfully. Mar 12 01:25:21.729959 systemd[1]: session-9.scope: Deactivated successfully. Mar 12 01:25:21.731500 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit. Mar 12 01:25:21.733550 systemd-logind[1453]: Removed session 9. 
Mar 12 01:25:22.287147 containerd[1475]: time="2026-03-12T01:25:22.286978842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:22.288742 containerd[1475]: time="2026-03-12T01:25:22.288610512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 12 01:25:22.290592 containerd[1475]: time="2026-03-12T01:25:22.290529599Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:22.294515 containerd[1475]: time="2026-03-12T01:25:22.294223167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:25:22.295473 containerd[1475]: time="2026-03-12T01:25:22.295398636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.233562368s" Mar 12 01:25:22.295473 containerd[1475]: time="2026-03-12T01:25:22.295470767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 12 01:25:22.305566 containerd[1475]: time="2026-03-12T01:25:22.305451860Z" level=info msg="CreateContainer within sandbox \"57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 12 01:25:22.326107 containerd[1475]: time="2026-03-12T01:25:22.325896542Z" level=info msg="CreateContainer within sandbox \"57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3ab5155c6d9c275b59a665ab9463cef184abc3f3d0b957834acacbb355d48ff1\"" Mar 12 01:25:22.327815 containerd[1475]: time="2026-03-12T01:25:22.327658942Z" level=info msg="StartContainer for \"3ab5155c6d9c275b59a665ab9463cef184abc3f3d0b957834acacbb355d48ff1\"" Mar 12 01:25:22.384586 systemd[1]: Started cri-containerd-3ab5155c6d9c275b59a665ab9463cef184abc3f3d0b957834acacbb355d48ff1.scope - libcontainer container 3ab5155c6d9c275b59a665ab9463cef184abc3f3d0b957834acacbb355d48ff1. Mar 12 01:25:22.409619 systemd[1]: Removed slice kubepods-besteffort-poddbe4f146_b6bc_4508_8482_6ee38f916cab.slice - libcontainer container kubepods-besteffort-poddbe4f146_b6bc_4508_8482_6ee38f916cab.slice. Mar 12 01:25:22.493999 containerd[1475]: time="2026-03-12T01:25:22.493530902Z" level=info msg="StartContainer for \"3ab5155c6d9c275b59a665ab9463cef184abc3f3d0b957834acacbb355d48ff1\" returns successfully" Mar 12 01:25:22.557479 systemd[1]: Created slice kubepods-besteffort-pod59bffcb1_b602_427e_b59e_3a71a71f1f12.slice - libcontainer container kubepods-besteffort-pod59bffcb1_b602_427e_b59e_3a71a71f1f12.slice. 
Mar 12 01:25:22.566883 kubelet[2546]: I0312 01:25:22.566760 2546 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbe4f146-b6bc-4508-8482-6ee38f916cab" path="/var/lib/kubelet/pods/dbe4f146-b6bc-4508-8482-6ee38f916cab/volumes" Mar 12 01:25:22.572165 kubelet[2546]: I0312 01:25:22.571988 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtznq\" (UniqueName: \"kubernetes.io/projected/59bffcb1-b602-427e-b59e-3a71a71f1f12-kube-api-access-xtznq\") pod \"whisker-88cd9787c-mzmcn\" (UID: \"59bffcb1-b602-427e-b59e-3a71a71f1f12\") " pod="calico-system/whisker-88cd9787c-mzmcn" Mar 12 01:25:22.572165 kubelet[2546]: I0312 01:25:22.572144 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59bffcb1-b602-427e-b59e-3a71a71f1f12-whisker-ca-bundle\") pod \"whisker-88cd9787c-mzmcn\" (UID: \"59bffcb1-b602-427e-b59e-3a71a71f1f12\") " pod="calico-system/whisker-88cd9787c-mzmcn" Mar 12 01:25:22.572603 kubelet[2546]: I0312 01:25:22.572188 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/59bffcb1-b602-427e-b59e-3a71a71f1f12-nginx-config\") pod \"whisker-88cd9787c-mzmcn\" (UID: \"59bffcb1-b602-427e-b59e-3a71a71f1f12\") " pod="calico-system/whisker-88cd9787c-mzmcn" Mar 12 01:25:22.574096 kubelet[2546]: I0312 01:25:22.572222 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/59bffcb1-b602-427e-b59e-3a71a71f1f12-whisker-backend-key-pair\") pod \"whisker-88cd9787c-mzmcn\" (UID: \"59bffcb1-b602-427e-b59e-3a71a71f1f12\") " pod="calico-system/whisker-88cd9787c-mzmcn" Mar 12 01:25:22.721827 kubelet[2546]: I0312 01:25:22.721722 2546 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 12 01:25:22.723437 kubelet[2546]: I0312 01:25:22.723195 2546 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 12 01:25:22.871126 containerd[1475]: time="2026-03-12T01:25:22.870578360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-88cd9787c-mzmcn,Uid:59bffcb1-b602-427e-b59e-3a71a71f1f12,Namespace:calico-system,Attempt:0,}" Mar 12 01:25:23.338838 systemd-networkd[1408]: cali96c4880026e: Link UP Mar 12 01:25:23.342473 systemd-networkd[1408]: cali96c4880026e: Gained carrier Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:22.959 [INFO][5604] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--88cd9787c--mzmcn-eth0 whisker-88cd9787c- calico-system 59bffcb1-b602-427e-b59e-3a71a71f1f12 1249 0 2026-03-12 01:25:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:88cd9787c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-88cd9787c-mzmcn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali96c4880026e [] [] }} ContainerID="a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" Namespace="calico-system" Pod="whisker-88cd9787c-mzmcn" 
WorkloadEndpoint="localhost-k8s-whisker--88cd9787c--mzmcn-" Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:22.960 [INFO][5604] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" Namespace="calico-system" Pod="whisker-88cd9787c-mzmcn" WorkloadEndpoint="localhost-k8s-whisker--88cd9787c--mzmcn-eth0" Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:23.098 [INFO][5616] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" HandleID="k8s-pod-network.a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" Workload="localhost-k8s-whisker--88cd9787c--mzmcn-eth0" Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:23.115 [INFO][5616] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" HandleID="k8s-pod-network.a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" Workload="localhost-k8s-whisker--88cd9787c--mzmcn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000354980), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-88cd9787c-mzmcn", "timestamp":"2026-03-12 01:25:23.098966864 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002eadc0)} Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:23.117 [INFO][5616] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:23.117 [INFO][5616] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:23.117 [INFO][5616] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:23.165 [INFO][5616] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" host="localhost" Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:23.184 [INFO][5616] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:23.194 [INFO][5616] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:23.201 [INFO][5616] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:23.210 [INFO][5616] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:23.211 [INFO][5616] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" host="localhost" Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:23.236 [INFO][5616] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509 Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:23.251 [INFO][5616] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" host="localhost" Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:23.323 [INFO][5616] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" host="localhost" Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:23.323 [INFO][5616] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" host="localhost" Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:23.323 [INFO][5616] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:25:23.365746 containerd[1475]: 2026-03-12 01:25:23.323 [INFO][5616] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" HandleID="k8s-pod-network.a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" Workload="localhost-k8s-whisker--88cd9787c--mzmcn-eth0" Mar 12 01:25:23.366794 containerd[1475]: 2026-03-12 01:25:23.329 [INFO][5604] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" Namespace="calico-system" Pod="whisker-88cd9787c-mzmcn" WorkloadEndpoint="localhost-k8s-whisker--88cd9787c--mzmcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--88cd9787c--mzmcn-eth0", GenerateName:"whisker-88cd9787c-", Namespace:"calico-system", SelfLink:"", UID:"59bffcb1-b602-427e-b59e-3a71a71f1f12", ResourceVersion:"1249", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"88cd9787c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-88cd9787c-mzmcn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali96c4880026e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:23.366794 containerd[1475]: 2026-03-12 01:25:23.329 [INFO][5604] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" Namespace="calico-system" Pod="whisker-88cd9787c-mzmcn" WorkloadEndpoint="localhost-k8s-whisker--88cd9787c--mzmcn-eth0" Mar 12 01:25:23.366794 containerd[1475]: 2026-03-12 01:25:23.329 [INFO][5604] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96c4880026e ContainerID="a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" Namespace="calico-system" Pod="whisker-88cd9787c-mzmcn" WorkloadEndpoint="localhost-k8s-whisker--88cd9787c--mzmcn-eth0" Mar 12 01:25:23.366794 containerd[1475]: 2026-03-12 01:25:23.344 [INFO][5604] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" Namespace="calico-system" Pod="whisker-88cd9787c-mzmcn" WorkloadEndpoint="localhost-k8s-whisker--88cd9787c--mzmcn-eth0" Mar 12 01:25:23.366794 containerd[1475]: 2026-03-12 01:25:23.345 [INFO][5604] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" Namespace="calico-system" Pod="whisker-88cd9787c-mzmcn" WorkloadEndpoint="localhost-k8s-whisker--88cd9787c--mzmcn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--88cd9787c--mzmcn-eth0", GenerateName:"whisker-88cd9787c-", Namespace:"calico-system", SelfLink:"", UID:"59bffcb1-b602-427e-b59e-3a71a71f1f12", ResourceVersion:"1249", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 25, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"88cd9787c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509", Pod:"whisker-88cd9787c-mzmcn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali96c4880026e", MAC:"1e:ed:de:31:10:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:25:23.366794 containerd[1475]: 2026-03-12 01:25:23.359 [INFO][5604] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509" Namespace="calico-system" Pod="whisker-88cd9787c-mzmcn" WorkloadEndpoint="localhost-k8s-whisker--88cd9787c--mzmcn-eth0" Mar 12 01:25:23.425478 containerd[1475]: time="2026-03-12T01:25:23.425135416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:25:23.425478 containerd[1475]: time="2026-03-12T01:25:23.425382646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:25:23.425478 containerd[1475]: time="2026-03-12T01:25:23.425406770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:23.425753 containerd[1475]: time="2026-03-12T01:25:23.425565518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:25:23.484547 systemd[1]: Started cri-containerd-a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509.scope - libcontainer container a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509. 
Mar 12 01:25:23.513948 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:25:23.557641 containerd[1475]: time="2026-03-12T01:25:23.557588916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-88cd9787c-mzmcn,Uid:59bffcb1-b602-427e-b59e-3a71a71f1f12,Namespace:calico-system,Attempt:0,} returns sandbox id \"a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509\"" Mar 12 01:25:23.566667 containerd[1475]: time="2026-03-12T01:25:23.566485329Z" level=info msg="CreateContainer within sandbox \"a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 12 01:25:23.590832 containerd[1475]: time="2026-03-12T01:25:23.590641351Z" level=info msg="CreateContainer within sandbox \"a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"fe621d49fb9a4d0cf7abed1c11acf0f40ae7f90f9160ba817746f5a85b9e7fc4\"" Mar 12 01:25:23.593381 containerd[1475]: time="2026-03-12T01:25:23.592389042Z" level=info msg="StartContainer for \"fe621d49fb9a4d0cf7abed1c11acf0f40ae7f90f9160ba817746f5a85b9e7fc4\"" Mar 12 01:25:23.656701 systemd[1]: Started cri-containerd-fe621d49fb9a4d0cf7abed1c11acf0f40ae7f90f9160ba817746f5a85b9e7fc4.scope - libcontainer container fe621d49fb9a4d0cf7abed1c11acf0f40ae7f90f9160ba817746f5a85b9e7fc4. Mar 12 01:25:23.768035 containerd[1475]: time="2026-03-12T01:25:23.767917858Z" level=info msg="StartContainer for \"fe621d49fb9a4d0cf7abed1c11acf0f40ae7f90f9160ba817746f5a85b9e7fc4\" returns successfully" Mar 12 01:25:23.775492 containerd[1475]: time="2026-03-12T01:25:23.775342558Z" level=info msg="CreateContainer within sandbox \"a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 12 01:25:23.800303 containerd[1475]: time="2026-03-12T01:25:23.800130959Z" level=info msg="CreateContainer within sandbox \"a19811c8010b55662c99e2f2caa989b02ade543735a0c59b68641e49cc59e509\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2b4b20a3316bf0368051902d63bb1d7f44641ff5608e211c258cee57b33b3e75\"" Mar 12 01:25:23.801755 containerd[1475]: time="2026-03-12T01:25:23.801658810Z" level=info msg="StartContainer for \"2b4b20a3316bf0368051902d63bb1d7f44641ff5608e211c258cee57b33b3e75\"" Mar 12 01:25:23.850468 systemd[1]: Started cri-containerd-2b4b20a3316bf0368051902d63bb1d7f44641ff5608e211c258cee57b33b3e75.scope - libcontainer container 2b4b20a3316bf0368051902d63bb1d7f44641ff5608e211c258cee57b33b3e75. 
Mar 12 01:25:23.932775 containerd[1475]: time="2026-03-12T01:25:23.932654070Z" level=info msg="StartContainer for \"2b4b20a3316bf0368051902d63bb1d7f44641ff5608e211c258cee57b33b3e75\" returns successfully" Mar 12 01:25:24.396630 systemd-networkd[1408]: cali96c4880026e: Gained IPv6LL Mar 12 01:25:24.417513 kubelet[2546]: I0312 01:25:24.417406 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-88cd9787c-mzmcn" podStartSLOduration=2.417383638 podStartE2EDuration="2.417383638s" podCreationTimestamp="2026-03-12 01:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:25:24.417198811 +0000 UTC m=+82.031561691" watchObservedRunningTime="2026-03-12 01:25:24.417383638 +0000 UTC m=+82.031746537" Mar 12 01:25:24.418126 kubelet[2546]: I0312 01:25:24.417923 2546 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s5nns" podStartSLOduration=47.967204899 podStartE2EDuration="55.41791178s" podCreationTimestamp="2026-03-12 01:24:29 +0000 UTC" firstStartedPulling="2026-03-12 01:25:14.846228925 +0000 UTC m=+72.460591824" lastFinishedPulling="2026-03-12 01:25:22.296935825 +0000 UTC m=+79.911298705" observedRunningTime="2026-03-12 01:25:23.407642973 +0000 UTC m=+81.022005852" watchObservedRunningTime="2026-03-12 01:25:24.41791178 +0000 UTC m=+82.032274679" Mar 12 01:25:26.751942 systemd[1]: Started sshd@9-10.0.0.53:22-10.0.0.1:51120.service - OpenSSH per-connection server daemon (10.0.0.1:51120). Mar 12 01:25:27.272039 sshd[5773]: Accepted publickey for core from 10.0.0.1 port 51120 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:25:27.385613 sshd[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:25:27.539198 systemd-logind[1453]: New session 10 of user core. Mar 12 01:25:27.600675 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 12 01:25:28.361630 sshd[5773]: pam_unix(sshd:session): session closed for user core Mar 12 01:25:28.384924 systemd[1]: sshd@9-10.0.0.53:22-10.0.0.1:51120.service: Deactivated successfully. Mar 12 01:25:28.390630 systemd[1]: session-10.scope: Deactivated successfully. Mar 12 01:25:28.392723 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit. Mar 12 01:25:28.395031 systemd-logind[1453]: Removed session 10. Mar 12 01:25:28.631420 kubelet[2546]: E0312 01:25:28.611077 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:25:33.445372 systemd[1]: Started sshd@10-10.0.0.53:22-10.0.0.1:48496.service - OpenSSH per-connection server daemon (10.0.0.1:48496). Mar 12 01:25:33.623394 kubelet[2546]: E0312 01:25:33.622938 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:25:33.837934 sshd[5799]: Accepted publickey for core from 10.0.0.1 port 48496 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:25:33.845967 sshd[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:25:33.878424 systemd-logind[1453]: New session 11 of user core. Mar 12 01:25:33.896439 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 12 01:25:34.601733 sshd[5799]: pam_unix(sshd:session): session closed for user core Mar 12 01:25:34.613719 systemd[1]: sshd@10-10.0.0.53:22-10.0.0.1:48496.service: Deactivated successfully. Mar 12 01:25:34.617666 systemd[1]: session-11.scope: Deactivated successfully. Mar 12 01:25:34.621971 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit. Mar 12 01:25:34.624939 systemd-logind[1453]: Removed session 11. Mar 12 01:25:39.641898 systemd[1]: Started sshd@11-10.0.0.53:22-10.0.0.1:48504.service - OpenSSH per-connection server daemon (10.0.0.1:48504). Mar 12 01:25:39.746494 sshd[5857]: Accepted publickey for core from 10.0.0.1 port 48504 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:25:39.749070 sshd[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:25:39.758462 systemd-logind[1453]: New session 12 of user core. Mar 12 01:25:39.766656 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 12 01:25:39.997625 sshd[5857]: pam_unix(sshd:session): session closed for user core Mar 12 01:25:40.005110 systemd[1]: sshd@11-10.0.0.53:22-10.0.0.1:48504.service: Deactivated successfully. Mar 12 01:25:40.008835 systemd[1]: session-12.scope: Deactivated successfully. Mar 12 01:25:40.011109 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit. Mar 12 01:25:40.013452 systemd-logind[1453]: Removed session 12. Mar 12 01:25:41.244566 systemd[1]: run-containerd-runc-k8s.io-d0bfbd41660e3928e0ba791bb24b3c4cc9e0e76cd7b8cb5c0fee164414d32592-runc.26NeWZ.mount: Deactivated successfully. Mar 12 01:25:41.548675 kubelet[2546]: E0312 01:25:41.548566 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:25:45.024827 systemd[1]: Started sshd@12-10.0.0.53:22-10.0.0.1:37280.service - OpenSSH per-connection server daemon (10.0.0.1:37280). Mar 12 01:25:45.185638 sshd[5941]: Accepted publickey for core from 10.0.0.1 port 37280 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:25:45.194648 sshd[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:25:45.224815 systemd-logind[1453]: New session 13 of user core. Mar 12 01:25:45.238386 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 12 01:25:45.653000 sshd[5941]: pam_unix(sshd:session): session closed for user core Mar 12 01:25:45.673547 systemd[1]: sshd@12-10.0.0.53:22-10.0.0.1:37280.service: Deactivated successfully. Mar 12 01:25:45.679761 systemd[1]: session-13.scope: Deactivated successfully. Mar 12 01:25:45.687771 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit. Mar 12 01:25:45.714662 systemd[1]: Started sshd@13-10.0.0.53:22-10.0.0.1:37294.service - OpenSSH per-connection server daemon (10.0.0.1:37294). Mar 12 01:25:45.718384 systemd-logind[1453]: Removed session 13. Mar 12 01:25:45.787893 sshd[5976]: Accepted publickey for core from 10.0.0.1 port 37294 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:25:45.793229 sshd[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:25:45.808158 systemd-logind[1453]: New session 14 of user core. Mar 12 01:25:45.821055 systemd[1]: Started session-14.scope - Session 14 of User core. 
Mar 12 01:25:46.133839 sshd[5976]: pam_unix(sshd:session): session closed for user core Mar 12 01:25:46.147062 systemd[1]: sshd@13-10.0.0.53:22-10.0.0.1:37294.service: Deactivated successfully. Mar 12 01:25:46.150796 systemd[1]: session-14.scope: Deactivated successfully. Mar 12 01:25:46.154965 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit. Mar 12 01:25:46.169244 systemd[1]: Started sshd@14-10.0.0.53:22-10.0.0.1:37304.service - OpenSSH per-connection server daemon (10.0.0.1:37304). Mar 12 01:25:46.171879 systemd-logind[1453]: Removed session 14. Mar 12 01:25:46.219800 sshd[5988]: Accepted publickey for core from 10.0.0.1 port 37304 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:25:46.223323 sshd[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:25:46.234656 systemd-logind[1453]: New session 15 of user core. Mar 12 01:25:46.244942 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 12 01:25:46.421630 sshd[5988]: pam_unix(sshd:session): session closed for user core Mar 12 01:25:46.427339 systemd[1]: sshd@14-10.0.0.53:22-10.0.0.1:37304.service: Deactivated successfully. Mar 12 01:25:46.432453 systemd[1]: session-15.scope: Deactivated successfully. Mar 12 01:25:46.434994 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit. Mar 12 01:25:46.437409 systemd-logind[1453]: Removed session 15. Mar 12 01:25:49.426562 kubelet[2546]: I0312 01:25:49.425680 2546 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:25:51.491687 systemd[1]: Started sshd@15-10.0.0.53:22-10.0.0.1:37316.service - OpenSSH per-connection server daemon (10.0.0.1:37316). Mar 12 01:25:51.572305 sshd[6016]: Accepted publickey for core from 10.0.0.1 port 37316 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:25:51.576326 sshd[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:25:51.587007 systemd-logind[1453]: New session 16 of user core. Mar 12 01:25:51.611506 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 12 01:25:51.825731 sshd[6016]: pam_unix(sshd:session): session closed for user core Mar 12 01:25:51.869322 systemd[1]: sshd@15-10.0.0.53:22-10.0.0.1:37316.service: Deactivated successfully. Mar 12 01:25:51.875428 systemd[1]: session-16.scope: Deactivated successfully. Mar 12 01:25:51.879330 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit. Mar 12 01:25:51.888812 systemd[1]: Started sshd@16-10.0.0.53:22-10.0.0.1:37332.service - OpenSSH per-connection server daemon (10.0.0.1:37332). Mar 12 01:25:51.890657 systemd-logind[1453]: Removed session 16. Mar 12 01:25:51.975726 sshd[6030]: Accepted publickey for core from 10.0.0.1 port 37332 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:25:51.981369 sshd[6030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:25:51.995603 systemd-logind[1453]: New session 17 of user core. Mar 12 01:25:52.004622 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 12 01:25:52.531444 sshd[6030]: pam_unix(sshd:session): session closed for user core Mar 12 01:25:52.550621 systemd[1]: sshd@16-10.0.0.53:22-10.0.0.1:37332.service: Deactivated successfully. Mar 12 01:25:52.556723 systemd[1]: session-17.scope: Deactivated successfully. Mar 12 01:25:52.559845 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit. 
Mar 12 01:25:52.569985 systemd[1]: Started sshd@17-10.0.0.53:22-10.0.0.1:46992.service - OpenSSH per-connection server daemon (10.0.0.1:46992). Mar 12 01:25:52.571858 systemd-logind[1453]: Removed session 17. Mar 12 01:25:52.677232 sshd[6042]: Accepted publickey for core from 10.0.0.1 port 46992 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:25:52.679963 sshd[6042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:25:52.689247 systemd-logind[1453]: New session 18 of user core. Mar 12 01:25:52.698603 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 12 01:25:54.020881 sshd[6042]: pam_unix(sshd:session): session closed for user core Mar 12 01:25:54.041968 systemd[1]: sshd@17-10.0.0.53:22-10.0.0.1:46992.service: Deactivated successfully. Mar 12 01:25:54.044175 systemd[1]: session-18.scope: Deactivated successfully. Mar 12 01:25:54.045544 systemd[1]: session-18.scope: Consumed 1.087s CPU time. Mar 12 01:25:54.050608 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit. Mar 12 01:25:54.066062 systemd[1]: Started sshd@18-10.0.0.53:22-10.0.0.1:47002.service - OpenSSH per-connection server daemon (10.0.0.1:47002). Mar 12 01:25:54.069447 systemd-logind[1453]: Removed session 18. Mar 12 01:25:54.112457 sshd[6072]: Accepted publickey for core from 10.0.0.1 port 47002 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:25:54.115422 sshd[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:25:54.122601 systemd-logind[1453]: New session 19 of user core. Mar 12 01:25:54.136567 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 12 01:25:55.001166 sshd[6072]: pam_unix(sshd:session): session closed for user core Mar 12 01:25:55.012418 systemd[1]: sshd@18-10.0.0.53:22-10.0.0.1:47002.service: Deactivated successfully. Mar 12 01:25:55.017020 systemd[1]: session-19.scope: Deactivated successfully. Mar 12 01:25:55.021190 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit. Mar 12 01:25:55.032791 systemd[1]: Started sshd@19-10.0.0.53:22-10.0.0.1:47016.service - OpenSSH per-connection server daemon (10.0.0.1:47016). Mar 12 01:25:55.036004 systemd-logind[1453]: Removed session 19. Mar 12 01:25:55.208468 sshd[6087]: Accepted publickey for core from 10.0.0.1 port 47016 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:25:55.215213 sshd[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:25:55.225217 systemd-logind[1453]: New session 20 of user core. Mar 12 01:25:55.234660 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 12 01:25:55.538169 sshd[6087]: pam_unix(sshd:session): session closed for user core Mar 12 01:25:55.550500 systemd[1]: sshd@19-10.0.0.53:22-10.0.0.1:47016.service: Deactivated successfully. Mar 12 01:25:55.557467 systemd[1]: session-20.scope: Deactivated successfully. Mar 12 01:25:55.559090 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit. Mar 12 01:25:55.562490 systemd-logind[1453]: Removed session 20. Mar 12 01:26:00.566398 systemd[1]: Started sshd@20-10.0.0.53:22-10.0.0.1:47022.service - OpenSSH per-connection server daemon (10.0.0.1:47022). 
Mar 12 01:26:00.645062 sshd[6102]: Accepted publickey for core from 10.0.0.1 port 47022 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:26:00.654903 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:26:00.688979 systemd-logind[1453]: New session 21 of user core. Mar 12 01:26:00.695980 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 12 01:26:01.084369 sshd[6102]: pam_unix(sshd:session): session closed for user core Mar 12 01:26:01.097195 systemd[1]: sshd@20-10.0.0.53:22-10.0.0.1:47022.service: Deactivated successfully. Mar 12 01:26:01.108045 systemd[1]: session-21.scope: Deactivated successfully. Mar 12 01:26:01.113471 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit. Mar 12 01:26:01.120485 systemd-logind[1453]: Removed session 21. Mar 12 01:26:04.253771 kubelet[2546]: I0312 01:26:04.253490 2546 scope.go:117] "RemoveContainer" containerID="fbcff8d6c1f13dc3c7a66e398659af19ec77248059b70d05df3270bd53b95874" Mar 12 01:26:04.285003 containerd[1475]: time="2026-03-12T01:26:04.284782063Z" level=info msg="RemoveContainer for \"fbcff8d6c1f13dc3c7a66e398659af19ec77248059b70d05df3270bd53b95874\"" Mar 12 01:26:04.392895 containerd[1475]: time="2026-03-12T01:26:04.389905978Z" level=info msg="RemoveContainer for \"fbcff8d6c1f13dc3c7a66e398659af19ec77248059b70d05df3270bd53b95874\" returns successfully" Mar 12 01:26:04.396988 kubelet[2546]: I0312 01:26:04.396896 2546 scope.go:117] "RemoveContainer" containerID="b58bac7b7a9aacaa17ae3a1b0ac81bdb5285ba33cf58dff4917143157cf641d1" Mar 12 01:26:04.401905 containerd[1475]: time="2026-03-12T01:26:04.401877960Z" level=info msg="RemoveContainer for \"b58bac7b7a9aacaa17ae3a1b0ac81bdb5285ba33cf58dff4917143157cf641d1\"" Mar 12 01:26:04.414878 containerd[1475]: time="2026-03-12T01:26:04.414778089Z" level=info msg="RemoveContainer for \"b58bac7b7a9aacaa17ae3a1b0ac81bdb5285ba33cf58dff4917143157cf641d1\" returns successfully" Mar 12 01:26:04.420412 containerd[1475]: time="2026-03-12T01:26:04.420369965Z" level=info msg="StopPodSandbox for \"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\"" Mar 12 01:26:04.918180 containerd[1475]: 2026-03-12 01:26:04.535 [WARNING][6156] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--qkd8c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ce9a7bc7-7141-49c7-bada-00125a5134ff", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a", Pod:"coredns-66bc5c9577-qkd8c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d461b4b25c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:26:04.918180 containerd[1475]: 2026-03-12 01:26:04.536 [INFO][6156] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Mar 12 01:26:04.918180 containerd[1475]: 2026-03-12 01:26:04.536 [INFO][6156] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" iface="eth0" netns="" Mar 12 01:26:04.918180 containerd[1475]: 2026-03-12 01:26:04.536 [INFO][6156] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Mar 12 01:26:04.918180 containerd[1475]: 2026-03-12 01:26:04.536 [INFO][6156] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Mar 12 01:26:04.918180 containerd[1475]: 2026-03-12 01:26:04.699 [INFO][6166] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" HandleID="k8s-pod-network.8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Workload="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" Mar 12 01:26:04.918180 containerd[1475]: 2026-03-12 01:26:04.700 [INFO][6166] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:04.918180 containerd[1475]: 2026-03-12 01:26:04.700 [INFO][6166] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:04.918180 containerd[1475]: 2026-03-12 01:26:04.882 [WARNING][6166] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" HandleID="k8s-pod-network.8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Workload="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" Mar 12 01:26:04.918180 containerd[1475]: 2026-03-12 01:26:04.885 [INFO][6166] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" HandleID="k8s-pod-network.8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Workload="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" Mar 12 01:26:04.918180 containerd[1475]: 2026-03-12 01:26:04.903 [INFO][6166] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:04.918180 containerd[1475]: 2026-03-12 01:26:04.911 [INFO][6156] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Mar 12 01:26:04.918180 containerd[1475]: time="2026-03-12T01:26:04.917935451Z" level=info msg="TearDown network for sandbox \"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\" successfully" Mar 12 01:26:04.918180 containerd[1475]: time="2026-03-12T01:26:04.917978471Z" level=info msg="StopPodSandbox for \"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\" returns successfully" Mar 12 01:26:04.921936 containerd[1475]: time="2026-03-12T01:26:04.921600127Z" level=info msg="RemovePodSandbox for \"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\"" Mar 12 01:26:04.925175 containerd[1475]: time="2026-03-12T01:26:04.924980003Z" level=info msg="Forcibly stopping sandbox \"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\"" Mar 12 01:26:04.947770 systemd[1]: run-containerd-runc-k8s.io-f7932bac2266facf24d46161b16bdb9c958866314c1a517ea223e92c9b007c89-runc.v4BXdr.mount: Deactivated successfully. Mar 12 01:26:05.257677 containerd[1475]: 2026-03-12 01:26:05.121 [WARNING][6195] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--qkd8c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ce9a7bc7-7141-49c7-bada-00125a5134ff", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f292692e5e9020095dc82e0f200678c6f991b9de5c9c015ac6c950fe3dc587a", Pod:"coredns-66bc5c9577-qkd8c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0d461b4b25c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:26:05.257677 containerd[1475]: 2026-03-12 01:26:05.122 [INFO][6195] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Mar 12 01:26:05.257677 containerd[1475]: 2026-03-12 01:26:05.122 [INFO][6195] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" iface="eth0" netns="" Mar 12 01:26:05.257677 containerd[1475]: 2026-03-12 01:26:05.122 [INFO][6195] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Mar 12 01:26:05.257677 containerd[1475]: 2026-03-12 01:26:05.122 [INFO][6195] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Mar 12 01:26:05.257677 containerd[1475]: 2026-03-12 01:26:05.225 [INFO][6215] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" HandleID="k8s-pod-network.8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Workload="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" Mar 12 01:26:05.257677 containerd[1475]: 2026-03-12 01:26:05.226 [INFO][6215] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:05.257677 containerd[1475]: 2026-03-12 01:26:05.226 [INFO][6215] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:05.257677 containerd[1475]: 2026-03-12 01:26:05.236 [WARNING][6215] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" HandleID="k8s-pod-network.8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Workload="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" Mar 12 01:26:05.257677 containerd[1475]: 2026-03-12 01:26:05.236 [INFO][6215] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" HandleID="k8s-pod-network.8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Workload="localhost-k8s-coredns--66bc5c9577--qkd8c-eth0" Mar 12 01:26:05.257677 containerd[1475]: 2026-03-12 01:26:05.241 [INFO][6215] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:05.257677 containerd[1475]: 2026-03-12 01:26:05.250 [INFO][6195] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3" Mar 12 01:26:05.259160 containerd[1475]: time="2026-03-12T01:26:05.257704603Z" level=info msg="TearDown network for sandbox \"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\" successfully" Mar 12 01:26:05.345370 containerd[1475]: time="2026-03-12T01:26:05.344454945Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:26:05.345370 containerd[1475]: time="2026-03-12T01:26:05.345066700Z" level=info msg="RemovePodSandbox \"8938123da9536cd2a0151a27092b4b61a082914881bf67b8ac698a30dbe2a1f3\" returns successfully" Mar 12 01:26:05.350025 containerd[1475]: time="2026-03-12T01:26:05.349839067Z" level=info msg="StopPodSandbox for \"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\"" Mar 12 01:26:05.708017 containerd[1475]: 2026-03-12 01:26:05.595 [WARNING][6234] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0", GenerateName:"calico-kube-controllers-84b6b7d8f4-", Namespace:"calico-system", SelfLink:"", UID:"a9774dfd-98ae-4f56-ac01-bd273c8754fb", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84b6b7d8f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1", Pod:"calico-kube-controllers-84b6b7d8f4-gct62", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calideb95b4acce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:26:05.708017 containerd[1475]: 2026-03-12 01:26:05.598 [INFO][6234] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Mar 12 01:26:05.708017 containerd[1475]: 2026-03-12 01:26:05.599 [INFO][6234] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" iface="eth0" netns="" Mar 12 01:26:05.708017 containerd[1475]: 2026-03-12 01:26:05.599 [INFO][6234] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Mar 12 01:26:05.708017 containerd[1475]: 2026-03-12 01:26:05.600 [INFO][6234] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Mar 12 01:26:05.708017 containerd[1475]: 2026-03-12 01:26:05.652 [INFO][6243] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" HandleID="k8s-pod-network.851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Workload="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" Mar 12 01:26:05.708017 containerd[1475]: 2026-03-12 01:26:05.653 [INFO][6243] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:05.708017 containerd[1475]: 2026-03-12 01:26:05.653 [INFO][6243] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:05.708017 containerd[1475]: 2026-03-12 01:26:05.690 [WARNING][6243] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" HandleID="k8s-pod-network.851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Workload="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" Mar 12 01:26:05.708017 containerd[1475]: 2026-03-12 01:26:05.691 [INFO][6243] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" HandleID="k8s-pod-network.851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Workload="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" Mar 12 01:26:05.708017 containerd[1475]: 2026-03-12 01:26:05.695 [INFO][6243] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:05.708017 containerd[1475]: 2026-03-12 01:26:05.702 [INFO][6234] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Mar 12 01:26:05.708017 containerd[1475]: time="2026-03-12T01:26:05.707532289Z" level=info msg="TearDown network for sandbox \"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\" successfully" Mar 12 01:26:05.708017 containerd[1475]: time="2026-03-12T01:26:05.707579837Z" level=info msg="StopPodSandbox for \"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\" returns successfully" Mar 12 01:26:05.711081 containerd[1475]: time="2026-03-12T01:26:05.709929628Z" level=info msg="RemovePodSandbox for \"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\"" Mar 12 01:26:05.711081 containerd[1475]: time="2026-03-12T01:26:05.709981774Z" level=info msg="Forcibly stopping sandbox \"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\"" Mar 12 01:26:05.908529 containerd[1475]: 2026-03-12 01:26:05.819 [WARNING][6261] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0", GenerateName:"calico-kube-controllers-84b6b7d8f4-", Namespace:"calico-system", SelfLink:"", UID:"a9774dfd-98ae-4f56-ac01-bd273c8754fb", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84b6b7d8f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69b014dd06e12512fc0354bccd81eb168382cce77d6584b7d026918a5c73f0c1", Pod:"calico-kube-controllers-84b6b7d8f4-gct62", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calideb95b4acce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:26:05.908529 containerd[1475]: 2026-03-12 01:26:05.820 [INFO][6261] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Mar 12 01:26:05.908529 containerd[1475]: 2026-03-12 01:26:05.820 [INFO][6261] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" iface="eth0" netns="" Mar 12 01:26:05.908529 containerd[1475]: 2026-03-12 01:26:05.820 [INFO][6261] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Mar 12 01:26:05.908529 containerd[1475]: 2026-03-12 01:26:05.820 [INFO][6261] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Mar 12 01:26:05.908529 containerd[1475]: 2026-03-12 01:26:05.867 [INFO][6270] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" HandleID="k8s-pod-network.851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Workload="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" Mar 12 01:26:05.908529 containerd[1475]: 2026-03-12 01:26:05.867 [INFO][6270] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:05.908529 containerd[1475]: 2026-03-12 01:26:05.884 [INFO][6270] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:05.908529 containerd[1475]: 2026-03-12 01:26:05.894 [WARNING][6270] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" HandleID="k8s-pod-network.851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Workload="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" Mar 12 01:26:05.908529 containerd[1475]: 2026-03-12 01:26:05.894 [INFO][6270] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" HandleID="k8s-pod-network.851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Workload="localhost-k8s-calico--kube--controllers--84b6b7d8f4--gct62-eth0" Mar 12 01:26:05.908529 containerd[1475]: 2026-03-12 01:26:05.898 [INFO][6270] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:05.908529 containerd[1475]: 2026-03-12 01:26:05.904 [INFO][6261] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31" Mar 12 01:26:05.908529 containerd[1475]: time="2026-03-12T01:26:05.908343407Z" level=info msg="TearDown network for sandbox \"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\" successfully" Mar 12 01:26:05.913968 containerd[1475]: time="2026-03-12T01:26:05.913924498Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:26:05.914734 containerd[1475]: time="2026-03-12T01:26:05.914056410Z" level=info msg="RemovePodSandbox \"851c47b0fc22b07a7d313d51d83c43fae9837517244662cb3bdbe9572bb7eb31\" returns successfully" Mar 12 01:26:05.915319 containerd[1475]: time="2026-03-12T01:26:05.915143490Z" level=info msg="StopPodSandbox for \"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\"" Mar 12 01:26:06.110177 containerd[1475]: 2026-03-12 01:26:05.996 [WARNING][6288] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0", GenerateName:"calico-apiserver-6cc8b6d44c-", Namespace:"calico-system", SelfLink:"", UID:"84316a8e-7adb-4ccf-b643-b729c328a05c", ResourceVersion:"1402", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8b6d44c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671", Pod:"calico-apiserver-6cc8b6d44c-rfh26", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali601e02c4d1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:26:06.110177 containerd[1475]: 2026-03-12 01:26:05.997 [INFO][6288] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Mar 12 01:26:06.110177 containerd[1475]: 2026-03-12 01:26:05.997 [INFO][6288] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" iface="eth0" netns="" Mar 12 01:26:06.110177 containerd[1475]: 2026-03-12 01:26:05.997 [INFO][6288] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Mar 12 01:26:06.110177 containerd[1475]: 2026-03-12 01:26:05.997 [INFO][6288] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Mar 12 01:26:06.110177 containerd[1475]: 2026-03-12 01:26:06.046 [INFO][6296] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" HandleID="k8s-pod-network.5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" Mar 12 01:26:06.110177 containerd[1475]: 2026-03-12 01:26:06.047 [INFO][6296] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:06.110177 containerd[1475]: 2026-03-12 01:26:06.047 [INFO][6296] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:06.110177 containerd[1475]: 2026-03-12 01:26:06.086 [WARNING][6296] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" HandleID="k8s-pod-network.5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" Mar 12 01:26:06.110177 containerd[1475]: 2026-03-12 01:26:06.086 [INFO][6296] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" HandleID="k8s-pod-network.5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" Mar 12 01:26:06.110177 containerd[1475]: 2026-03-12 01:26:06.099 [INFO][6296] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:06.110177 containerd[1475]: 2026-03-12 01:26:06.106 [INFO][6288] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Mar 12 01:26:06.111582 containerd[1475]: time="2026-03-12T01:26:06.110202961Z" level=info msg="TearDown network for sandbox \"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\" successfully" Mar 12 01:26:06.111582 containerd[1475]: time="2026-03-12T01:26:06.110241262Z" level=info msg="StopPodSandbox for \"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\" returns successfully" Mar 12 01:26:06.111582 containerd[1475]: time="2026-03-12T01:26:06.111352105Z" level=info msg="RemovePodSandbox for \"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\"" Mar 12 01:26:06.111582 containerd[1475]: time="2026-03-12T01:26:06.111396075Z" level=info msg="Forcibly stopping sandbox \"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\"" Mar 12 01:26:06.113705 systemd[1]: Started sshd@21-10.0.0.53:22-10.0.0.1:51050.service - OpenSSH per-connection server daemon (10.0.0.1:51050). Mar 12 01:26:06.197959 sshd[6304]: Accepted publickey for core from 10.0.0.1 port 51050 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:26:06.200952 sshd[6304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:26:06.213563 systemd-logind[1453]: New session 22 of user core. Mar 12 01:26:06.215583 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 12 01:26:06.255082 containerd[1475]: 2026-03-12 01:26:06.191 [WARNING][6315] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0", GenerateName:"calico-apiserver-6cc8b6d44c-", Namespace:"calico-system", SelfLink:"", UID:"84316a8e-7adb-4ccf-b643-b729c328a05c", ResourceVersion:"1402", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8b6d44c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e34d3facfb1e70d452eab48ef7b0ef715f87b1be6cb34d8db1f985dae7c6671", Pod:"calico-apiserver-6cc8b6d44c-rfh26", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali601e02c4d1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:26:06.255082 containerd[1475]: 2026-03-12 01:26:06.192 [INFO][6315] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Mar 12 01:26:06.255082 containerd[1475]: 2026-03-12 01:26:06.192 [INFO][6315] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" iface="eth0" netns="" Mar 12 01:26:06.255082 containerd[1475]: 2026-03-12 01:26:06.192 [INFO][6315] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Mar 12 01:26:06.255082 containerd[1475]: 2026-03-12 01:26:06.192 [INFO][6315] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Mar 12 01:26:06.255082 containerd[1475]: 2026-03-12 01:26:06.233 [INFO][6324] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" HandleID="k8s-pod-network.5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" Mar 12 01:26:06.255082 containerd[1475]: 2026-03-12 01:26:06.234 [INFO][6324] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:06.255082 containerd[1475]: 2026-03-12 01:26:06.234 [INFO][6324] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:06.255082 containerd[1475]: 2026-03-12 01:26:06.244 [WARNING][6324] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" HandleID="k8s-pod-network.5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" Mar 12 01:26:06.255082 containerd[1475]: 2026-03-12 01:26:06.244 [INFO][6324] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" HandleID="k8s-pod-network.5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--rfh26-eth0" Mar 12 01:26:06.255082 containerd[1475]: 2026-03-12 01:26:06.247 [INFO][6324] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:06.255082 containerd[1475]: 2026-03-12 01:26:06.251 [INFO][6315] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4" Mar 12 01:26:06.255082 containerd[1475]: time="2026-03-12T01:26:06.255074554Z" level=info msg="TearDown network for sandbox \"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\" successfully" Mar 12 01:26:06.266706 containerd[1475]: time="2026-03-12T01:26:06.265759415Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:26:06.266706 containerd[1475]: time="2026-03-12T01:26:06.265844512Z" level=info msg="RemovePodSandbox \"5a3a449ef7ff64de466d1cb4819acf42eceeff2e83d560318952763938a7c5b4\" returns successfully" Mar 12 01:26:06.266706 containerd[1475]: time="2026-03-12T01:26:06.266443638Z" level=info msg="StopPodSandbox for \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\"" Mar 12 01:26:06.419724 containerd[1475]: 2026-03-12 01:26:06.344 [WARNING][6344] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s5nns-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ed55f459-74a9-4d53-811f-2b6098967bb3", ResourceVersion:"1256", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3", Pod:"csi-node-driver-s5nns", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibdd79494091", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:26:06.419724 containerd[1475]: 2026-03-12 01:26:06.344 [INFO][6344] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Mar 12 01:26:06.419724 containerd[1475]: 2026-03-12 01:26:06.345 [INFO][6344] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" iface="eth0" netns="" Mar 12 01:26:06.419724 containerd[1475]: 2026-03-12 01:26:06.345 [INFO][6344] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Mar 12 01:26:06.419724 containerd[1475]: 2026-03-12 01:26:06.345 [INFO][6344] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Mar 12 01:26:06.419724 containerd[1475]: 2026-03-12 01:26:06.399 [INFO][6357] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" HandleID="k8s-pod-network.0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Workload="localhost-k8s-csi--node--driver--s5nns-eth0" Mar 12 01:26:06.419724 containerd[1475]: 2026-03-12 01:26:06.399 [INFO][6357] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:06.419724 containerd[1475]: 2026-03-12 01:26:06.399 [INFO][6357] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:06.419724 containerd[1475]: 2026-03-12 01:26:06.407 [WARNING][6357] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" HandleID="k8s-pod-network.0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Workload="localhost-k8s-csi--node--driver--s5nns-eth0" Mar 12 01:26:06.419724 containerd[1475]: 2026-03-12 01:26:06.407 [INFO][6357] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" HandleID="k8s-pod-network.0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Workload="localhost-k8s-csi--node--driver--s5nns-eth0" Mar 12 01:26:06.419724 containerd[1475]: 2026-03-12 01:26:06.410 [INFO][6357] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:06.419724 containerd[1475]: 2026-03-12 01:26:06.413 [INFO][6344] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Mar 12 01:26:06.419724 containerd[1475]: time="2026-03-12T01:26:06.419503435Z" level=info msg="TearDown network for sandbox \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\" successfully" Mar 12 01:26:06.419724 containerd[1475]: time="2026-03-12T01:26:06.419558015Z" level=info msg="StopPodSandbox for \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\" returns successfully" Mar 12 01:26:06.423176 containerd[1475]: time="2026-03-12T01:26:06.422232471Z" level=info msg="RemovePodSandbox for \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\"" Mar 12 01:26:06.423176 containerd[1475]: time="2026-03-12T01:26:06.422633708Z" level=info msg="Forcibly stopping sandbox \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\"" Mar 12 01:26:06.553640 sshd[6304]: pam_unix(sshd:session): session closed for user core Mar 12 01:26:06.557862 systemd[1]: sshd@21-10.0.0.53:22-10.0.0.1:51050.service: Deactivated successfully. Mar 12 01:26:06.564072 systemd[1]: session-22.scope: Deactivated successfully. Mar 12 01:26:06.567043 systemd-logind[1453]: Session 22 logged out. Waiting for processes to exit. Mar 12 01:26:06.569630 systemd-logind[1453]: Removed session 22. Mar 12 01:26:06.571901 containerd[1475]: 2026-03-12 01:26:06.504 [WARNING][6376] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s5nns-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ed55f459-74a9-4d53-811f-2b6098967bb3", ResourceVersion:"1256", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57639f4f0848c69073db1376bbac7c9324ef32b608d62246e9802fd39efdb5b3", Pod:"csi-node-driver-s5nns", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibdd79494091", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:26:06.571901 containerd[1475]: 2026-03-12 01:26:06.504 [INFO][6376] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Mar 12 01:26:06.571901 containerd[1475]: 2026-03-12 01:26:06.504 [INFO][6376] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" iface="eth0" netns="" Mar 12 01:26:06.571901 containerd[1475]: 2026-03-12 01:26:06.505 [INFO][6376] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Mar 12 01:26:06.571901 containerd[1475]: 2026-03-12 01:26:06.505 [INFO][6376] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Mar 12 01:26:06.571901 containerd[1475]: 2026-03-12 01:26:06.550 [INFO][6385] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" HandleID="k8s-pod-network.0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Workload="localhost-k8s-csi--node--driver--s5nns-eth0" Mar 12 01:26:06.571901 containerd[1475]: 2026-03-12 01:26:06.551 [INFO][6385] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:06.571901 containerd[1475]: 2026-03-12 01:26:06.551 [INFO][6385] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:06.571901 containerd[1475]: 2026-03-12 01:26:06.558 [WARNING][6385] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" HandleID="k8s-pod-network.0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Workload="localhost-k8s-csi--node--driver--s5nns-eth0" Mar 12 01:26:06.571901 containerd[1475]: 2026-03-12 01:26:06.559 [INFO][6385] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" HandleID="k8s-pod-network.0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Workload="localhost-k8s-csi--node--driver--s5nns-eth0" Mar 12 01:26:06.571901 containerd[1475]: 2026-03-12 01:26:06.564 [INFO][6385] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:06.571901 containerd[1475]: 2026-03-12 01:26:06.568 [INFO][6376] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22" Mar 12 01:26:06.572592 containerd[1475]: time="2026-03-12T01:26:06.571957745Z" level=info msg="TearDown network for sandbox \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\" successfully" Mar 12 01:26:06.577983 containerd[1475]: time="2026-03-12T01:26:06.577922025Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:26:06.578093 containerd[1475]: time="2026-03-12T01:26:06.578038399Z" level=info msg="RemovePodSandbox \"0e34ed73bca14164c236320da70bf8f6632d679595f64e2cadd0079034064d22\" returns successfully" Mar 12 01:26:06.579120 containerd[1475]: time="2026-03-12T01:26:06.579062342Z" level=info msg="StopPodSandbox for \"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\"" Mar 12 01:26:06.708054 containerd[1475]: 2026-03-12 01:26:06.644 [WARNING][6405] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xx7s9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"86acc654-d8de-47c3-aef7-67c1f1950085", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0", Pod:"coredns-66bc5c9577-xx7s9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4525debb3a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:26:06.708054 containerd[1475]: 2026-03-12 01:26:06.645 [INFO][6405] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Mar 12 01:26:06.708054 containerd[1475]: 2026-03-12 01:26:06.645 [INFO][6405] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" iface="eth0" netns="" Mar 12 01:26:06.708054 containerd[1475]: 2026-03-12 01:26:06.645 [INFO][6405] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Mar 12 01:26:06.708054 containerd[1475]: 2026-03-12 01:26:06.645 [INFO][6405] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Mar 12 01:26:06.708054 containerd[1475]: 2026-03-12 01:26:06.686 [INFO][6414] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" HandleID="k8s-pod-network.9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Workload="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" Mar 12 01:26:06.708054 containerd[1475]: 2026-03-12 01:26:06.687 [INFO][6414] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:06.708054 containerd[1475]: 2026-03-12 01:26:06.687 [INFO][6414] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:06.708054 containerd[1475]: 2026-03-12 01:26:06.697 [WARNING][6414] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" HandleID="k8s-pod-network.9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Workload="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" Mar 12 01:26:06.708054 containerd[1475]: 2026-03-12 01:26:06.697 [INFO][6414] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" HandleID="k8s-pod-network.9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Workload="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" Mar 12 01:26:06.708054 containerd[1475]: 2026-03-12 01:26:06.700 [INFO][6414] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:06.708054 containerd[1475]: 2026-03-12 01:26:06.704 [INFO][6405] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Mar 12 01:26:06.708054 containerd[1475]: time="2026-03-12T01:26:06.708023159Z" level=info msg="TearDown network for sandbox \"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\" successfully" Mar 12 01:26:06.708054 containerd[1475]: time="2026-03-12T01:26:06.708056461Z" level=info msg="StopPodSandbox for \"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\" returns successfully" Mar 12 01:26:06.709574 containerd[1475]: time="2026-03-12T01:26:06.709420799Z" level=info msg="RemovePodSandbox for \"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\"" Mar 12 01:26:06.709574 containerd[1475]: time="2026-03-12T01:26:06.709558221Z" level=info msg="Forcibly stopping sandbox \"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\"" Mar 12 01:26:06.847804 containerd[1475]: 2026-03-12 01:26:06.786 [WARNING][6432] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xx7s9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"86acc654-d8de-47c3-aef7-67c1f1950085", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"17bc1d7753f11978d63319fdaa77fb802acce01301b6752bd3ea8d57aa2cece0", Pod:"coredns-66bc5c9577-xx7s9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4525debb3a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:26:06.847804 containerd[1475]: 2026-03-12 01:26:06.786 [INFO][6432] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Mar 12 01:26:06.847804 containerd[1475]: 2026-03-12 01:26:06.786 [INFO][6432] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" iface="eth0" netns="" Mar 12 01:26:06.847804 containerd[1475]: 2026-03-12 01:26:06.786 [INFO][6432] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Mar 12 01:26:06.847804 containerd[1475]: 2026-03-12 01:26:06.786 [INFO][6432] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Mar 12 01:26:06.847804 containerd[1475]: 2026-03-12 01:26:06.827 [INFO][6440] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" HandleID="k8s-pod-network.9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Workload="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" Mar 12 01:26:06.847804 containerd[1475]: 2026-03-12 01:26:06.827 [INFO][6440] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:06.847804 containerd[1475]: 2026-03-12 01:26:06.827 [INFO][6440] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:06.847804 containerd[1475]: 2026-03-12 01:26:06.835 [WARNING][6440] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" HandleID="k8s-pod-network.9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Workload="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" Mar 12 01:26:06.847804 containerd[1475]: 2026-03-12 01:26:06.835 [INFO][6440] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" HandleID="k8s-pod-network.9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Workload="localhost-k8s-coredns--66bc5c9577--xx7s9-eth0" Mar 12 01:26:06.847804 containerd[1475]: 2026-03-12 01:26:06.838 [INFO][6440] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:06.847804 containerd[1475]: 2026-03-12 01:26:06.842 [INFO][6432] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa" Mar 12 01:26:06.847804 containerd[1475]: time="2026-03-12T01:26:06.847758247Z" level=info msg="TearDown network for sandbox \"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\" successfully" Mar 12 01:26:06.854579 containerd[1475]: time="2026-03-12T01:26:06.854515969Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:26:06.854670 containerd[1475]: time="2026-03-12T01:26:06.854622034Z" level=info msg="RemovePodSandbox \"9cdd47d1eb11e5b111161f683dd6465af5030b6eb4b64fc47ad9a4ea3ce5ffaa\" returns successfully" Mar 12 01:26:06.855793 containerd[1475]: time="2026-03-12T01:26:06.855709032Z" level=info msg="StopPodSandbox for \"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\"" Mar 12 01:26:06.994812 containerd[1475]: 2026-03-12 01:26:06.920 [WARNING][6457] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0", GenerateName:"calico-apiserver-6cc8b6d44c-", Namespace:"calico-system", SelfLink:"", UID:"43f18370-3b65-47ef-902a-559a0936b656", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8b6d44c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c", Pod:"calico-apiserver-6cc8b6d44c-pwhfw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib0c15b02321", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:26:06.994812 containerd[1475]: 2026-03-12 01:26:06.921 [INFO][6457] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Mar 12 01:26:06.994812 containerd[1475]: 2026-03-12 01:26:06.921 [INFO][6457] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" iface="eth0" netns="" Mar 12 01:26:06.994812 containerd[1475]: 2026-03-12 01:26:06.921 [INFO][6457] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Mar 12 01:26:06.994812 containerd[1475]: 2026-03-12 01:26:06.921 [INFO][6457] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Mar 12 01:26:06.994812 containerd[1475]: 2026-03-12 01:26:06.970 [INFO][6465] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" HandleID="k8s-pod-network.e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" Mar 12 01:26:06.994812 containerd[1475]: 2026-03-12 01:26:06.971 [INFO][6465] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:06.994812 containerd[1475]: 2026-03-12 01:26:06.971 [INFO][6465] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:06.994812 containerd[1475]: 2026-03-12 01:26:06.982 [WARNING][6465] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" HandleID="k8s-pod-network.e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" Mar 12 01:26:06.994812 containerd[1475]: 2026-03-12 01:26:06.982 [INFO][6465] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" HandleID="k8s-pod-network.e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" Mar 12 01:26:06.994812 containerd[1475]: 2026-03-12 01:26:06.985 [INFO][6465] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:06.994812 containerd[1475]: 2026-03-12 01:26:06.990 [INFO][6457] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Mar 12 01:26:06.994812 containerd[1475]: time="2026-03-12T01:26:06.994654303Z" level=info msg="TearDown network for sandbox \"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\" successfully" Mar 12 01:26:06.994812 containerd[1475]: time="2026-03-12T01:26:06.994696440Z" level=info msg="StopPodSandbox for \"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\" returns successfully" Mar 12 01:26:06.995954 containerd[1475]: time="2026-03-12T01:26:06.995883292Z" level=info msg="RemovePodSandbox for \"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\"" Mar 12 01:26:06.996161 containerd[1475]: time="2026-03-12T01:26:06.995967927Z" level=info msg="Forcibly stopping sandbox \"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\"" Mar 12 01:26:07.150929 containerd[1475]: 2026-03-12 01:26:07.068 [WARNING][6484] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0", GenerateName:"calico-apiserver-6cc8b6d44c-", Namespace:"calico-system", SelfLink:"", UID:"43f18370-3b65-47ef-902a-559a0936b656", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8b6d44c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"789cb01d60e2c0d71d292948906947a76762d66ad8e6376108fb0fea732ebd2c", Pod:"calico-apiserver-6cc8b6d44c-pwhfw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib0c15b02321", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:26:07.150929 containerd[1475]: 2026-03-12 01:26:07.068 [INFO][6484] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Mar 12 01:26:07.150929 containerd[1475]: 2026-03-12 01:26:07.068 [INFO][6484] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" iface="eth0" netns="" Mar 12 01:26:07.150929 containerd[1475]: 2026-03-12 01:26:07.068 [INFO][6484] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Mar 12 01:26:07.150929 containerd[1475]: 2026-03-12 01:26:07.068 [INFO][6484] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Mar 12 01:26:07.150929 containerd[1475]: 2026-03-12 01:26:07.129 [INFO][6492] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" HandleID="k8s-pod-network.e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" Mar 12 01:26:07.150929 containerd[1475]: 2026-03-12 01:26:07.129 [INFO][6492] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:07.150929 containerd[1475]: 2026-03-12 01:26:07.129 [INFO][6492] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:07.150929 containerd[1475]: 2026-03-12 01:26:07.138 [WARNING][6492] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" HandleID="k8s-pod-network.e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" Mar 12 01:26:07.150929 containerd[1475]: 2026-03-12 01:26:07.139 [INFO][6492] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" HandleID="k8s-pod-network.e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Workload="localhost-k8s-calico--apiserver--6cc8b6d44c--pwhfw-eth0" Mar 12 01:26:07.150929 containerd[1475]: 2026-03-12 01:26:07.142 [INFO][6492] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:07.150929 containerd[1475]: 2026-03-12 01:26:07.146 [INFO][6484] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4" Mar 12 01:26:07.150929 containerd[1475]: time="2026-03-12T01:26:07.150857602Z" level=info msg="TearDown network for sandbox \"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\" successfully" Mar 12 01:26:07.157850 containerd[1475]: time="2026-03-12T01:26:07.157624922Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:26:07.157850 containerd[1475]: time="2026-03-12T01:26:07.157780538Z" level=info msg="RemovePodSandbox \"e4b06018849cbae3711cbec0d0306c846af31eb3298239805f8b1f71c5cf17f4\" returns successfully" Mar 12 01:26:07.158817 containerd[1475]: time="2026-03-12T01:26:07.158690932Z" level=info msg="StopPodSandbox for \"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680\"" Mar 12 01:26:07.337897 containerd[1475]: 2026-03-12 01:26:07.239 [WARNING][6509] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" WorkloadEndpoint="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:26:07.337897 containerd[1475]: 2026-03-12 01:26:07.239 [INFO][6509] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Mar 12 01:26:07.337897 containerd[1475]: 2026-03-12 01:26:07.239 [INFO][6509] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" iface="eth0" netns="" Mar 12 01:26:07.337897 containerd[1475]: 2026-03-12 01:26:07.239 [INFO][6509] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Mar 12 01:26:07.337897 containerd[1475]: 2026-03-12 01:26:07.239 [INFO][6509] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Mar 12 01:26:07.337897 containerd[1475]: 2026-03-12 01:26:07.308 [INFO][6517] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" HandleID="k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:26:07.337897 containerd[1475]: 2026-03-12 01:26:07.309 [INFO][6517] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:07.337897 containerd[1475]: 2026-03-12 01:26:07.309 [INFO][6517] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:07.337897 containerd[1475]: 2026-03-12 01:26:07.325 [WARNING][6517] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" HandleID="k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:26:07.337897 containerd[1475]: 2026-03-12 01:26:07.325 [INFO][6517] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" HandleID="k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:26:07.337897 containerd[1475]: 2026-03-12 01:26:07.329 [INFO][6517] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:07.337897 containerd[1475]: 2026-03-12 01:26:07.334 [INFO][6509] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Mar 12 01:26:07.338665 containerd[1475]: time="2026-03-12T01:26:07.337904456Z" level=info msg="TearDown network for sandbox \"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680\" successfully" Mar 12 01:26:07.338665 containerd[1475]: time="2026-03-12T01:26:07.337954739Z" level=info msg="StopPodSandbox for \"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680\" returns successfully" Mar 12 01:26:07.339211 containerd[1475]: time="2026-03-12T01:26:07.339117653Z" level=info msg="RemovePodSandbox for \"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680\"" Mar 12 01:26:07.339211 containerd[1475]: time="2026-03-12T01:26:07.339210554Z" level=info msg="Forcibly stopping sandbox \"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680\"" Mar 12 01:26:07.483912 containerd[1475]: 2026-03-12 01:26:07.408 [WARNING][6535] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" WorkloadEndpoint="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:26:07.483912 containerd[1475]: 2026-03-12 01:26:07.408 [INFO][6535] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Mar 12 01:26:07.483912 containerd[1475]: 2026-03-12 01:26:07.408 [INFO][6535] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" iface="eth0" netns="" Mar 12 01:26:07.483912 containerd[1475]: 2026-03-12 01:26:07.408 [INFO][6535] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Mar 12 01:26:07.483912 containerd[1475]: 2026-03-12 01:26:07.408 [INFO][6535] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Mar 12 01:26:07.483912 containerd[1475]: 2026-03-12 01:26:07.454 [INFO][6544] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" HandleID="k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:26:07.483912 containerd[1475]: 2026-03-12 01:26:07.454 [INFO][6544] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:07.483912 containerd[1475]: 2026-03-12 01:26:07.454 [INFO][6544] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:07.483912 containerd[1475]: 2026-03-12 01:26:07.472 [WARNING][6544] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" HandleID="k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:26:07.483912 containerd[1475]: 2026-03-12 01:26:07.472 [INFO][6544] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" HandleID="k8s-pod-network.90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:26:07.483912 containerd[1475]: 2026-03-12 01:26:07.476 [INFO][6544] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:07.483912 containerd[1475]: 2026-03-12 01:26:07.480 [INFO][6535] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680" Mar 12 01:26:07.484843 containerd[1475]: time="2026-03-12T01:26:07.483960321Z" level=info msg="TearDown network for sandbox \"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680\" successfully" Mar 12 01:26:07.492412 containerd[1475]: time="2026-03-12T01:26:07.492226177Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:26:07.492412 containerd[1475]: time="2026-03-12T01:26:07.492417138Z" level=info msg="RemovePodSandbox \"90d356c01992d19973bbd243c4eee0f25ba6f50fa4b29cc89841f5fb31ca7680\" returns successfully" Mar 12 01:26:07.494080 containerd[1475]: time="2026-03-12T01:26:07.493560749Z" level=info msg="StopPodSandbox for \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\"" Mar 12 01:26:07.638737 containerd[1475]: 2026-03-12 01:26:07.582 [WARNING][6562] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" WorkloadEndpoint="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:26:07.638737 containerd[1475]: 2026-03-12 01:26:07.583 [INFO][6562] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Mar 12 01:26:07.638737 containerd[1475]: 2026-03-12 01:26:07.583 [INFO][6562] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" iface="eth0" netns="" Mar 12 01:26:07.638737 containerd[1475]: 2026-03-12 01:26:07.583 [INFO][6562] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Mar 12 01:26:07.638737 containerd[1475]: 2026-03-12 01:26:07.583 [INFO][6562] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Mar 12 01:26:07.638737 containerd[1475]: 2026-03-12 01:26:07.618 [INFO][6570] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" HandleID="k8s-pod-network.916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:26:07.638737 containerd[1475]: 2026-03-12 01:26:07.619 [INFO][6570] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:07.638737 containerd[1475]: 2026-03-12 01:26:07.619 [INFO][6570] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:07.638737 containerd[1475]: 2026-03-12 01:26:07.628 [WARNING][6570] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" HandleID="k8s-pod-network.916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:26:07.638737 containerd[1475]: 2026-03-12 01:26:07.629 [INFO][6570] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" HandleID="k8s-pod-network.916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:26:07.638737 containerd[1475]: 2026-03-12 01:26:07.631 [INFO][6570] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:07.638737 containerd[1475]: 2026-03-12 01:26:07.635 [INFO][6562] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Mar 12 01:26:07.638737 containerd[1475]: time="2026-03-12T01:26:07.638704762Z" level=info msg="TearDown network for sandbox \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\" successfully" Mar 12 01:26:07.639534 containerd[1475]: time="2026-03-12T01:26:07.638744185Z" level=info msg="StopPodSandbox for \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\" returns successfully" Mar 12 01:26:07.640131 containerd[1475]: time="2026-03-12T01:26:07.640029912Z" level=info msg="RemovePodSandbox for \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\"" Mar 12 01:26:07.640131 containerd[1475]: time="2026-03-12T01:26:07.640100782Z" level=info msg="Forcibly stopping sandbox \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\"" Mar 12 01:26:07.782156 containerd[1475]: 2026-03-12 01:26:07.708 [WARNING][6588] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" WorkloadEndpoint="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:26:07.782156 containerd[1475]: 2026-03-12 01:26:07.708 [INFO][6588] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Mar 12 01:26:07.782156 containerd[1475]: 2026-03-12 01:26:07.708 [INFO][6588] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" iface="eth0" netns="" Mar 12 01:26:07.782156 containerd[1475]: 2026-03-12 01:26:07.708 [INFO][6588] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Mar 12 01:26:07.782156 containerd[1475]: 2026-03-12 01:26:07.708 [INFO][6588] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Mar 12 01:26:07.782156 containerd[1475]: 2026-03-12 01:26:07.752 [INFO][6596] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" HandleID="k8s-pod-network.916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:26:07.782156 containerd[1475]: 2026-03-12 01:26:07.753 [INFO][6596] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:07.782156 containerd[1475]: 2026-03-12 01:26:07.753 [INFO][6596] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:07.782156 containerd[1475]: 2026-03-12 01:26:07.765 [WARNING][6596] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" HandleID="k8s-pod-network.916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:26:07.782156 containerd[1475]: 2026-03-12 01:26:07.766 [INFO][6596] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" HandleID="k8s-pod-network.916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Workload="localhost-k8s-whisker--7b87b6b94d--6dx4w-eth0" Mar 12 01:26:07.782156 containerd[1475]: 2026-03-12 01:26:07.774 [INFO][6596] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:07.782156 containerd[1475]: 2026-03-12 01:26:07.778 [INFO][6588] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c" Mar 12 01:26:07.783227 containerd[1475]: time="2026-03-12T01:26:07.782405872Z" level=info msg="TearDown network for sandbox \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\" successfully" Mar 12 01:26:07.793676 containerd[1475]: time="2026-03-12T01:26:07.793493468Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:26:07.793676 containerd[1475]: time="2026-03-12T01:26:07.793626944Z" level=info msg="RemovePodSandbox \"916f254a427a4a8fa62641787ec5cf316b21583423eebd1509f5513800aef67c\" returns successfully" Mar 12 01:26:07.794750 containerd[1475]: time="2026-03-12T01:26:07.794650725Z" level=info msg="StopPodSandbox for \"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\"" Mar 12 01:26:07.937067 containerd[1475]: 2026-03-12 01:26:07.879 [WARNING][6613] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8eb3dd39-a138-4acf-8d82-63348c5ba938", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c", Pod:"goldmane-cccfbd5cf-wqn8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib45a42650f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:26:07.937067 containerd[1475]: 2026-03-12 01:26:07.879 [INFO][6613] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Mar 12 01:26:07.937067 containerd[1475]: 2026-03-12 01:26:07.879 [INFO][6613] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" iface="eth0" netns="" Mar 12 01:26:07.937067 containerd[1475]: 2026-03-12 01:26:07.879 [INFO][6613] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Mar 12 01:26:07.937067 containerd[1475]: 2026-03-12 01:26:07.879 [INFO][6613] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Mar 12 01:26:07.937067 containerd[1475]: 2026-03-12 01:26:07.915 [INFO][6621] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" HandleID="k8s-pod-network.a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Workload="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" Mar 12 01:26:07.937067 containerd[1475]: 2026-03-12 01:26:07.915 [INFO][6621] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:07.937067 containerd[1475]: 2026-03-12 01:26:07.915 [INFO][6621] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:07.937067 containerd[1475]: 2026-03-12 01:26:07.925 [WARNING][6621] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" HandleID="k8s-pod-network.a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Workload="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" Mar 12 01:26:07.937067 containerd[1475]: 2026-03-12 01:26:07.925 [INFO][6621] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" HandleID="k8s-pod-network.a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Workload="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" Mar 12 01:26:07.937067 containerd[1475]: 2026-03-12 01:26:07.928 [INFO][6621] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:07.937067 containerd[1475]: 2026-03-12 01:26:07.933 [INFO][6613] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Mar 12 01:26:07.937067 containerd[1475]: time="2026-03-12T01:26:07.936998052Z" level=info msg="TearDown network for sandbox \"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\" successfully" Mar 12 01:26:07.937067 containerd[1475]: time="2026-03-12T01:26:07.937038988Z" level=info msg="StopPodSandbox for \"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\" returns successfully" Mar 12 01:26:07.938058 containerd[1475]: time="2026-03-12T01:26:07.937995524Z" level=info msg="RemovePodSandbox for \"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\"" Mar 12 01:26:07.938130 containerd[1475]: time="2026-03-12T01:26:07.938075091Z" level=info msg="Forcibly stopping sandbox \"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\"" Mar 12 01:26:08.079489 containerd[1475]: 2026-03-12 01:26:08.005 [WARNING][6638] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"8eb3dd39-a138-4acf-8d82-63348c5ba938", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96e35292690ec85a34e359b96dbbafc7385b14f9c2736fb1222913397942654c", Pod:"goldmane-cccfbd5cf-wqn8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib45a42650f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:26:08.079489 containerd[1475]: 2026-03-12 01:26:08.005 [INFO][6638] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Mar 12 01:26:08.079489 containerd[1475]: 2026-03-12 01:26:08.005 [INFO][6638] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" iface="eth0" netns="" Mar 12 01:26:08.079489 containerd[1475]: 2026-03-12 01:26:08.005 [INFO][6638] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Mar 12 01:26:08.079489 containerd[1475]: 2026-03-12 01:26:08.005 [INFO][6638] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Mar 12 01:26:08.079489 containerd[1475]: 2026-03-12 01:26:08.051 [INFO][6647] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" HandleID="k8s-pod-network.a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Workload="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" Mar 12 01:26:08.079489 containerd[1475]: 2026-03-12 01:26:08.052 [INFO][6647] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:26:08.079489 containerd[1475]: 2026-03-12 01:26:08.052 [INFO][6647] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:26:08.079489 containerd[1475]: 2026-03-12 01:26:08.060 [WARNING][6647] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" HandleID="k8s-pod-network.a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Workload="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" Mar 12 01:26:08.079489 containerd[1475]: 2026-03-12 01:26:08.060 [INFO][6647] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" HandleID="k8s-pod-network.a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Workload="localhost-k8s-goldmane--cccfbd5cf--wqn8p-eth0" Mar 12 01:26:08.079489 containerd[1475]: 2026-03-12 01:26:08.064 [INFO][6647] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:26:08.079489 containerd[1475]: 2026-03-12 01:26:08.069 [INFO][6638] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f" Mar 12 01:26:08.080003 containerd[1475]: time="2026-03-12T01:26:08.079527299Z" level=info msg="TearDown network for sandbox \"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\" successfully" Mar 12 01:26:08.086207 containerd[1475]: time="2026-03-12T01:26:08.086032286Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:26:08.086207 containerd[1475]: time="2026-03-12T01:26:08.086127041Z" level=info msg="RemovePodSandbox \"a01668bb163ad8e66c39658b08367014ff689425e4984545fecac36903e5857f\" returns successfully" Mar 12 01:26:11.568532 systemd[1]: Started sshd@22-10.0.0.53:22-10.0.0.1:51060.service - OpenSSH per-connection server daemon (10.0.0.1:51060). Mar 12 01:26:11.778206 sshd[6677]: Accepted publickey for core from 10.0.0.1 port 51060 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:26:14.717015 sshd[6677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:26:14.808093 systemd-logind[1453]: New session 23 of user core. Mar 12 01:26:15.909153 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 12 01:26:16.485335 sshd[6677]: pam_unix(sshd:session): session closed for user core Mar 12 01:26:16.494687 systemd[1]: sshd@22-10.0.0.53:22-10.0.0.1:51060.service: Deactivated successfully. Mar 12 01:26:16.498917 systemd[1]: session-23.scope: Deactivated successfully. Mar 12 01:26:16.502436 systemd-logind[1453]: Session 23 logged out. Waiting for processes to exit. Mar 12 01:26:16.504882 systemd-logind[1453]: Removed session 23. Mar 12 01:26:16.549788 kubelet[2546]: E0312 01:26:16.549582 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"