Mar 4 01:08:32.834870 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 3 22:42:33 -00 2026 Mar 4 01:08:32.835006 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7 Mar 4 01:08:32.835019 kernel: BIOS-provided physical RAM map: Mar 4 01:08:32.835025 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 4 01:08:32.835031 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 4 01:08:32.835036 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 4 01:08:32.835043 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 4 01:08:32.835049 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 4 01:08:32.835055 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Mar 4 01:08:32.835060 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Mar 4 01:08:32.835069 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Mar 4 01:08:32.835075 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Mar 4 01:08:32.835120 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Mar 4 01:08:32.835127 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Mar 4 01:08:32.835158 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Mar 4 01:08:32.835165 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 4 01:08:32.835175 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Mar 4 01:08:32.835181 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Mar 4 01:08:32.835187 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 4 01:08:32.835193 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 4 01:08:32.835199 kernel: NX (Execute Disable) protection: active Mar 4 01:08:32.835205 kernel: APIC: Static calls initialized Mar 4 01:08:32.835211 kernel: efi: EFI v2.7 by EDK II Mar 4 01:08:32.835218 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Mar 4 01:08:32.835224 kernel: SMBIOS 2.8 present. 
Mar 4 01:08:32.835230 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Mar 4 01:08:32.835236 kernel: Hypervisor detected: KVM Mar 4 01:08:32.835245 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 4 01:08:32.835251 kernel: kvm-clock: using sched offset of 9991646349 cycles Mar 4 01:08:32.835260 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 4 01:08:32.835272 kernel: tsc: Detected 2445.424 MHz processor Mar 4 01:08:32.835284 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 4 01:08:32.835295 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 4 01:08:32.835304 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Mar 4 01:08:32.835316 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 4 01:08:32.835326 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 4 01:08:32.835344 kernel: Using GB pages for direct mapping Mar 4 01:08:32.835356 kernel: Secure boot disabled Mar 4 01:08:32.835463 kernel: ACPI: Early table checksum verification disabled Mar 4 01:08:32.835470 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 4 01:08:32.835483 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 4 01:08:32.835490 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:08:32.835497 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:08:32.835506 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 4 01:08:32.835547 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:08:32.835555 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:08:32.835561 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:08:32.835568 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 4 01:08:32.835575 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 4 01:08:32.835582 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 4 01:08:32.835592 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Mar 4 01:08:32.835598 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 4 01:08:32.835605 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 4 01:08:32.835612 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 4 01:08:32.835618 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 4 01:08:32.835625 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 4 01:08:32.835631 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 4 01:08:32.835638 kernel: No NUMA configuration found Mar 4 01:08:32.835667 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Mar 4 01:08:32.835677 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Mar 4 01:08:32.835684 kernel: Zone ranges: Mar 4 01:08:32.835691 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 4 01:08:32.835698 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Mar 4 01:08:32.835704 kernel: Normal empty Mar 4 01:08:32.835711 kernel: Movable zone start for each node Mar 4 01:08:32.835717 kernel: Early memory node ranges Mar 4 01:08:32.835724 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Mar 4 01:08:32.835730 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 4 01:08:32.835736 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 4 01:08:32.835746 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Mar 4 01:08:32.835756 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Mar 4 01:08:32.835768 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Mar 4 01:08:32.835814 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Mar 4 01:08:32.835828 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 4 01:08:32.835839 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 4 01:08:32.835851 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 4 01:08:32.835863 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 4 01:08:32.835874 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Mar 4 01:08:32.835891 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 4 01:08:32.835901 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Mar 4 01:08:32.835912 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 4 01:08:32.835922 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 4 01:08:32.835933 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 4 01:08:32.835943 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 4 01:08:32.835953 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 4 01:08:32.835964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 4 01:08:32.835974 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 4 01:08:32.835988 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 4 01:08:32.835999 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 4 01:08:32.836010 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 4 01:08:32.836020 kernel: TSC deadline timer available Mar 4 01:08:32.836033 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 4 01:08:32.836045 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 4 01:08:32.836056 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 4 01:08:32.836073 kernel: kvm-guest: setup PV sched yield Mar 4 01:08:32.836133 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 4 01:08:32.836148 kernel: Booting paravirtualized kernel on KVM Mar 4 01:08:32.836155 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 4 01:08:32.836162 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 4 01:08:32.836168 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 4 01:08:32.836175 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 4 01:08:32.836182 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 4 01:08:32.836189 kernel: kvm-guest: PV spinlocks enabled Mar 4 01:08:32.836195 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 4 01:08:32.836203 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7 Mar 4 01:08:32.836241 kernel: random: crng init done Mar 4 
01:08:32.836248 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 4 01:08:32.836254 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 4 01:08:32.836261 kernel: Fallback order for Node 0: 0 Mar 4 01:08:32.836268 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Mar 4 01:08:32.836274 kernel: Policy zone: DMA32 Mar 4 01:08:32.836281 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 4 01:08:32.836288 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved) Mar 4 01:08:32.836297 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 4 01:08:32.836304 kernel: ftrace: allocating 37996 entries in 149 pages Mar 4 01:08:32.836311 kernel: ftrace: allocated 149 pages with 4 groups Mar 4 01:08:32.836317 kernel: Dynamic Preempt: voluntary Mar 4 01:08:32.836324 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 4 01:08:32.836341 kernel: rcu: RCU event tracing is enabled. Mar 4 01:08:32.836351 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 4 01:08:32.836413 kernel: Trampoline variant of Tasks RCU enabled. Mar 4 01:08:32.836421 kernel: Rude variant of Tasks RCU enabled. Mar 4 01:08:32.836428 kernel: Tracing variant of Tasks RCU enabled. Mar 4 01:08:32.836435 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 4 01:08:32.836442 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 4 01:08:32.836454 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 4 01:08:32.836461 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 4 01:08:32.836468 kernel: Console: colour dummy device 80x25 Mar 4 01:08:32.836474 kernel: printk: console [ttyS0] enabled Mar 4 01:08:32.836505 kernel: ACPI: Core revision 20230628 Mar 4 01:08:32.836516 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 4 01:08:32.836523 kernel: APIC: Switch to symmetric I/O mode setup Mar 4 01:08:32.836530 kernel: x2apic enabled Mar 4 01:08:32.836536 kernel: APIC: Switched APIC routing to: physical x2apic Mar 4 01:08:32.836544 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 4 01:08:32.836551 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 4 01:08:32.836558 kernel: kvm-guest: setup PV IPIs Mar 4 01:08:32.836565 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 4 01:08:32.836572 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 4 01:08:32.836581 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Mar 4 01:08:32.836588 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 4 01:08:32.836595 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 4 01:08:32.836602 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 4 01:08:32.836609 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 4 01:08:32.836616 kernel: Spectre V2 : Mitigation: Retpolines Mar 4 01:08:32.836623 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 4 01:08:32.836630 kernel: Speculative Store Bypass: Vulnerable Mar 4 01:08:32.836637 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
Mar 4 01:08:32.836647 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 4 01:08:32.836654 kernel: active return thunk: srso_alias_return_thunk Mar 4 01:08:32.836661 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 4 01:08:32.836691 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 4 01:08:32.836698 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 4 01:08:32.836706 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 4 01:08:32.836712 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 4 01:08:32.836719 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 4 01:08:32.836729 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 4 01:08:32.836736 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 4 01:08:32.836744 kernel: Freeing SMP alternatives memory: 32K Mar 4 01:08:32.836750 kernel: pid_max: default: 32768 minimum: 301 Mar 4 01:08:32.836757 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 4 01:08:32.836764 kernel: landlock: Up and running. Mar 4 01:08:32.836771 kernel: SELinux: Initializing. Mar 4 01:08:32.836778 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 4 01:08:32.836785 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 4 01:08:32.836795 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 4 01:08:32.836802 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 4 01:08:32.836808 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 4 01:08:32.836815 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 4 01:08:32.836822 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 4 01:08:32.836829 kernel: signal: max sigframe size: 1776 Mar 4 01:08:32.836836 kernel: rcu: Hierarchical SRCU implementation. Mar 4 01:08:32.836843 kernel: rcu: Max phase no-delay instances is 400. Mar 4 01:08:32.836850 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 4 01:08:32.836860 kernel: smp: Bringing up secondary CPUs ... Mar 4 01:08:32.836867 kernel: smpboot: x86: Booting SMP configuration: Mar 4 01:08:32.836874 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 4 01:08:32.836881 kernel: smp: Brought up 1 node, 4 CPUs Mar 4 01:08:32.836888 kernel: smpboot: Max logical packages: 1 Mar 4 01:08:32.836895 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Mar 4 01:08:32.836902 kernel: devtmpfs: initialized Mar 4 01:08:32.836909 kernel: x86/mm: Memory block size: 128MB Mar 4 01:08:32.836916 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 4 01:08:32.836926 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 4 01:08:32.836933 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Mar 4 01:08:32.836940 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 4 01:08:32.836947 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 4 01:08:32.836953 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 4 01:08:32.836960 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 4 01:08:32.836967 kernel: pinctrl core: initialized pinctrl subsystem Mar 4 01:08:32.836974 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 4 01:08:32.836981 kernel: audit: initializing netlink subsys (disabled) Mar 4 01:08:32.836991 kernel: audit: type=2000 audit(1772586507.047:1): state=initialized audit_enabled=0 res=1 Mar 4 01:08:32.836997 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 4 01:08:32.837004 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 4 01:08:32.837011 kernel: cpuidle: using governor menu Mar 4 01:08:32.837018 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 4 01:08:32.837025 kernel: dca service started, version 1.12.1 Mar 4 01:08:32.837032 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 4 01:08:32.837039 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 4 01:08:32.837046 kernel: PCI: Using configuration type 1 for base access Mar 4 01:08:32.837056 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 4 01:08:32.837063 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 4 01:08:32.837070 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 4 01:08:32.837077 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 4 01:08:32.837084 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 4 01:08:32.837123 kernel: ACPI: Added _OSI(Module Device) Mar 4 01:08:32.837129 kernel: ACPI: Added _OSI(Processor Device) Mar 4 01:08:32.837136 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 4 01:08:32.837143 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 4 01:08:32.837154 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 4 01:08:32.837161 kernel: ACPI: Interpreter enabled Mar 4 01:08:32.837168 kernel: ACPI: PM: (supports S0 S3 S5) Mar 4 01:08:32.837175 kernel: ACPI: Using IOAPIC for interrupt routing Mar 4 01:08:32.837181 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 4 01:08:32.837189 kernel: PCI: Using E820 reservations for host bridge windows Mar 4 01:08:32.837195 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 4 01:08:32.837202 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 4 01:08:32.837889 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 4 01:08:32.838082 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 4 01:08:32.838291 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 4 01:08:32.838302 kernel: PCI host bridge to bus 0000:00 Mar 4 01:08:32.838764 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 4 01:08:32.838916 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 4 01:08:32.839053 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 4 01:08:32.839245 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 4 01:08:32.839448 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 4 01:08:32.839588 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Mar 4 01:08:32.839722 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 4 01:08:32.840043 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 4 01:08:32.840417 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 4 01:08:32.840616 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 4 01:08:32.840779 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 4 01:08:32.840926 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 4 01:08:32.841070 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 4 01:08:32.841262 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 4 01:08:32.841529 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x110 took 14648 usecs Mar 4 01:08:32.841909 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 4 01:08:32.842066 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Mar 4 01:08:32.842328 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 4 01:08:32.842572 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Mar 4 01:08:32.842931 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 4 01:08:32.843151 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 4 01:08:32.843328 kernel: pci 
0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Mar 4 01:08:32.843551 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Mar 4 01:08:32.843853 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 4 01:08:32.844016 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 4 01:08:32.844211 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Mar 4 01:08:32.844417 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Mar 4 01:08:32.844608 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 4 01:08:32.844849 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 4 01:08:32.845002 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 4 01:08:32.845296 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 4 01:08:32.845513 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 4 01:08:32.845662 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 4 01:08:32.845851 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 4 01:08:32.845999 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 4 01:08:32.846010 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 4 01:08:32.846017 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 4 01:08:32.846061 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 4 01:08:32.846072 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 4 01:08:32.846080 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 4 01:08:32.846121 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 4 01:08:32.846129 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 4 01:08:32.846136 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 4 01:08:32.846144 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 4 01:08:32.846151 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 4 01:08:32.846158 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 4 01:08:32.846170 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 4 01:08:32.846177 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 4 01:08:32.846184 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 4 01:08:32.846216 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 4 01:08:32.846224 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 4 01:08:32.846231 kernel: iommu: Default domain type: Translated Mar 4 01:08:32.846238 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 4 01:08:32.846244 kernel: efivars: Registered efivars operations Mar 4 01:08:32.846251 kernel: PCI: Using ACPI for IRQ routing Mar 4 01:08:32.846283 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 4 01:08:32.846294 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 4 01:08:32.846301 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Mar 4 01:08:32.846330 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Mar 4 01:08:32.846338 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Mar 4 01:08:32.846619 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 4 01:08:32.846784 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 4 01:08:32.846975 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 4 01:08:32.846995 kernel: vgaarb: loaded Mar 4 01:08:32.847010 kernel: hpet0: at 
MMIO 0xfed00000, IRQs 2, 8, 0 Mar 4 01:08:32.847017 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 4 01:08:32.847024 kernel: clocksource: Switched to clocksource kvm-clock Mar 4 01:08:32.847031 kernel: VFS: Disk quotas dquot_6.6.0 Mar 4 01:08:32.847039 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 4 01:08:32.847046 kernel: pnp: PnP ACPI init Mar 4 01:08:32.847576 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 4 01:08:32.847592 kernel: pnp: PnP ACPI: found 6 devices Mar 4 01:08:32.847606 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 4 01:08:32.847613 kernel: NET: Registered PF_INET protocol family Mar 4 01:08:32.847620 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 4 01:08:32.847628 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 4 01:08:32.847662 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 4 01:08:32.847669 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 4 01:08:32.847676 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 4 01:08:32.847683 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 4 01:08:32.847711 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 4 01:08:32.847744 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 4 01:08:32.847752 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 4 01:08:32.847778 kernel: NET: Registered PF_XDP protocol family Mar 4 01:08:32.848324 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Mar 4 01:08:32.848668 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Mar 4 01:08:32.848960 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 4 01:08:32.849149 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 4 01:08:32.849292 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 4 01:08:32.849705 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 4 01:08:32.849952 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 4 01:08:32.850148 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Mar 4 01:08:32.850160 kernel: PCI: CLS 0 bytes, default 64 Mar 4 01:08:32.850168 kernel: Initialise system trusted keyrings Mar 4 01:08:32.850175 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 4 01:08:32.850182 kernel: Key type asymmetric registered Mar 4 01:08:32.850190 kernel: Asymmetric key parser 'x509' registered Mar 4 01:08:32.850197 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 4 01:08:32.850210 kernel: io scheduler mq-deadline registered Mar 4 01:08:32.850217 kernel: io scheduler kyber registered Mar 4 01:08:32.850224 kernel: io scheduler bfq registered Mar 4 01:08:32.850231 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 4 01:08:32.850239 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 4 01:08:32.850246 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 4 01:08:32.850253 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 4 01:08:32.850260 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 4 01:08:32.850267 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 4 01:08:32.850278 
kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 4 01:08:32.850285 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 4 01:08:32.850292 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 4 01:08:32.850594 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 4 01:08:32.850608 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 4 01:08:32.850760 kernel: rtc_cmos 00:04: registered as rtc0 Mar 4 01:08:32.850970 kernel: rtc_cmos 00:04: setting system clock to 2026-03-04T01:08:31 UTC (1772586511) Mar 4 01:08:32.851197 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 4 01:08:32.851216 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 4 01:08:32.851224 kernel: efifb: probing for efifb Mar 4 01:08:32.851231 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Mar 4 01:08:32.851239 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Mar 4 01:08:32.851246 kernel: efifb: scrolling: redraw Mar 4 01:08:32.851253 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Mar 4 01:08:32.851260 kernel: Console: switching to colour frame buffer device 100x37 Mar 4 01:08:32.851266 kernel: fb0: EFI VGA frame buffer device Mar 4 01:08:32.851273 kernel: pstore: Using crash dump compression: deflate Mar 4 01:08:32.851284 kernel: pstore: Registered efi_pstore as persistent store backend Mar 4 01:08:32.851290 kernel: NET: Registered PF_INET6 protocol family Mar 4 01:08:32.851297 kernel: Segment Routing with IPv6 Mar 4 01:08:32.851304 kernel: In-situ OAM (IOAM) with IPv6 Mar 4 01:08:32.851311 kernel: NET: Registered PF_PACKET protocol family Mar 4 01:08:32.851318 kernel: Key type dns_resolver registered Mar 4 01:08:32.851349 kernel: IPI shorthand broadcast: enabled Mar 4 01:08:32.851471 kernel: sched_clock: Marking stable (3803038022, 460568914)->(4915655141, -652048205) Mar 4 01:08:32.851480 kernel: registered taskstats version 1 Mar 4 01:08:32.851492 kernel: Loading compiled-in X.509 certificates Mar 4 01:08:32.851503 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: be1dcbe3e3dee66976c19d61f4b179b405e1c498' Mar 4 01:08:32.851510 kernel: Key type .fscrypt registered Mar 4 01:08:32.851517 kernel: Key type fscrypt-provisioning registered Mar 4 01:08:32.851525 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 4 01:08:32.851532 kernel: ima: Allocated hash algorithm: sha1 Mar 4 01:08:32.851539 kernel: ima: No architecture policies found Mar 4 01:08:32.851546 kernel: clk: Disabling unused clocks Mar 4 01:08:32.851556 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 4 01:08:32.851563 kernel: Write protecting the kernel read-only data: 36864k Mar 4 01:08:32.851570 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 4 01:08:32.851577 kernel: Run /init as init process Mar 4 01:08:32.851585 kernel: with arguments: Mar 4 01:08:32.851592 kernel: /init Mar 4 01:08:32.851599 kernel: with environment: Mar 4 01:08:32.851606 kernel: HOME=/ Mar 4 01:08:32.851645 kernel: TERM=linux Mar 4 01:08:32.851687 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 4 01:08:32.851702 systemd[1]: Detected virtualization kvm. Mar 4 01:08:32.851710 systemd[1]: Detected architecture x86-64. Mar 4 01:08:32.851718 systemd[1]: Running in initrd. Mar 4 01:08:32.851750 systemd[1]: No hostname configured, using default hostname. Mar 4 01:08:32.851758 systemd[1]: Hostname set to . Mar 4 01:08:32.851789 systemd[1]: Initializing machine ID from VM UUID. Mar 4 01:08:32.851801 systemd[1]: Queued start job for default target initrd.target. Mar 4 01:08:32.851831 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 01:08:32.851839 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 4 01:08:32.851847 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 4 01:08:32.851855 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 4 01:08:32.851870 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 4 01:08:32.851877 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 4 01:08:32.851886 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 4 01:08:32.851895 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 4 01:08:32.851909 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 4 01:08:32.851923 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 4 01:08:32.851935 systemd[1]: Reached target paths.target - Path Units. Mar 4 01:08:32.851951 systemd[1]: Reached target slices.target - Slice Units. Mar 4 01:08:32.851963 systemd[1]: Reached target swap.target - Swaps. Mar 4 01:08:32.851975 systemd[1]: Reached target timers.target - Timer Units. Mar 4 01:08:32.852027 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 4 01:08:32.852036 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 4 01:08:32.852044 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 4 01:08:32.852052 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 4 01:08:32.852060 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Mar 4 01:08:32.852068 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 4 01:08:32.852080 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 4 01:08:32.852123 systemd[1]: Reached target sockets.target - Socket Units. Mar 4 01:08:32.852131 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 4 01:08:32.852139 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 4 01:08:32.852153 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 4 01:08:32.852166 systemd[1]: Starting systemd-fsck-usr.service... Mar 4 01:08:32.852179 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 4 01:08:32.852192 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 4 01:08:32.852211 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 01:08:32.852284 systemd-journald[194]: Collecting audit messages is disabled. Mar 4 01:08:32.852303 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 4 01:08:32.852311 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 4 01:08:32.852323 systemd-journald[194]: Journal started Mar 4 01:08:32.852339 systemd-journald[194]: Runtime Journal (/run/log/journal/7a43fd6af7004899ad170159e034a6da) is 6.0M, max 48.3M, 42.2M free. Mar 4 01:08:32.850019 systemd-modules-load[195]: Inserted module 'overlay' Mar 4 01:08:32.870444 systemd[1]: Started systemd-journald.service - Journal Service. Mar 4 01:08:32.875544 systemd[1]: Finished systemd-fsck-usr.service. Mar 4 01:08:32.889746 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 4 01:08:32.898004 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 4 01:08:32.924778 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 4 01:08:32.927011 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 4 01:08:32.929607 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 4 01:08:32.946332 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 01:08:32.951605 kernel: Bridge firewalling registered Mar 4 01:08:32.947537 systemd-modules-load[195]: Inserted module 'br_netfilter' Mar 4 01:08:32.959596 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 4 01:08:32.968652 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 4 01:08:32.978828 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 4 01:08:33.004961 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 01:08:33.015543 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 4 01:08:33.060837 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 01:08:33.068542 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 4 01:08:33.101023 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 4 01:08:33.111917 dracut-cmdline[226]: dracut-dracut-053 Mar 4 01:08:33.111917 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7 Mar 4 01:08:33.152735 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 4 01:08:33.204764 systemd-resolved[261]: Positive Trust Anchors: Mar 4 01:08:33.204936 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 4 01:08:33.204965 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 4 01:08:33.216355 systemd-resolved[261]: Defaulting to hostname 'linux'. Mar 4 01:08:33.223279 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 4 01:08:33.248269 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 4 01:08:33.313521 kernel: SCSI subsystem initialized Mar 4 01:08:33.327538 kernel: Loading iSCSI transport class v2.0-870. Mar 4 01:08:33.346541 kernel: iscsi: registered transport (tcp) Mar 4 01:08:33.384171 kernel: iscsi: registered transport (qla4xxx) Mar 4 01:08:33.384266 kernel: QLogic iSCSI HBA Driver Mar 4 01:08:33.455458 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 4 01:08:33.471578 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 4 01:08:33.520708 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 4 01:08:33.520780 kernel: device-mapper: uevent: version 1.0.3 Mar 4 01:08:33.524634 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 4 01:08:33.579639 kernel: raid6: avx2x4 gen() 31104 MB/s Mar 4 01:08:33.597550 kernel: raid6: avx2x2 gen() 28044 MB/s Mar 4 01:08:33.617557 kernel: raid6: avx2x1 gen() 23187 MB/s Mar 4 01:08:33.617601 kernel: raid6: using algorithm avx2x4 gen() 31104 MB/s Mar 4 01:08:33.638275 kernel: raid6: .... xor() 4532 MB/s, rmw enabled Mar 4 01:08:33.638522 kernel: raid6: using avx2x2 recovery algorithm Mar 4 01:08:33.660512 kernel: xor: automatically using best checksumming function avx Mar 4 01:08:33.852492 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 4 01:08:33.873584 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 4 01:08:33.890788 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 4 01:08:33.929325 systemd-udevd[413]: Using default interface naming scheme 'v255'. Mar 4 01:08:33.942985 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 4 01:08:33.954795 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Mar 4 01:08:33.976705 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Mar 4 01:08:34.046878 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 4 01:08:34.062715 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 4 01:08:34.227168 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 4 01:08:34.243739 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 4 01:08:34.275007 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 4 01:08:34.292768 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 4 01:08:34.299859 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 4 01:08:34.318294 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 4 01:08:34.334574 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 4 01:08:34.352506 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 4 01:08:34.352913 kernel: cryptd: max_cpu_qlen set to 1000 Mar 4 01:08:34.375457 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 4 01:08:34.387559 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 4 01:08:34.408717 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 4 01:08:34.408843 kernel: GPT:9289727 != 19775487 Mar 4 01:08:34.408944 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 4 01:08:34.409019 kernel: GPT:9289727 != 19775487 Mar 4 01:08:34.409156 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 4 01:08:34.409220 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 01:08:34.417604 kernel: libata version 3.00 loaded. Mar 4 01:08:34.420513 kernel: AVX2 version of gcm_enc/dec engaged. Mar 4 01:08:34.420732 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 4 01:08:34.423262 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 01:08:34.438770 kernel: AES CTR mode by8 optimization enabled Mar 4 01:08:34.434323 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 01:08:34.443334 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 4 01:08:34.497867 kernel: ahci 0000:00:1f.2: version 3.0 Mar 4 01:08:34.498231 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 4 01:08:34.498258 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 4 01:08:34.498658 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 4 01:08:34.498929 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467) Mar 4 01:08:34.498950 kernel: BTRFS: device fsid 251c1416-ef37-47f1-be3f-832af5870605 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (463) Mar 4 01:08:34.498971 kernel: scsi host0: ahci Mar 4 01:08:34.499315 kernel: scsi host1: ahci Mar 4 01:08:34.499708 kernel: scsi host2: ahci Mar 4 01:08:34.443482 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 01:08:34.508437 kernel: scsi host3: ahci Mar 4 01:08:34.457647 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 01:08:34.514715 kernel: scsi host4: ahci Mar 4 01:08:34.485268 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 4 01:08:34.518296 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 4 01:08:34.549312 kernel: scsi host5: ahci Mar 4 01:08:34.549736 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Mar 4 01:08:34.549762 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Mar 4 01:08:34.549782 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Mar 4 01:08:34.549812 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Mar 4 01:08:34.549831 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Mar 4 01:08:34.549849 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Mar 4 01:08:34.551227 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 4 01:08:34.564616 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 4 01:08:34.564760 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 4 01:08:34.592915 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 4 01:08:34.617753 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 4 01:08:34.618291 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 01:08:34.641174 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 01:08:34.665762 disk-uuid[560]: Primary Header is updated. Mar 4 01:08:34.665762 disk-uuid[560]: Secondary Entries is updated. Mar 4 01:08:34.665762 disk-uuid[560]: Secondary Header is updated. Mar 4 01:08:34.688971 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 01:08:34.689772 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 01:08:34.864972 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 4 01:08:34.865054 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 4 01:08:34.868448 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 4 01:08:34.872512 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 4 01:08:34.872560 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 4 01:08:34.880546 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 4 01:08:34.880591 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 4 01:08:34.885083 kernel: ata3.00: applying bridge limits Mar 4 01:08:34.888548 kernel: ata3.00: configured for UDMA/100 Mar 4 01:08:34.888589 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 4 01:08:34.953754 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 4 01:08:34.955328 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 4 01:08:34.972467 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 4 01:08:35.707497 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 01:08:35.708965 disk-uuid[563]: The operation has completed successfully. Mar 4 01:08:35.765145 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 4 01:08:35.765419 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 4 01:08:35.813083 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Mar 4 01:08:35.826529 sh[598]: Success Mar 4 01:08:35.856550 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 4 01:08:36.008905 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 4 01:08:36.028695 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 4 01:08:36.036495 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 4 01:08:36.064831 kernel: BTRFS info (device dm-0): first mount of filesystem 251c1416-ef37-47f1-be3f-832af5870605 Mar 4 01:08:36.064886 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 4 01:08:36.064898 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 4 01:08:36.077233 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 4 01:08:36.077281 kernel: BTRFS info (device dm-0): using free space tree Mar 4 01:08:36.094642 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 4 01:08:36.102254 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 4 01:08:36.119616 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 4 01:08:36.129437 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 4 01:08:36.152052 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747 Mar 4 01:08:36.152133 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 4 01:08:36.152168 kernel: BTRFS info (device vda6): using free space tree Mar 4 01:08:36.159468 kernel: BTRFS info (device vda6): auto enabling async discard Mar 4 01:08:36.179599 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 4 01:08:36.191892 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747 Mar 4 01:08:36.199233 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 4 01:08:36.220638 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 4 01:08:36.354861 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 4 01:08:36.369744 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 4 01:08:36.411186 systemd-networkd[784]: lo: Link UP Mar 4 01:08:36.411197 systemd-networkd[784]: lo: Gained carrier Mar 4 01:08:36.419159 systemd-networkd[784]: Enumeration completed Mar 4 01:08:36.420566 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 4 01:08:36.422228 systemd[1]: Reached target network.target - Network. Mar 4 01:08:36.422449 ignition[700]: Ignition 2.19.0 Mar 4 01:08:36.425002 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 01:08:36.422463 ignition[700]: Stage: fetch-offline Mar 4 01:08:36.425009 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 4 01:08:36.422561 ignition[700]: no configs at "/usr/lib/ignition/base.d" Mar 4 01:08:36.427290 systemd-networkd[784]: eth0: Link UP Mar 4 01:08:36.422575 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 4 01:08:36.427297 systemd-networkd[784]: eth0: Gained carrier Mar 4 01:08:36.422790 ignition[700]: parsed url from cmdline: "" Mar 4 01:08:36.427309 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 01:08:36.422796 ignition[700]: no config URL provided Mar 4 01:08:36.422806 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" Mar 4 01:08:36.422824 ignition[700]: no config at "/usr/lib/ignition/user.ign" Mar 4 01:08:36.422863 ignition[700]: op(1): [started] loading QEMU firmware config module Mar 4 01:08:36.496260 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 4 01:08:36.422871 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 4 01:08:36.443172 ignition[700]: op(1): [finished] loading QEMU firmware config module Mar 4 01:08:36.763617 ignition[700]: parsing config with SHA512: b9e420b372d71078fb80fa7cde9812959dabb0c1fddd2a3577ec52268578735e98dd5114604d7eaff4adab53139eee7fe6dce20aeef4b2a517d94643ff551644 Mar 4 01:08:36.804277 unknown[700]: fetched base config from "system" Mar 4 01:08:36.804343 unknown[700]: fetched user config from "qemu" Mar 4 01:08:36.805467 ignition[700]: fetch-offline: fetch-offline passed Mar 4 01:08:36.805578 ignition[700]: Ignition finished successfully Mar 4 01:08:36.818076 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 4 01:08:36.828084 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 4 01:08:36.845631 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 4 01:08:36.896727 ignition[791]: Ignition 2.19.0 Mar 4 01:08:36.896774 ignition[791]: Stage: kargs Mar 4 01:08:36.897139 ignition[791]: no configs at "/usr/lib/ignition/base.d" Mar 4 01:08:36.897155 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 4 01:08:36.898710 ignition[791]: kargs: kargs passed Mar 4 01:08:36.898783 ignition[791]: Ignition finished successfully Mar 4 01:08:36.916836 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 4 01:08:36.941614 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 4 01:08:36.967670 ignition[799]: Ignition 2.19.0 Mar 4 01:08:36.967707 ignition[799]: Stage: disks Mar 4 01:08:36.967870 ignition[799]: no configs at "/usr/lib/ignition/base.d" Mar 4 01:08:36.967883 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 4 01:08:36.968708 ignition[799]: disks: disks passed Mar 4 01:08:36.968756 ignition[799]: Ignition finished successfully Mar 4 01:08:36.993733 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 4 01:08:37.000753 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 4 01:08:37.000926 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 4 01:08:37.013063 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 4 01:08:37.020937 systemd[1]: Reached target sysinit.target - System Initialization. Mar 4 01:08:37.023281 systemd[1]: Reached target basic.target - Basic System. 
Mar 4 01:08:37.054643 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 4 01:08:37.077277 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 4 01:08:37.090338 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 4 01:08:37.119575 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 4 01:08:37.275542 kernel: EXT4-fs (vda9): mounted filesystem 77c4d29a-0423-4e33-8b82-61754d97532c r/w with ordered data mode. Quota mode: none. Mar 4 01:08:37.276892 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 4 01:08:37.287212 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 4 01:08:37.305595 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 4 01:08:37.318332 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 4 01:08:37.334512 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817) Mar 4 01:08:37.341579 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747 Mar 4 01:08:37.342241 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 4 01:08:37.349222 kernel: BTRFS info (device vda6): using free space tree Mar 4 01:08:37.352276 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 4 01:08:37.364783 kernel: BTRFS info (device vda6): auto enabling async discard Mar 4 01:08:37.352638 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 4 01:08:37.364788 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 4 01:08:37.396865 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 4 01:08:37.401801 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 4 01:08:37.426839 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 4 01:08:37.527038 systemd-networkd[784]: eth0: Gained IPv6LL Mar 4 01:08:37.658143 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Mar 4 01:08:37.669913 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Mar 4 01:08:37.681959 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Mar 4 01:08:37.693750 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Mar 4 01:08:37.858130 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 4 01:08:37.876695 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 4 01:08:37.887922 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 4 01:08:37.900057 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 4 01:08:37.906876 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747 Mar 4 01:08:37.956693 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 4 01:08:38.035589 ignition[931]: INFO : Ignition 2.19.0 Mar 4 01:08:38.035589 ignition[931]: INFO : Stage: mount Mar 4 01:08:38.041021 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 4 01:08:38.041021 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 4 01:08:38.050561 ignition[931]: INFO : mount: mount passed Mar 4 01:08:38.053251 ignition[931]: INFO : Ignition finished successfully Mar 4 01:08:38.058245 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 4 01:08:38.069715 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 4 01:08:38.081412 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 4 01:08:38.107975 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944) Mar 4 01:08:38.108024 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747 Mar 4 01:08:38.108046 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 4 01:08:38.113490 kernel: BTRFS info (device vda6): using free space tree Mar 4 01:08:38.121503 kernel: BTRFS info (device vda6): auto enabling async discard Mar 4 01:08:38.124311 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 4 01:08:38.182430 ignition[961]: INFO : Ignition 2.19.0 Mar 4 01:08:38.186321 ignition[961]: INFO : Stage: files Mar 4 01:08:38.186321 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 4 01:08:38.186321 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 4 01:08:38.204342 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Mar 4 01:08:38.212036 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 4 01:08:38.212036 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 4 01:08:38.232736 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 4 01:08:38.240987 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 4 01:08:38.240987 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 4 01:08:38.240987 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 4 01:08:38.240987 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 4 01:08:38.234505 unknown[961]: wrote ssh authorized keys file for user: core Mar 4 01:08:38.311648 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 4 01:08:38.460767 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 4 01:08:38.466873 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 4 01:08:38.466873 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 4 01:08:38.466873 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 4 01:08:38.466873 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 4 01:08:38.466873 
ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 4 01:08:38.466873 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 4 01:08:38.466873 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 4 01:08:38.466873 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 4 01:08:38.466873 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 4 01:08:38.466873 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 4 01:08:38.466873 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 4 01:08:38.466873 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 4 01:08:38.466873 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 4 01:08:38.466873 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Mar 4 01:08:38.780818 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 4 01:08:39.309168 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 4 01:08:39.309168 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 4 01:08:39.323501 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 4 01:08:39.330516 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 4 01:08:39.330516 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 4 01:08:39.330516 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 4 01:08:39.349567 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 4 01:08:39.358296 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 4 01:08:39.358296 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 4 01:08:39.358296 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 4 01:08:39.414811 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 4 01:08:39.427860 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 4 01:08:39.432775 ignition[961]: INFO 
: files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 4 01:08:39.432775 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 4 01:08:39.432775 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 4 01:08:39.432775 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 4 01:08:39.432775 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 4 01:08:39.432775 ignition[961]: INFO : files: files passed Mar 4 01:08:39.432775 ignition[961]: INFO : Ignition finished successfully Mar 4 01:08:39.443474 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 4 01:08:39.475764 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 4 01:08:39.479793 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 4 01:08:39.493782 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 4 01:08:39.493969 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 4 01:08:39.504758 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Mar 4 01:08:39.512556 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 4 01:08:39.512556 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 4 01:08:39.507683 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 4 01:08:39.529970 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 4 01:08:39.513493 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 4 01:08:39.537872 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 4 01:08:39.581707 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 4 01:08:39.581978 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 4 01:08:39.590090 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 4 01:08:39.598669 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 4 01:08:39.598918 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 4 01:08:39.622726 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 4 01:08:39.641234 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 4 01:08:39.643312 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 4 01:08:39.664930 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 4 01:08:39.668846 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 4 01:08:39.675950 systemd[1]: Stopped target timers.target - Timer Units. Mar 4 01:08:39.682522 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 4 01:08:39.682695 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 4 01:08:39.689718 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
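Taken together, the files stage above downloads the helm tarball into /opt, writes several files under /home/core plus /etc/flatcar/update.conf, fetches the kubernetes sysext image and links it into /etc/extensions, enables prepare-helm.service, and disables coreos-metadata.service. A rough, purely illustrative reconstruction of the kind of user config that would drive those operations; URLs, paths, and unit names are copied from the log, while the spec version, SSH key, file modes, and unit bodies are placeholders:

```python
import json

# Hypothetical reconstruction of the QEMU user config implied by the files
# stage above; only the URLs/paths/unit names are taken from the log.
user_config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version for Ignition 2.19.0
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"]},
        ],
    },
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw",
             "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw"}},
            # /home/core/install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml and
            # /etc/flatcar/update.conf would be written the same way (contents omitted).
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw", "hard": False,
             "target": "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            # Unit bodies are not shown in the log; placeholders only.
            {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n# ...\n"},
            {"name": "coreos-metadata.service", "enabled": False, "contents": "[Unit]\n# ...\n"},
        ],
    },
}

print(json.dumps(user_config, indent=2))
```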
Mar 4 01:08:39.696491 systemd[1]: Stopped target basic.target - Basic System. Mar 4 01:08:39.703749 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 4 01:08:39.709820 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 4 01:08:39.715883 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 4 01:08:39.722999 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 4 01:08:39.729189 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 4 01:08:39.736464 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 4 01:08:39.742535 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 4 01:08:39.748986 systemd[1]: Stopped target swap.target - Swaps. Mar 4 01:08:39.749195 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 4 01:08:39.749430 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 4 01:08:39.750238 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 4 01:08:39.751137 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 4 01:08:39.752029 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 4 01:08:39.835585 ignition[1015]: INFO : Ignition 2.19.0 Mar 4 01:08:39.835585 ignition[1015]: INFO : Stage: umount Mar 4 01:08:39.752286 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 01:08:39.851211 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 4 01:08:39.851211 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 4 01:08:39.851211 ignition[1015]: INFO : umount: umount passed Mar 4 01:08:39.851211 ignition[1015]: INFO : Ignition finished successfully Mar 4 01:08:39.753190 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 4 01:08:39.753327 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 4 01:08:39.755153 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 4 01:08:39.755292 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 4 01:08:39.756197 systemd[1]: Stopped target paths.target - Path Units. Mar 4 01:08:39.757289 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 4 01:08:39.761524 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 4 01:08:39.762199 systemd[1]: Stopped target slices.target - Slice Units. Mar 4 01:08:39.763215 systemd[1]: Stopped target sockets.target - Socket Units. Mar 4 01:08:39.764463 systemd[1]: iscsid.socket: Deactivated successfully. Mar 4 01:08:39.764595 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 4 01:08:39.764798 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 4 01:08:39.764910 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 4 01:08:39.765320 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 4 01:08:39.765568 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 4 01:08:39.765779 systemd[1]: ignition-files.service: Deactivated successfully. Mar 4 01:08:39.765955 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 4 01:08:39.809810 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Mar 4 01:08:39.813611 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 4 01:08:39.813779 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 4 01:08:39.825865 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 4 01:08:39.831470 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 4 01:08:39.831754 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 4 01:08:39.851211 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 4 01:08:39.865128 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 4 01:08:39.955683 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 4 01:08:39.959740 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 4 01:08:39.962534 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 4 01:08:39.969874 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 4 01:08:39.973343 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 4 01:08:39.981797 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 4 01:08:39.984916 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 4 01:08:39.996684 systemd[1]: Stopped target network.target - Network. Mar 4 01:08:40.000673 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 4 01:08:40.007594 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 4 01:08:40.015205 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 4 01:08:40.015315 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 4 01:08:40.026236 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 4 01:08:40.026345 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 4 01:08:40.036319 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 4 01:08:40.036532 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 4 01:08:40.047046 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 4 01:08:40.047158 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 4 01:08:40.056046 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 4 01:08:40.063435 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 4 01:08:40.068548 systemd-networkd[784]: eth0: DHCPv6 lease lost Mar 4 01:08:40.073963 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 4 01:08:40.077722 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 4 01:08:40.085977 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 4 01:08:40.088918 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 4 01:08:40.097767 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 4 01:08:40.097878 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 4 01:08:40.117652 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 4 01:08:40.120872 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 4 01:08:40.120968 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 4 01:08:40.128237 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 4 01:08:40.128312 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Mar 4 01:08:40.134887 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 4 01:08:40.134975 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 4 01:08:40.138958 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 4 01:08:40.139018 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 4 01:08:40.152068 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 4 01:08:40.178611 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 4 01:08:40.178918 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 4 01:08:40.185653 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 4 01:08:40.185799 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 4 01:08:40.192925 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 4 01:08:40.193006 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 4 01:08:40.198465 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 4 01:08:40.198514 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 4 01:08:40.198641 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 4 01:08:40.198698 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 4 01:08:40.200182 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 4 01:08:40.200266 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 4 01:08:40.202055 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 4 01:08:40.202141 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 01:08:40.224749 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 4 01:08:40.231248 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 4 01:08:40.231313 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 4 01:08:40.236612 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 4 01:08:40.236672 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 01:08:40.244164 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 4 01:08:40.244306 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 4 01:08:40.250336 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 4 01:08:40.276656 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 4 01:08:40.290857 systemd[1]: Switching root. Mar 4 01:08:40.332556 systemd-journald[194]: Journal stopped Mar 4 01:08:41.875030 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Mar 4 01:08:41.875172 kernel: SELinux: policy capability network_peer_controls=1 Mar 4 01:08:41.875189 kernel: SELinux: policy capability open_perms=1 Mar 4 01:08:41.875233 kernel: SELinux: policy capability extended_socket_class=1 Mar 4 01:08:41.875245 kernel: SELinux: policy capability always_check_network=0 Mar 4 01:08:41.875256 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 4 01:08:41.875268 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 4 01:08:41.875280 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 4 01:08:41.875291 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 4 01:08:41.875302 kernel: audit: type=1403 audit(1772586520.541:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 4 01:08:41.875314 systemd[1]: Successfully loaded SELinux policy in 61.471ms. Mar 4 01:08:41.875340 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.186ms. Mar 4 01:08:41.875401 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 4 01:08:41.875417 systemd[1]: Detected virtualization kvm. Mar 4 01:08:41.875429 systemd[1]: Detected architecture x86-64. Mar 4 01:08:41.875441 systemd[1]: Detected first boot. Mar 4 01:08:41.875453 systemd[1]: Initializing machine ID from VM UUID. Mar 4 01:08:41.875466 zram_generator::config[1059]: No configuration found. Mar 4 01:08:41.875478 systemd[1]: Populated /etc with preset unit settings. Mar 4 01:08:41.875491 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 4 01:08:41.875506 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 4 01:08:41.875519 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 4 01:08:41.875532 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 4 01:08:41.875543 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 4 01:08:41.875555 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 4 01:08:41.875567 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 4 01:08:41.875580 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 4 01:08:41.875598 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 4 01:08:41.875609 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 4 01:08:41.875624 systemd[1]: Created slice user.slice - User and Session Slice. Mar 4 01:08:41.875637 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 01:08:41.875649 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 4 01:08:41.875660 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 4 01:08:41.875672 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 4 01:08:41.875684 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 4 01:08:41.875696 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 4 01:08:41.875707 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 4 01:08:41.875722 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 4 01:08:41.875734 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 4 01:08:41.875745 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 4 01:08:41.875757 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 4 01:08:41.875769 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 4 01:08:41.875785 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 4 01:08:41.875797 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 4 01:08:41.875809 systemd[1]: Reached target slices.target - Slice Units. Mar 4 01:08:41.875823 systemd[1]: Reached target swap.target - Swaps. Mar 4 01:08:41.875835 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 4 01:08:41.875848 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 4 01:08:41.875861 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 4 01:08:41.875872 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 4 01:08:41.875883 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 4 01:08:41.875895 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 4 01:08:41.875907 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 4 01:08:41.875919 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 4 01:08:41.875930 systemd[1]: Mounting media.mount - External Media Directory... Mar 4 01:08:41.875945 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 01:08:41.875957 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 4 01:08:41.875968 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 4 01:08:41.875980 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 4 01:08:41.875992 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 4 01:08:41.876004 systemd[1]: Reached target machines.target - Containers. Mar 4 01:08:41.876016 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 4 01:08:41.876027 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 4 01:08:41.876042 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 4 01:08:41.876054 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 4 01:08:41.876065 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 4 01:08:41.876076 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 4 01:08:41.876088 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 4 01:08:41.876131 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 4 01:08:41.876144 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 4 01:08:41.876156 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 4 01:08:41.876172 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 4 01:08:41.876183 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 4 01:08:41.876195 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 4 01:08:41.876206 systemd[1]: Stopped systemd-fsck-usr.service. Mar 4 01:08:41.876218 kernel: fuse: init (API version 7.39) Mar 4 01:08:41.876229 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 4 01:08:41.876241 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 4 01:08:41.876252 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 4 01:08:41.876264 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 4 01:08:41.876316 systemd-journald[1143]: Collecting audit messages is disabled. Mar 4 01:08:41.876338 kernel: loop: module loaded Mar 4 01:08:41.876350 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 4 01:08:41.876414 systemd[1]: verity-setup.service: Deactivated successfully. Mar 4 01:08:41.876428 systemd[1]: Stopped verity-setup.service. Mar 4 01:08:41.876441 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 01:08:41.876452 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 4 01:08:41.876465 systemd-journald[1143]: Journal started Mar 4 01:08:41.876490 systemd-journald[1143]: Runtime Journal (/run/log/journal/7a43fd6af7004899ad170159e034a6da) is 6.0M, max 48.3M, 42.2M free. Mar 4 01:08:41.333902 systemd[1]: Queued start job for default target multi-user.target. Mar 4 01:08:41.358469 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 4 01:08:41.359325 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 4 01:08:41.359825 systemd[1]: systemd-journald.service: Consumed 1.481s CPU time. Mar 4 01:08:41.880476 kernel: ACPI: bus type drm_connector registered Mar 4 01:08:41.880523 systemd[1]: Started systemd-journald.service - Journal Service. Mar 4 01:08:41.887238 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 4 01:08:41.890580 systemd[1]: Mounted media.mount - External Media Directory. Mar 4 01:08:41.893790 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 4 01:08:41.897215 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 4 01:08:41.900742 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 4 01:08:41.904708 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 4 01:08:41.908742 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 4 01:08:41.912954 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 4 01:08:41.913441 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 4 01:08:41.917329 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 4 01:08:41.917709 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 4 01:08:41.922067 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 4 01:08:41.922465 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 4 01:08:41.926617 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 4 01:08:41.927144 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 4 01:08:41.931307 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 4 01:08:41.931633 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 4 01:08:41.935226 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 4 01:08:41.935630 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 4 01:08:41.939187 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 4 01:08:41.943139 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 4 01:08:41.947582 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 4 01:08:41.965259 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 4 01:08:41.978595 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 4 01:08:41.983610 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 4 01:08:41.986726 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 4 01:08:41.986764 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 4 01:08:41.990724 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 4 01:08:41.996678 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 4 01:08:42.001582 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 4 01:08:42.005550 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 4 01:08:42.007913 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 4 01:08:42.017422 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 4 01:08:42.021318 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 4 01:08:42.024793 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 4 01:08:42.028762 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 4 01:08:42.032523 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 4 01:08:42.038344 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 4 01:08:42.043251 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 4 01:08:42.048542 systemd-journald[1143]: Time spent on flushing to /var/log/journal/7a43fd6af7004899ad170159e034a6da is 28.507ms for 981 entries. Mar 4 01:08:42.048542 systemd-journald[1143]: System Journal (/var/log/journal/7a43fd6af7004899ad170159e034a6da) is 8.0M, max 195.6M, 187.6M free. Mar 4 01:08:42.108169 systemd-journald[1143]: Received client request to flush runtime journal. Mar 4 01:08:42.108208 kernel: loop0: detected capacity change from 0 to 142488 Mar 4 01:08:42.050426 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Mar 4 01:08:42.056573 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 4 01:08:42.060539 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 4 01:08:42.065258 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 4 01:08:42.078593 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 4 01:08:42.085749 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 4 01:08:42.102635 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 4 01:08:42.118692 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 4 01:08:42.125000 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 4 01:08:42.142209 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 4 01:08:42.153746 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 4 01:08:42.165057 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 4 01:08:42.168151 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 4 01:08:42.192942 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 4 01:08:42.198860 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 4 01:08:42.201491 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 4 01:08:42.216530 kernel: loop1: detected capacity change from 0 to 140768 Mar 4 01:08:42.241629 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Mar 4 01:08:42.241660 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Mar 4 01:08:42.253039 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 4 01:08:42.279448 kernel: loop2: detected capacity change from 0 to 219192 Mar 4 01:08:42.361778 kernel: loop3: detected capacity change from 0 to 142488 Mar 4 01:08:42.393509 kernel: loop4: detected capacity change from 0 to 140768 Mar 4 01:08:42.426463 kernel: loop5: detected capacity change from 0 to 219192 Mar 4 01:08:42.446759 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 4 01:08:42.448032 (sd-merge)[1198]: Merged extensions into '/usr'. Mar 4 01:08:42.455080 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... Mar 4 01:08:42.455139 systemd[1]: Reloading... Mar 4 01:08:42.516442 zram_generator::config[1223]: No configuration found. Mar 4 01:08:42.608543 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 4 01:08:42.702700 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:08:42.749165 systemd[1]: Reloading finished in 293 ms. Mar 4 01:08:42.783965 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 4 01:08:42.789286 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 4 01:08:42.794687 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 4 01:08:42.822776 systemd[1]: Starting ensure-sysext.service... 
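The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, followed by a reload; the kubernetes image is the one Ignition linked into /etc/extensions earlier. Before merging, each image's extension-release metadata must be compatible with the host. The sketch below illustrates that check as I understand it; the file locations and matching rules (ID, SYSEXT_LEVEL, VERSION_ID) are assumptions, not taken from this log:

```python
from pathlib import Path

def parse_release(path: Path) -> dict:
    """Parse a simple KEY=value os-release style file."""
    fields = {}
    for line in path.read_text().splitlines():
        if "=" in line and not line.startswith("#"):
            key, value = line.split("=", 1)
            fields[key.strip()] = value.strip().strip('"')
    return fields

host = parse_release(Path("/etc/os-release"))
# Assumed location of the release file carried inside the merged extension.
ext = parse_release(Path("/usr/lib/extension-release.d/extension-release.kubernetes"))

id_ok = ext.get("ID") in ("_any", host.get("ID"))
version_ok = (
    ext.get("SYSEXT_LEVEL") == host.get("SYSEXT_LEVEL")
    if ext.get("SYSEXT_LEVEL")
    else ext.get("VERSION_ID") == host.get("VERSION_ID")
)
print("mergeable:", id_ok and version_ok)
```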
Mar 4 01:08:42.827971 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 4 01:08:42.835038 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 4 01:08:42.838150 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... Mar 4 01:08:42.838188 systemd[1]: Reloading... Mar 4 01:08:42.868450 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 4 01:08:42.868995 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 4 01:08:42.870767 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 4 01:08:42.871288 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Mar 4 01:08:42.871487 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Mar 4 01:08:42.877990 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Mar 4 01:08:42.878031 systemd-tmpfiles[1264]: Skipping /boot Mar 4 01:08:42.894833 systemd-udevd[1265]: Using default interface naming scheme 'v255'. Mar 4 01:08:42.902084 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Mar 4 01:08:42.902263 systemd-tmpfiles[1264]: Skipping /boot Mar 4 01:08:42.921425 zram_generator::config[1290]: No configuration found. Mar 4 01:08:43.012450 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1321) Mar 4 01:08:43.090552 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 4 01:08:43.117044 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 4 01:08:43.117569 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 4 01:08:43.117842 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 4 01:08:43.118212 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 4 01:08:43.127426 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 4 01:08:43.127487 kernel: ACPI: button: Power Button [PWRF] Mar 4 01:08:43.136308 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:08:43.179546 kernel: mousedev: PS/2 mouse device common for all mice Mar 4 01:08:43.234151 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 4 01:08:43.241625 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 4 01:08:43.242086 systemd[1]: Reloading finished in 403 ms. Mar 4 01:08:43.342422 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 4 01:08:43.366999 kernel: kvm_amd: TSC scaling supported Mar 4 01:08:43.367154 kernel: kvm_amd: Nested Virtualization enabled Mar 4 01:08:43.367185 kernel: kvm_amd: Nested Paging enabled Mar 4 01:08:43.367225 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 4 01:08:43.367281 kernel: kvm_amd: PMU virtualization is disabled Mar 4 01:08:43.391839 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 4 01:08:43.445662 systemd[1]: Finished ensure-sysext.service. 
Mar 4 01:08:43.452416 kernel: EDAC MC: Ver: 3.0.0 Mar 4 01:08:43.489263 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 4 01:08:43.495237 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 01:08:43.510701 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 4 01:08:43.517268 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 4 01:08:43.521037 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 4 01:08:43.523051 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 4 01:08:43.528855 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 4 01:08:43.543737 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 4 01:08:43.548827 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 4 01:08:43.554818 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 4 01:08:43.558220 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 4 01:08:43.560891 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 4 01:08:43.565855 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 4 01:08:43.570736 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 4 01:08:43.571726 augenrules[1386]: No rules Mar 4 01:08:43.576409 lvm[1366]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 4 01:08:43.587567 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 4 01:08:43.594732 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 4 01:08:43.597260 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 4 01:08:43.600296 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 01:08:43.601234 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 01:08:43.603541 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 4 01:08:43.604675 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 4 01:08:43.605036 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 4 01:08:43.607731 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 4 01:08:43.607999 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 4 01:08:43.610626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 4 01:08:43.611064 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 4 01:08:43.612734 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 4 01:08:43.613074 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 4 01:08:43.629714 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 4 01:08:43.638642 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Mar 4 01:08:43.638933 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 4 01:08:43.642686 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 4 01:08:43.650254 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 4 01:08:43.654018 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 4 01:08:43.658889 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 4 01:08:43.670350 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 4 01:08:43.672718 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 4 01:08:43.676462 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 4 01:08:43.677209 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 4 01:08:43.685767 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 4 01:08:43.693870 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 4 01:08:43.694961 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 4 01:08:43.719230 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 4 01:08:43.723980 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 4 01:08:43.757095 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 01:08:43.798932 systemd-networkd[1387]: lo: Link UP Mar 4 01:08:43.798952 systemd-networkd[1387]: lo: Gained carrier Mar 4 01:08:43.798973 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 4 01:08:43.802274 systemd-networkd[1387]: Enumeration completed Mar 4 01:08:43.803346 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 4 01:08:43.803729 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 01:08:43.803811 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 4 01:08:43.805289 systemd-networkd[1387]: eth0: Link UP Mar 4 01:08:43.805350 systemd-networkd[1387]: eth0: Gained carrier Mar 4 01:08:43.805452 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 01:08:43.807450 systemd[1]: Reached target time-set.target - System Time Set. Mar 4 01:08:43.815882 systemd-resolved[1392]: Positive Trust Anchors: Mar 4 01:08:43.815923 systemd-resolved[1392]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 4 01:08:43.815950 systemd-resolved[1392]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 4 01:08:43.820485 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 4 01:08:43.820578 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 4 01:08:43.821224 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. Mar 4 01:08:43.822586 systemd-resolved[1392]: Defaulting to hostname 'linux'. Mar 4 01:08:43.823300 systemd-timesyncd[1393]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 4 01:08:43.823350 systemd-timesyncd[1393]: Initial clock synchronization to Wed 2026-03-04 01:08:43.635809 UTC. Mar 4 01:08:43.825916 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 4 01:08:43.831443 systemd[1]: Reached target network.target - Network. Mar 4 01:08:43.835700 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 4 01:08:43.841339 systemd[1]: Reached target sysinit.target - System Initialization. Mar 4 01:08:43.846656 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 4 01:08:43.852572 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 4 01:08:43.858630 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 4 01:08:43.864079 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 4 01:08:43.869998 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 4 01:08:43.875768 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 4 01:08:43.875843 systemd[1]: Reached target paths.target - Path Units. Mar 4 01:08:43.879693 systemd[1]: Reached target timers.target - Timer Units. Mar 4 01:08:43.884589 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 4 01:08:43.891172 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 4 01:08:43.904340 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 4 01:08:43.909346 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 4 01:08:43.914309 systemd[1]: Reached target sockets.target - Socket Units. Mar 4 01:08:43.918549 systemd[1]: Reached target basic.target - Basic System. Mar 4 01:08:43.922718 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 4 01:08:43.922785 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 4 01:08:43.935546 systemd[1]: Starting containerd.service - containerd container runtime... Mar 4 01:08:43.942887 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Mar 4 01:08:43.949619 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 4 01:08:43.956564 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 4 01:08:43.961729 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 4 01:08:43.964649 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 4 01:08:43.973522 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 4 01:08:43.981300 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 4 01:08:43.988072 dbus-daemon[1430]: [system] SELinux support is enabled Mar 4 01:08:43.989006 jq[1431]: false Mar 4 01:08:43.990802 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 4 01:08:43.991540 extend-filesystems[1432]: Found loop3 Mar 4 01:08:43.991540 extend-filesystems[1432]: Found loop4 Mar 4 01:08:43.991540 extend-filesystems[1432]: Found loop5 Mar 4 01:08:43.991540 extend-filesystems[1432]: Found sr0 Mar 4 01:08:43.991540 extend-filesystems[1432]: Found vda Mar 4 01:08:43.991540 extend-filesystems[1432]: Found vda1 Mar 4 01:08:43.991540 extend-filesystems[1432]: Found vda2 Mar 4 01:08:43.991540 extend-filesystems[1432]: Found vda3 Mar 4 01:08:43.991540 extend-filesystems[1432]: Found usr Mar 4 01:08:43.991540 extend-filesystems[1432]: Found vda4 Mar 4 01:08:43.991540 extend-filesystems[1432]: Found vda6 Mar 4 01:08:44.045182 extend-filesystems[1432]: Found vda7 Mar 4 01:08:44.045182 extend-filesystems[1432]: Found vda9 Mar 4 01:08:44.045182 extend-filesystems[1432]: Checking size of /dev/vda9 Mar 4 01:08:44.001648 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 4 01:08:44.003459 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 4 01:08:44.004204 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 4 01:08:44.005535 systemd[1]: Starting update-engine.service - Update Engine... Mar 4 01:08:44.013491 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 4 01:08:44.042564 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 4 01:08:44.050613 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 4 01:08:44.053465 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 4 01:08:44.053978 systemd[1]: motdgen.service: Deactivated successfully. Mar 4 01:08:44.054228 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 4 01:08:44.062604 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 4 01:08:44.063176 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 4 01:08:44.072104 extend-filesystems[1432]: Resized partition /dev/vda9 Mar 4 01:08:44.075475 jq[1450]: true Mar 4 01:08:44.079932 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024) Mar 4 01:08:44.090406 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 4 01:08:44.097926 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 4 01:08:44.098685 update_engine[1446]: I20260304 01:08:44.098217 1446 main.cc:92] Flatcar Update Engine starting Mar 4 01:08:44.097971 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 4 01:08:44.108229 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1335) Mar 4 01:08:44.110615 update_engine[1446]: I20260304 01:08:44.110566 1446 update_check_scheduler.cc:74] Next update check in 6m19s Mar 4 01:08:44.112826 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 4 01:08:44.112855 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 4 01:08:44.122111 tar[1452]: linux-amd64/LICENSE Mar 4 01:08:44.124522 tar[1452]: linux-amd64/helm Mar 4 01:08:44.129850 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button) Mar 4 01:08:44.129913 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 4 01:08:44.131155 systemd-logind[1444]: New seat seat0. Mar 4 01:08:44.139951 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 4 01:08:44.141156 systemd[1]: Started systemd-logind.service - User Login Management. Mar 4 01:08:44.148086 systemd[1]: Started update-engine.service - Update Engine. Mar 4 01:08:44.153655 jq[1461]: true Mar 4 01:08:44.166586 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 4 01:08:44.174983 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 4 01:08:44.193563 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 4 01:08:44.193563 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 4 01:08:44.193563 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 4 01:08:44.201595 extend-filesystems[1432]: Resized filesystem in /dev/vda9 Mar 4 01:08:44.200711 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 4 01:08:44.202132 sshd_keygen[1449]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 4 01:08:44.200945 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 4 01:08:44.238049 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 4 01:08:44.245156 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 4 01:08:44.249778 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 4 01:08:44.258545 bash[1498]: Updated "/home/core/.ssh/authorized_keys" Mar 4 01:08:44.261393 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 4 01:08:44.268254 systemd[1]: issuegen.service: Deactivated successfully. 
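The extend-filesystems / resize2fs lines above grow the ext4 root on /dev/vda9 from 553472 to 1864699 blocks at 4 KiB each; a quick check of what that means in bytes:

```python
# Sanity-check the resize figures logged above (ext4 on /dev/vda9, 4 KiB blocks).
BLOCK_SIZE = 4096
old_blocks, new_blocks = 553_472, 1_864_699

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before resize: {gib(old_blocks):.2f} GiB")  # ~2.11 GiB
print(f"after resize:  {gib(new_blocks):.2f} GiB")  # ~7.11 GiB
```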
Mar 4 01:08:44.268603 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 4 01:08:44.275246 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 4 01:08:44.281740 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 4 01:08:44.303712 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 4 01:08:44.315811 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 4 01:08:44.324470 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 4 01:08:44.328587 systemd[1]: Reached target getty.target - Login Prompts. Mar 4 01:08:44.416858 containerd[1462]: time="2026-03-04T01:08:44.416607614Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 4 01:08:44.448628 containerd[1462]: time="2026-03-04T01:08:44.448425915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 4 01:08:44.452767 containerd[1462]: time="2026-03-04T01:08:44.452653574Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 4 01:08:44.452767 containerd[1462]: time="2026-03-04T01:08:44.452732367Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 4 01:08:44.452767 containerd[1462]: time="2026-03-04T01:08:44.452759696Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 4 01:08:44.453102 containerd[1462]: time="2026-03-04T01:08:44.453019508Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 4 01:08:44.453102 containerd[1462]: time="2026-03-04T01:08:44.453081414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 4 01:08:44.453216 containerd[1462]: time="2026-03-04T01:08:44.453182565Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 01:08:44.453216 containerd[1462]: time="2026-03-04T01:08:44.453200480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 4 01:08:44.453765 containerd[1462]: time="2026-03-04T01:08:44.453654455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 01:08:44.453765 containerd[1462]: time="2026-03-04T01:08:44.453718405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 4 01:08:44.453765 containerd[1462]: time="2026-03-04T01:08:44.453742122Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 01:08:44.453765 containerd[1462]: time="2026-03-04T01:08:44.453757435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Mar 4 01:08:44.453965 containerd[1462]: time="2026-03-04T01:08:44.453896598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 4 01:08:44.454487 containerd[1462]: time="2026-03-04T01:08:44.454425118Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 4 01:08:44.454758 containerd[1462]: time="2026-03-04T01:08:44.454646391Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 01:08:44.454758 containerd[1462]: time="2026-03-04T01:08:44.454697328Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 4 01:08:44.454918 containerd[1462]: time="2026-03-04T01:08:44.454843223Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 4 01:08:44.455067 containerd[1462]: time="2026-03-04T01:08:44.455004664Z" level=info msg="metadata content store policy set" policy=shared Mar 4 01:08:44.461912 containerd[1462]: time="2026-03-04T01:08:44.460909500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 4 01:08:44.461912 containerd[1462]: time="2026-03-04T01:08:44.461011786Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 4 01:08:44.461912 containerd[1462]: time="2026-03-04T01:08:44.461043487Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 4 01:08:44.461912 containerd[1462]: time="2026-03-04T01:08:44.461112075Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 4 01:08:44.461912 containerd[1462]: time="2026-03-04T01:08:44.461143326Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 4 01:08:44.461912 containerd[1462]: time="2026-03-04T01:08:44.461563368Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 4 01:08:44.464276 containerd[1462]: time="2026-03-04T01:08:44.464210666Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 4 01:08:44.465334 containerd[1462]: time="2026-03-04T01:08:44.465298651Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 4 01:08:44.465571 containerd[1462]: time="2026-03-04T01:08:44.465544629Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 4 01:08:44.465796 containerd[1462]: time="2026-03-04T01:08:44.465645956Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 4 01:08:44.465992 containerd[1462]: time="2026-03-04T01:08:44.465876963Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 4 01:08:44.466152 containerd[1462]: time="2026-03-04T01:08:44.466126407Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Mar 4 01:08:44.466311 containerd[1462]: time="2026-03-04T01:08:44.466285089Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 4 01:08:44.466506 containerd[1462]: time="2026-03-04T01:08:44.466478476Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 4 01:08:44.466666 containerd[1462]: time="2026-03-04T01:08:44.466639477Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 4 01:08:44.466817 containerd[1462]: time="2026-03-04T01:08:44.466735110Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 4 01:08:44.466997 containerd[1462]: time="2026-03-04T01:08:44.466971166Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 4 01:08:44.467167 containerd[1462]: time="2026-03-04T01:08:44.467140876Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 4 01:08:44.467323 containerd[1462]: time="2026-03-04T01:08:44.467297915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 4 01:08:44.467493 containerd[1462]: time="2026-03-04T01:08:44.467469093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 4 01:08:44.467625 containerd[1462]: time="2026-03-04T01:08:44.467598020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 4 01:08:44.467731 containerd[1462]: time="2026-03-04T01:08:44.467707645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 4 01:08:44.467817 containerd[1462]: time="2026-03-04T01:08:44.467795812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 4 01:08:44.469423 containerd[1462]: time="2026-03-04T01:08:44.467895993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 4 01:08:44.469423 containerd[1462]: time="2026-03-04T01:08:44.467975862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 4 01:08:44.469423 containerd[1462]: time="2026-03-04T01:08:44.468007163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 4 01:08:44.469423 containerd[1462]: time="2026-03-04T01:08:44.468030664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 4 01:08:44.469423 containerd[1462]: time="2026-03-04T01:08:44.468057200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 4 01:08:44.469423 containerd[1462]: time="2026-03-04T01:08:44.468077757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 4 01:08:44.469423 containerd[1462]: time="2026-03-04T01:08:44.468097795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 4 01:08:44.469423 containerd[1462]: time="2026-03-04T01:08:44.468119565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 4 01:08:44.469423 containerd[1462]: time="2026-03-04T01:08:44.468158624Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 4 01:08:44.469423 containerd[1462]: time="2026-03-04T01:08:44.468199514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 4 01:08:44.469423 containerd[1462]: time="2026-03-04T01:08:44.468220697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 4 01:08:44.469423 containerd[1462]: time="2026-03-04T01:08:44.468240412Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 4 01:08:44.469423 containerd[1462]: time="2026-03-04T01:08:44.468319792Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 4 01:08:44.469423 containerd[1462]: time="2026-03-04T01:08:44.468415600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 4 01:08:44.469913 containerd[1462]: time="2026-03-04T01:08:44.468439718Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 4 01:08:44.469913 containerd[1462]: time="2026-03-04T01:08:44.468460941Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 4 01:08:44.469913 containerd[1462]: time="2026-03-04T01:08:44.468481821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 4 01:08:44.469913 containerd[1462]: time="2026-03-04T01:08:44.468516771Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 4 01:08:44.469913 containerd[1462]: time="2026-03-04T01:08:44.468555037Z" level=info msg="NRI interface is disabled by configuration." Mar 4 01:08:44.469913 containerd[1462]: time="2026-03-04T01:08:44.468584145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 4 01:08:44.470115 containerd[1462]: time="2026-03-04T01:08:44.469073382Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 4 01:08:44.470115 containerd[1462]: time="2026-03-04T01:08:44.469168995Z" level=info msg="Connect containerd service" Mar 4 01:08:44.470115 containerd[1462]: time="2026-03-04T01:08:44.469235988Z" level=info msg="using legacy CRI server" Mar 4 01:08:44.470115 containerd[1462]: time="2026-03-04T01:08:44.469250010Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 4 01:08:44.470926 containerd[1462]: time="2026-03-04T01:08:44.470886666Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 4 01:08:44.472548 containerd[1462]: time="2026-03-04T01:08:44.472508735Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 4 01:08:44.472971 
containerd[1462]: time="2026-03-04T01:08:44.472857820Z" level=info msg="Start subscribing containerd event" Mar 4 01:08:44.473185 containerd[1462]: time="2026-03-04T01:08:44.473011688Z" level=info msg="Start recovering state" Mar 4 01:08:44.473469 containerd[1462]: time="2026-03-04T01:08:44.473412836Z" level=info msg="Start event monitor" Mar 4 01:08:44.473524 containerd[1462]: time="2026-03-04T01:08:44.473490876Z" level=info msg="Start snapshots syncer" Mar 4 01:08:44.473524 containerd[1462]: time="2026-03-04T01:08:44.473511834Z" level=info msg="Start cni network conf syncer for default" Mar 4 01:08:44.473813 containerd[1462]: time="2026-03-04T01:08:44.473520346Z" level=info msg="Start streaming server" Mar 4 01:08:44.474107 containerd[1462]: time="2026-03-04T01:08:44.474077049Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 4 01:08:44.474317 containerd[1462]: time="2026-03-04T01:08:44.474292343Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 4 01:08:44.474611 containerd[1462]: time="2026-03-04T01:08:44.474586019Z" level=info msg="containerd successfully booted in 0.060783s" Mar 4 01:08:44.474688 systemd[1]: Started containerd.service - containerd container runtime. Mar 4 01:08:44.662777 tar[1452]: linux-amd64/README.md Mar 4 01:08:44.688432 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 4 01:08:45.717654 systemd-networkd[1387]: eth0: Gained IPv6LL Mar 4 01:08:45.721466 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 4 01:08:45.726681 systemd[1]: Reached target network-online.target - Network is Online. Mar 4 01:08:45.740746 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 4 01:08:45.745627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:08:45.750805 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 4 01:08:45.780421 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 4 01:08:45.780864 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 4 01:08:45.785964 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 4 01:08:45.792045 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 4 01:08:46.521401 kernel: hrtimer: interrupt took 10143396 ns Mar 4 01:08:47.174313 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 4 01:08:47.187406 systemd[1]: Started sshd@0-10.0.0.73:22-10.0.0.1:55054.service - OpenSSH per-connection server daemon (10.0.0.1:55054). Mar 4 01:08:48.106650 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 55054 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:08:48.119780 sshd[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:48.153234 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 4 01:08:48.169911 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 4 01:08:48.178967 systemd-logind[1444]: New session 1 of user core. Mar 4 01:08:48.214889 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 4 01:08:48.235058 systemd[1]: Starting user@500.service - User Manager for UID 500... 
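(Annotation, not part of the log.) containerd finishes booting and serves on /run/containerd/containerd.sock; the CNI error above is expected until a network plugin writes a config into /etc/cni/net.d. As a hedged illustration only, a minimal Go client that connects to that same socket and prints the daemon version, assuming the github.com/containerd/containerd client module is available in the build environment:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
)

func main() {
	// Socket path taken from the "serving..." record in the log above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	v, err := client.Version(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("containerd", v.Version, v.Revision) // the log above reports v1.7.21
}
```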
Mar 4 01:08:48.244006 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 4 01:08:48.407746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:08:48.411904 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 4 01:08:48.415758 (kubelet)[1553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:08:48.472235 systemd[1542]: Queued start job for default target default.target. Mar 4 01:08:48.482831 systemd[1542]: Created slice app.slice - User Application Slice. Mar 4 01:08:48.482905 systemd[1542]: Reached target paths.target - Paths. Mar 4 01:08:48.482932 systemd[1542]: Reached target timers.target - Timers. Mar 4 01:08:48.485765 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 4 01:08:48.505880 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 4 01:08:48.506201 systemd[1542]: Reached target sockets.target - Sockets. Mar 4 01:08:48.506264 systemd[1542]: Reached target basic.target - Basic System. Mar 4 01:08:48.506543 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 4 01:08:48.506763 systemd[1542]: Reached target default.target - Main User Target. Mar 4 01:08:48.506858 systemd[1542]: Startup finished in 190ms. Mar 4 01:08:48.805005 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 4 01:08:48.809466 systemd[1]: Startup finished in 4.156s (kernel) + 8.455s (initrd) + 8.327s (userspace) = 20.940s. Mar 4 01:08:48.906998 systemd[1]: Started sshd@1-10.0.0.73:22-10.0.0.1:55056.service - OpenSSH per-connection server daemon (10.0.0.1:55056). Mar 4 01:08:48.956834 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 55056 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:08:48.961335 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:48.970727 systemd-logind[1444]: New session 2 of user core. Mar 4 01:08:48.991079 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 4 01:08:49.183561 sshd[1568]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:49.196489 systemd[1]: sshd@1-10.0.0.73:22-10.0.0.1:55056.service: Deactivated successfully. Mar 4 01:08:49.199539 systemd[1]: session-2.scope: Deactivated successfully. Mar 4 01:08:49.203548 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Mar 4 01:08:49.215066 systemd[1]: Started sshd@2-10.0.0.73:22-10.0.0.1:55068.service - OpenSSH per-connection server daemon (10.0.0.1:55068). Mar 4 01:08:49.216904 systemd-logind[1444]: Removed session 2. Mar 4 01:08:49.259979 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 55068 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:08:49.265432 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:49.273904 systemd-logind[1444]: New session 3 of user core. Mar 4 01:08:49.283892 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 4 01:08:49.555522 sshd[1575]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:49.598422 systemd[1]: sshd@2-10.0.0.73:22-10.0.0.1:55068.service: Deactivated successfully. Mar 4 01:08:49.601264 systemd[1]: session-3.scope: Deactivated successfully. Mar 4 01:08:49.604668 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. 
Mar 4 01:08:49.620708 systemd[1]: Started sshd@3-10.0.0.73:22-10.0.0.1:55084.service - OpenSSH per-connection server daemon (10.0.0.1:55084). Mar 4 01:08:49.622846 systemd-logind[1444]: Removed session 3. Mar 4 01:08:49.662178 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 55084 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:08:49.666343 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:49.677764 systemd-logind[1444]: New session 4 of user core. Mar 4 01:08:49.687657 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 4 01:08:49.852061 sshd[1583]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:49.863865 systemd[1]: sshd@3-10.0.0.73:22-10.0.0.1:55084.service: Deactivated successfully. Mar 4 01:08:49.867055 systemd[1]: session-4.scope: Deactivated successfully. Mar 4 01:08:49.869942 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Mar 4 01:08:49.890543 systemd[1]: Started sshd@4-10.0.0.73:22-10.0.0.1:55086.service - OpenSSH per-connection server daemon (10.0.0.1:55086). Mar 4 01:08:49.892022 systemd-logind[1444]: Removed session 4. Mar 4 01:08:49.926790 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 55086 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:08:49.929577 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:49.935674 systemd-logind[1444]: New session 5 of user core. Mar 4 01:08:49.944578 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 4 01:08:49.988034 kubelet[1553]: E0304 01:08:49.987899 1553 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:08:49.992827 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:08:49.993124 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:08:49.993737 systemd[1]: kubelet.service: Consumed 4.051s CPU time. Mar 4 01:08:50.011989 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 4 01:08:50.012684 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:08:50.029224 sudo[1593]: pam_unix(sudo:session): session closed for user root Mar 4 01:08:50.031434 sshd[1590]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:50.045750 systemd[1]: sshd@4-10.0.0.73:22-10.0.0.1:55086.service: Deactivated successfully. Mar 4 01:08:50.053776 systemd[1]: session-5.scope: Deactivated successfully. Mar 4 01:08:50.057211 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Mar 4 01:08:50.068838 systemd[1]: Started sshd@5-10.0.0.73:22-10.0.0.1:55094.service - OpenSSH per-connection server daemon (10.0.0.1:55094). Mar 4 01:08:50.070911 systemd-logind[1444]: Removed session 5. Mar 4 01:08:50.328060 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 55094 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:08:50.330894 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:50.337953 systemd-logind[1444]: New session 6 of user core. Mar 4 01:08:50.350664 systemd[1]: Started session-6.scope - Session 6 of User core. 
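(Annotation, not part of the log.) The kubelet failure above ("failed to load kubelet config file ... /var/lib/kubelet/config.yaml: no such file or directory") is the expected crash/restart loop before that file exists; in setups like this it is typically written later by kubeadm or provisioning tooling, which is an assumption about this run rather than something the log states. A purely illustrative Go helper that waits for the file the way an operator script might, with the path taken from the error message above:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or the timeout expires.
// Illustrative only: systemd itself handles the restarts seen in this log.
func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForFile("/var/lib/kubelet/config.yaml", 5*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kubelet config present")
}
```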
Mar 4 01:08:50.412952 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 4 01:08:50.413563 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:08:50.421239 sudo[1603]: pam_unix(sudo:session): session closed for user root Mar 4 01:08:50.430913 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 4 01:08:50.431428 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:08:50.454933 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 4 01:08:50.463543 auditctl[1606]: No rules Mar 4 01:08:50.465570 systemd[1]: audit-rules.service: Deactivated successfully. Mar 4 01:08:50.471341 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 4 01:08:50.479774 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 4 01:08:50.553747 augenrules[1624]: No rules Mar 4 01:08:50.556208 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 4 01:08:50.557784 sudo[1602]: pam_unix(sudo:session): session closed for user root Mar 4 01:08:50.560467 sshd[1599]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:50.574681 systemd[1]: sshd@5-10.0.0.73:22-10.0.0.1:55094.service: Deactivated successfully. Mar 4 01:08:50.576985 systemd[1]: session-6.scope: Deactivated successfully. Mar 4 01:08:50.579268 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Mar 4 01:08:50.595019 systemd[1]: Started sshd@6-10.0.0.73:22-10.0.0.1:55102.service - OpenSSH per-connection server daemon (10.0.0.1:55102). Mar 4 01:08:50.596962 systemd-logind[1444]: Removed session 6. Mar 4 01:08:50.638342 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 55102 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:08:50.641211 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:50.649501 systemd-logind[1444]: New session 7 of user core. Mar 4 01:08:50.659737 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 4 01:08:50.736201 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 4 01:08:50.736724 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:08:53.081862 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 4 01:08:53.085887 (dockerd)[1653]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 4 01:08:56.287817 dockerd[1653]: time="2026-03-04T01:08:56.286967100Z" level=info msg="Starting up" Mar 4 01:08:56.994153 systemd[1]: var-lib-docker-metacopy\x2dcheck2093842623-merged.mount: Deactivated successfully. Mar 4 01:08:57.041655 dockerd[1653]: time="2026-03-04T01:08:57.041495132Z" level=info msg="Loading containers: start." Mar 4 01:08:57.384632 kernel: Initializing XFRM netlink socket Mar 4 01:08:57.732540 systemd-networkd[1387]: docker0: Link UP Mar 4 01:08:57.768637 dockerd[1653]: time="2026-03-04T01:08:57.768309232Z" level=info msg="Loading containers: done." 
Mar 4 01:08:57.893241 dockerd[1653]: time="2026-03-04T01:08:57.893006385Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 4 01:08:57.893954 dockerd[1653]: time="2026-03-04T01:08:57.893833093Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 4 01:08:57.894289 dockerd[1653]: time="2026-03-04T01:08:57.894196819Z" level=info msg="Daemon has completed initialization" Mar 4 01:08:57.985805 dockerd[1653]: time="2026-03-04T01:08:57.985340666Z" level=info msg="API listen on /run/docker.sock" Mar 4 01:08:57.986082 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 4 01:08:59.865755 containerd[1462]: time="2026-03-04T01:08:59.864716595Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 4 01:09:00.245604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 4 01:09:00.254751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:09:01.151792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3505439944.mount: Deactivated successfully. Mar 4 01:09:01.227898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:09:01.247021 (kubelet)[1820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:09:01.438564 kubelet[1820]: E0304 01:09:01.437422 1820 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:09:01.445994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:09:01.446286 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:09:01.446841 systemd[1]: kubelet.service: Consumed 1.147s CPU time. 
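(Annotation, not part of the log.) dockerd reports "API listen on /run/docker.sock" and version 26.1.0 above. As a hedged sketch, not part of the boot sequence, a minimal Go program that queries that socket for the daemon version, assuming the github.com/docker/docker/client SDK is available:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// Socket path taken from dockerd's "API listen on /run/docker.sock" record above.
	cli, err := client.NewClientWithOpts(
		client.WithHost("unix:///run/docker.sock"),
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	v, err := cli.ServerVersion(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker", v.Version) // the log above reports version=26.1.0
}
```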
Mar 4 01:09:03.478783 containerd[1462]: time="2026-03-04T01:09:03.478529334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:03.479849 containerd[1462]: time="2026-03-04T01:09:03.479198997Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 4 01:09:03.480817 containerd[1462]: time="2026-03-04T01:09:03.480737032Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:03.485709 containerd[1462]: time="2026-03-04T01:09:03.485653408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:03.487052 containerd[1462]: time="2026-03-04T01:09:03.487002170Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 3.622227948s" Mar 4 01:09:03.487052 containerd[1462]: time="2026-03-04T01:09:03.487045321Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 4 01:09:03.491595 containerd[1462]: time="2026-03-04T01:09:03.491555375Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 4 01:09:06.110729 containerd[1462]: time="2026-03-04T01:09:06.110279837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:06.111853 containerd[1462]: time="2026-03-04T01:09:06.111131876Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 4 01:09:06.112911 containerd[1462]: time="2026-03-04T01:09:06.112828452Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:06.118912 containerd[1462]: time="2026-03-04T01:09:06.118671578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:06.121066 containerd[1462]: time="2026-03-04T01:09:06.120984294Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 2.62937882s" Mar 4 01:09:06.121162 containerd[1462]: time="2026-03-04T01:09:06.121065975Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 4 01:09:06.133307 containerd[1462]: 
time="2026-03-04T01:09:06.132790065Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 4 01:09:08.182464 containerd[1462]: time="2026-03-04T01:09:08.182110589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:08.183947 containerd[1462]: time="2026-03-04T01:09:08.183481839Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 4 01:09:08.185288 containerd[1462]: time="2026-03-04T01:09:08.185208850Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:08.190556 containerd[1462]: time="2026-03-04T01:09:08.190341005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:08.193026 containerd[1462]: time="2026-03-04T01:09:08.192899806Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 2.059925287s" Mar 4 01:09:08.193026 containerd[1462]: time="2026-03-04T01:09:08.193020868Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 4 01:09:08.201916 containerd[1462]: time="2026-03-04T01:09:08.201812901Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 4 01:09:09.257697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2370318436.mount: Deactivated successfully. 
Mar 4 01:09:09.624463 containerd[1462]: time="2026-03-04T01:09:09.624072444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:09.626987 containerd[1462]: time="2026-03-04T01:09:09.626801827Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 4 01:09:09.628720 containerd[1462]: time="2026-03-04T01:09:09.628650886Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:09.632237 containerd[1462]: time="2026-03-04T01:09:09.632122656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:09.633420 containerd[1462]: time="2026-03-04T01:09:09.633289020Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.431392131s" Mar 4 01:09:09.633486 containerd[1462]: time="2026-03-04T01:09:09.633424743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 4 01:09:09.634482 containerd[1462]: time="2026-03-04T01:09:09.634430596Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 4 01:09:10.543551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1829697655.mount: Deactivated successfully. Mar 4 01:09:11.709195 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 4 01:09:11.716813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:09:12.449810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:09:12.450070 (kubelet)[1953]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:09:12.725632 kubelet[1953]: E0304 01:09:12.724572 1953 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:09:12.731186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:09:12.731680 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:09:12.732518 systemd[1]: kubelet.service: Consumed 1.111s CPU time. 
Mar 4 01:09:13.410081 containerd[1462]: time="2026-03-04T01:09:13.409597497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:13.411740 containerd[1462]: time="2026-03-04T01:09:13.410642819Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 4 01:09:13.411977 containerd[1462]: time="2026-03-04T01:09:13.411887922Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:13.416218 containerd[1462]: time="2026-03-04T01:09:13.416013250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:13.417851 containerd[1462]: time="2026-03-04T01:09:13.417773286Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 3.783310244s" Mar 4 01:09:13.417851 containerd[1462]: time="2026-03-04T01:09:13.417831647Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 4 01:09:13.421665 containerd[1462]: time="2026-03-04T01:09:13.421607768Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 4 01:09:13.945979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3350175614.mount: Deactivated successfully. 
Mar 4 01:09:13.953496 containerd[1462]: time="2026-03-04T01:09:13.953327094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:13.954459 containerd[1462]: time="2026-03-04T01:09:13.954310482Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 4 01:09:13.956079 containerd[1462]: time="2026-03-04T01:09:13.955978536Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:13.961003 containerd[1462]: time="2026-03-04T01:09:13.960849779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:13.962161 containerd[1462]: time="2026-03-04T01:09:13.962062142Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 540.393689ms" Mar 4 01:09:13.962161 containerd[1462]: time="2026-03-04T01:09:13.962110117Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 4 01:09:13.965100 containerd[1462]: time="2026-03-04T01:09:13.964993320Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 4 01:09:14.613448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2237774376.mount: Deactivated successfully. Mar 4 01:09:16.453961 containerd[1462]: time="2026-03-04T01:09:16.453546886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:16.455273 containerd[1462]: time="2026-03-04T01:09:16.454501039Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 4 01:09:16.456036 containerd[1462]: time="2026-03-04T01:09:16.455960131Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:16.459761 containerd[1462]: time="2026-03-04T01:09:16.459668228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:16.461620 containerd[1462]: time="2026-03-04T01:09:16.461477487Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 2.496453824s" Mar 4 01:09:16.461685 containerd[1462]: time="2026-03-04T01:09:16.461600577Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 4 01:09:20.739985 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 4 01:09:20.740193 systemd[1]: kubelet.service: Consumed 1.111s CPU time. Mar 4 01:09:20.751990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:09:20.785172 systemd[1]: Reloading requested from client PID 2058 ('systemctl') (unit session-7.scope)... Mar 4 01:09:20.785219 systemd[1]: Reloading... Mar 4 01:09:20.895479 zram_generator::config[2097]: No configuration found. Mar 4 01:09:21.016484 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:09:21.138322 systemd[1]: Reloading finished in 352 ms. Mar 4 01:09:21.217117 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:09:21.222323 systemd[1]: kubelet.service: Deactivated successfully. Mar 4 01:09:21.222774 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:09:21.234739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:09:21.420126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:09:21.443033 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 4 01:09:21.731634 kubelet[2147]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 4 01:09:21.731634 kubelet[2147]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 4 01:09:21.731634 kubelet[2147]: I0304 01:09:21.730225 2147 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 4 01:09:22.182923 kubelet[2147]: I0304 01:09:22.182260 2147 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 4 01:09:22.182923 kubelet[2147]: I0304 01:09:22.182322 2147 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 4 01:09:22.183353 kubelet[2147]: I0304 01:09:22.183302 2147 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 4 01:09:22.183353 kubelet[2147]: I0304 01:09:22.183349 2147 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 4 01:09:22.183781 kubelet[2147]: I0304 01:09:22.183704 2147 server.go:956] "Client rotation is on, will bootstrap in background" Mar 4 01:09:22.196712 kubelet[2147]: E0304 01:09:22.196588 2147 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 4 01:09:22.200519 kubelet[2147]: I0304 01:09:22.200471 2147 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 4 01:09:22.212064 kubelet[2147]: E0304 01:09:22.211910 2147 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 4 01:09:22.212064 kubelet[2147]: I0304 01:09:22.212062 2147 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 4 01:09:22.224269 kubelet[2147]: I0304 01:09:22.224158 2147 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 4 01:09:22.227601 kubelet[2147]: I0304 01:09:22.227503 2147 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 4 01:09:22.228412 kubelet[2147]: I0304 01:09:22.227556 2147 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 4 01:09:22.228543 kubelet[2147]: I0304 01:09:22.228354 2147 topology_manager.go:138] "Creating topology manager with none policy" Mar 4 01:09:22.228543 kubelet[2147]: I0304 01:09:22.228437 2147 container_manager_linux.go:306] "Creating device plugin manager" Mar 4 01:09:22.228828 kubelet[2147]: I0304 01:09:22.228740 2147 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 4 
01:09:22.231439 kubelet[2147]: I0304 01:09:22.231272 2147 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:09:22.232256 kubelet[2147]: I0304 01:09:22.232172 2147 kubelet.go:475] "Attempting to sync node with API server" Mar 4 01:09:22.232326 kubelet[2147]: I0304 01:09:22.232268 2147 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 4 01:09:22.233341 kubelet[2147]: I0304 01:09:22.233270 2147 kubelet.go:387] "Adding apiserver pod source" Mar 4 01:09:22.233471 kubelet[2147]: I0304 01:09:22.233423 2147 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 4 01:09:22.236476 kubelet[2147]: E0304 01:09:22.236429 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 4 01:09:22.239069 kubelet[2147]: E0304 01:09:22.237263 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 4 01:09:22.239690 kubelet[2147]: I0304 01:09:22.239585 2147 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 4 01:09:22.241014 kubelet[2147]: I0304 01:09:22.240929 2147 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 4 01:09:22.241136 kubelet[2147]: I0304 01:09:22.241094 2147 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 4 01:09:22.242032 kubelet[2147]: W0304 01:09:22.241915 2147 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
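(Annotation, not part of the log.) The reflector and certificate errors above all reduce to the same condition: nothing is listening on 10.0.0.73:6443 yet, because the kube-apiserver static pod has not started. A minimal, illustrative probe of that endpoint; the address is taken from the "connection refused" messages in the log, and the probe itself is not part of the boot sequence:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the kubelet's "connection refused" errors above.
	conn, err := net.DialTimeout("tcp", "10.0.0.73:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err) // matches the log until the static pod is up
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```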
Mar 4 01:09:22.249497 kubelet[2147]: I0304 01:09:22.249455 2147 server.go:1262] "Started kubelet" Mar 4 01:09:22.251309 kubelet[2147]: I0304 01:09:22.251228 2147 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 4 01:09:22.253214 kubelet[2147]: I0304 01:09:22.253173 2147 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 4 01:09:22.253651 kubelet[2147]: I0304 01:09:22.253592 2147 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 4 01:09:22.253771 kubelet[2147]: I0304 01:09:22.251862 2147 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 4 01:09:22.254593 kubelet[2147]: I0304 01:09:22.254538 2147 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 4 01:09:22.257102 kubelet[2147]: E0304 01:09:22.255481 2147 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.73:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18997e163880e17d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-04 01:09:22.249351549 +0000 UTC m=+0.593718632,LastTimestamp:2026-03-04 01:09:22.249351549 +0000 UTC m=+0.593718632,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 4 01:09:22.257866 kubelet[2147]: I0304 01:09:22.257826 2147 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 4 01:09:22.258581 kubelet[2147]: E0304 01:09:22.258346 2147 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 4 01:09:22.260416 kubelet[2147]: I0304 01:09:22.258928 2147 factory.go:223] Registration of the systemd container factory successfully Mar 4 01:09:22.260416 kubelet[2147]: I0304 01:09:22.259115 2147 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 4 01:09:22.260416 kubelet[2147]: I0304 01:09:22.259494 2147 server.go:310] "Adding debug handlers to kubelet server" Mar 4 01:09:22.261725 kubelet[2147]: E0304 01:09:22.261596 2147 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:09:22.262005 kubelet[2147]: E0304 01:09:22.261898 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="200ms" Mar 4 01:09:22.262272 kubelet[2147]: I0304 01:09:22.262159 2147 factory.go:223] Registration of the containerd container factory successfully Mar 4 01:09:22.267448 kubelet[2147]: I0304 01:09:22.266899 2147 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 4 01:09:22.268582 kubelet[2147]: I0304 01:09:22.268510 2147 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 4 01:09:22.268965 kubelet[2147]: I0304 01:09:22.268947 2147 reconciler.go:29] "Reconciler: start to sync state" Mar 4 01:09:22.274171 kubelet[2147]: E0304 01:09:22.273539 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 4 01:09:22.576010 kubelet[2147]: E0304 01:09:22.574042 2147 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:09:22.596444 kubelet[2147]: E0304 01:09:22.596025 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="400ms" Mar 4 01:09:22.633063 kubelet[2147]: I0304 01:09:22.632926 2147 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 4 01:09:22.634457 kubelet[2147]: I0304 01:09:22.633724 2147 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 4 01:09:22.634457 kubelet[2147]: I0304 01:09:22.633741 2147 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 4 01:09:22.634457 kubelet[2147]: I0304 01:09:22.633759 2147 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:09:22.635577 kubelet[2147]: I0304 01:09:22.635532 2147 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 4 01:09:22.635640 kubelet[2147]: I0304 01:09:22.635608 2147 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 4 01:09:22.635739 kubelet[2147]: I0304 01:09:22.635721 2147 kubelet.go:2428] "Starting kubelet main sync loop" Mar 4 01:09:22.635990 kubelet[2147]: E0304 01:09:22.635928 2147 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 01:09:22.637342 kubelet[2147]: E0304 01:09:22.637283 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 4 01:09:22.641491 kubelet[2147]: I0304 01:09:22.639301 2147 policy_none.go:49] "None policy: Start" Mar 4 01:09:22.641491 kubelet[2147]: I0304 01:09:22.639527 2147 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 4 01:09:22.641491 kubelet[2147]: I0304 01:09:22.639598 2147 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 4 01:09:22.642643 kubelet[2147]: I0304 01:09:22.642580 2147 policy_none.go:47] "Start" Mar 4 01:09:22.656459 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 4 01:09:22.674574 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 4 01:09:22.679997 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 4 01:09:22.686450 kubelet[2147]: E0304 01:09:22.686291 2147 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:09:22.698719 kubelet[2147]: E0304 01:09:22.698085 2147 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 01:09:22.699311 kubelet[2147]: I0304 01:09:22.699164 2147 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 4 01:09:22.699311 kubelet[2147]: I0304 01:09:22.699241 2147 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 01:09:22.700059 kubelet[2147]: I0304 01:09:22.699871 2147 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 4 01:09:22.701855 kubelet[2147]: E0304 01:09:22.701780 2147 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 4 01:09:22.701908 kubelet[2147]: E0304 01:09:22.701870 2147 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 4 01:09:22.750786 systemd[1]: Created slice kubepods-burstable-podfdabcb4f9a54e808b93697b61073a033.slice - libcontainer container kubepods-burstable-podfdabcb4f9a54e808b93697b61073a033.slice. Mar 4 01:09:22.762230 kubelet[2147]: E0304 01:09:22.762051 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:09:22.767063 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. 
Mar 4 01:09:22.770473 kubelet[2147]: E0304 01:09:22.770322 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:09:22.774108 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. Mar 4 01:09:22.776721 kubelet[2147]: E0304 01:09:22.776588 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:09:22.787426 kubelet[2147]: I0304 01:09:22.787195 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fdabcb4f9a54e808b93697b61073a033-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fdabcb4f9a54e808b93697b61073a033\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:09:22.787426 kubelet[2147]: I0304 01:09:22.787330 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fdabcb4f9a54e808b93697b61073a033-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fdabcb4f9a54e808b93697b61073a033\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:09:22.787640 kubelet[2147]: I0304 01:09:22.787457 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:09:22.787640 kubelet[2147]: I0304 01:09:22.787528 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:09:22.787640 kubelet[2147]: I0304 01:09:22.787554 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:09:22.787640 kubelet[2147]: I0304 01:09:22.787578 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:09:22.787640 kubelet[2147]: I0304 01:09:22.787603 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:09:22.787856 kubelet[2147]: I0304 01:09:22.787629 2147 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fdabcb4f9a54e808b93697b61073a033-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fdabcb4f9a54e808b93697b61073a033\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:09:22.787856 kubelet[2147]: I0304 01:09:22.787707 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 4 01:09:22.809500 kubelet[2147]: I0304 01:09:22.809162 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 01:09:22.810346 kubelet[2147]: E0304 01:09:22.810277 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Mar 4 01:09:23.001315 kubelet[2147]: E0304 01:09:22.999242 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="800ms" Mar 4 01:09:23.015035 kubelet[2147]: I0304 01:09:23.014941 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 01:09:23.015875 kubelet[2147]: E0304 01:09:23.015771 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Mar 4 01:09:23.067024 kubelet[2147]: E0304 01:09:23.066906 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:23.069461 containerd[1462]: time="2026-03-04T01:09:23.069281575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fdabcb4f9a54e808b93697b61073a033,Namespace:kube-system,Attempt:0,}" Mar 4 01:09:23.074006 kubelet[2147]: E0304 01:09:23.073947 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:23.074852 containerd[1462]: time="2026-03-04T01:09:23.074788217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 4 01:09:23.079793 kubelet[2147]: E0304 01:09:23.079741 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:23.080484 containerd[1462]: time="2026-03-04T01:09:23.080183183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 4 01:09:23.381211 kubelet[2147]: E0304 01:09:23.380765 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 4 01:09:23.418804 kubelet[2147]: I0304 01:09:23.418579 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 01:09:23.419102 kubelet[2147]: E0304 01:09:23.419031 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Mar 4 01:09:23.522460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585131822.mount: Deactivated successfully. Mar 4 01:09:23.530591 containerd[1462]: time="2026-03-04T01:09:23.530498904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:09:23.534280 containerd[1462]: time="2026-03-04T01:09:23.534182335Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 4 01:09:23.535720 containerd[1462]: time="2026-03-04T01:09:23.535625722Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:09:23.537171 containerd[1462]: time="2026-03-04T01:09:23.537084154Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:09:23.538498 containerd[1462]: time="2026-03-04T01:09:23.538430905Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 01:09:23.539952 containerd[1462]: time="2026-03-04T01:09:23.539870908Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:09:23.541278 containerd[1462]: time="2026-03-04T01:09:23.541147627Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 01:09:23.544169 containerd[1462]: time="2026-03-04T01:09:23.544083952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:09:23.548074 containerd[1462]: time="2026-03-04T01:09:23.548001042Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 478.498217ms" Mar 4 01:09:23.551795 containerd[1462]: time="2026-03-04T01:09:23.551744285Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 476.851614ms" Mar 4 01:09:23.552816 containerd[1462]: time="2026-03-04T01:09:23.552751795Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 472.517437ms" Mar 4 01:09:23.726698 kubelet[2147]: E0304 01:09:23.722955 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 4 01:09:23.726698 kubelet[2147]: E0304 01:09:23.723001 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 4 01:09:23.765107 kubelet[2147]: E0304 01:09:23.763711 2147 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 4 01:09:23.807562 kubelet[2147]: E0304 01:09:23.806898 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="1.6s" Mar 4 01:09:23.959546 containerd[1462]: time="2026-03-04T01:09:23.958279599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:09:23.959546 containerd[1462]: time="2026-03-04T01:09:23.958450056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:09:23.959546 containerd[1462]: time="2026-03-04T01:09:23.958460695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:09:23.959546 containerd[1462]: time="2026-03-04T01:09:23.958628036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:09:23.966281 containerd[1462]: time="2026-03-04T01:09:23.965121470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:09:23.966281 containerd[1462]: time="2026-03-04T01:09:23.965485126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:09:23.966281 containerd[1462]: time="2026-03-04T01:09:23.965631968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:09:23.966281 containerd[1462]: time="2026-03-04T01:09:23.966070812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:09:23.975843 containerd[1462]: time="2026-03-04T01:09:23.975738288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:09:23.975934 containerd[1462]: time="2026-03-04T01:09:23.975858731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:09:23.976020 containerd[1462]: time="2026-03-04T01:09:23.975890781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:09:23.976232 containerd[1462]: time="2026-03-04T01:09:23.976142498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:09:24.216537 systemd[1]: Started cri-containerd-a6ae18df40ee48637241e314a32fa78c08073df181cde41151930af883ae0fd1.scope - libcontainer container a6ae18df40ee48637241e314a32fa78c08073df181cde41151930af883ae0fd1. Mar 4 01:09:24.222619 systemd[1]: Started cri-containerd-fa3728b46e60b4a7c873b1e772abe290deb918668cce1e1ab559aa43a1f80a12.scope - libcontainer container fa3728b46e60b4a7c873b1e772abe290deb918668cce1e1ab559aa43a1f80a12. Mar 4 01:09:24.223434 kubelet[2147]: I0304 01:09:24.223093 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 01:09:24.226807 kubelet[2147]: E0304 01:09:24.226624 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Mar 4 01:09:24.238597 kubelet[2147]: E0304 01:09:24.238458 2147 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 4 01:09:24.238924 systemd[1]: Started cri-containerd-93dcc2001620c97bb175bebb911bb575ef77eebfab114298c3abd76ff825bfe5.scope - libcontainer container 93dcc2001620c97bb175bebb911bb575ef77eebfab114298c3abd76ff825bfe5. 
Mar 4 01:09:24.611574 containerd[1462]: time="2026-03-04T01:09:24.611501439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"93dcc2001620c97bb175bebb911bb575ef77eebfab114298c3abd76ff825bfe5\"" Mar 4 01:09:24.615447 kubelet[2147]: E0304 01:09:24.615323 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:24.617587 containerd[1462]: time="2026-03-04T01:09:24.617469644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fdabcb4f9a54e808b93697b61073a033,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6ae18df40ee48637241e314a32fa78c08073df181cde41151930af883ae0fd1\"" Mar 4 01:09:24.619271 kubelet[2147]: E0304 01:09:24.619068 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:24.630178 containerd[1462]: time="2026-03-04T01:09:24.630075117Z" level=info msg="CreateContainer within sandbox \"93dcc2001620c97bb175bebb911bb575ef77eebfab114298c3abd76ff825bfe5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 4 01:09:24.633007 containerd[1462]: time="2026-03-04T01:09:24.632953028Z" level=info msg="CreateContainer within sandbox \"a6ae18df40ee48637241e314a32fa78c08073df181cde41151930af883ae0fd1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 4 01:09:24.636079 containerd[1462]: time="2026-03-04T01:09:24.636022965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa3728b46e60b4a7c873b1e772abe290deb918668cce1e1ab559aa43a1f80a12\"" Mar 4 01:09:24.636923 kubelet[2147]: E0304 01:09:24.636876 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:24.642596 containerd[1462]: time="2026-03-04T01:09:24.642461575Z" level=info msg="CreateContainer within sandbox \"fa3728b46e60b4a7c873b1e772abe290deb918668cce1e1ab559aa43a1f80a12\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 4 01:09:24.649790 containerd[1462]: time="2026-03-04T01:09:24.649718814Z" level=info msg="CreateContainer within sandbox \"93dcc2001620c97bb175bebb911bb575ef77eebfab114298c3abd76ff825bfe5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"653ad716440af6045e63dd908fc8057108c965744c7176bab806d1b6d50ca94b\"" Mar 4 01:09:24.650580 containerd[1462]: time="2026-03-04T01:09:24.650517714Z" level=info msg="StartContainer for \"653ad716440af6045e63dd908fc8057108c965744c7176bab806d1b6d50ca94b\"" Mar 4 01:09:24.657786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3824312276.mount: Deactivated successfully. 
Mar 4 01:09:24.665985 containerd[1462]: time="2026-03-04T01:09:24.665917741Z" level=info msg="CreateContainer within sandbox \"a6ae18df40ee48637241e314a32fa78c08073df181cde41151930af883ae0fd1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3ebd3af4e10cc8594efabaa9d68587e81ba778752f4230588de3d4143fb702f0\"" Mar 4 01:09:24.667212 containerd[1462]: time="2026-03-04T01:09:24.667046238Z" level=info msg="StartContainer for \"3ebd3af4e10cc8594efabaa9d68587e81ba778752f4230588de3d4143fb702f0\"" Mar 4 01:09:24.672408 containerd[1462]: time="2026-03-04T01:09:24.671737291Z" level=info msg="CreateContainer within sandbox \"fa3728b46e60b4a7c873b1e772abe290deb918668cce1e1ab559aa43a1f80a12\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"02d9e7f3dbb36d8e6b59b81f070fd0183d583c0f2de0a319b60e285cec727c48\"" Mar 4 01:09:24.672408 containerd[1462]: time="2026-03-04T01:09:24.672336013Z" level=info msg="StartContainer for \"02d9e7f3dbb36d8e6b59b81f070fd0183d583c0f2de0a319b60e285cec727c48\"" Mar 4 01:09:24.727695 systemd[1]: Started cri-containerd-653ad716440af6045e63dd908fc8057108c965744c7176bab806d1b6d50ca94b.scope - libcontainer container 653ad716440af6045e63dd908fc8057108c965744c7176bab806d1b6d50ca94b. Mar 4 01:09:24.806074 systemd[1]: Started cri-containerd-02d9e7f3dbb36d8e6b59b81f070fd0183d583c0f2de0a319b60e285cec727c48.scope - libcontainer container 02d9e7f3dbb36d8e6b59b81f070fd0183d583c0f2de0a319b60e285cec727c48. Mar 4 01:09:24.819580 systemd[1]: Started cri-containerd-3ebd3af4e10cc8594efabaa9d68587e81ba778752f4230588de3d4143fb702f0.scope - libcontainer container 3ebd3af4e10cc8594efabaa9d68587e81ba778752f4230588de3d4143fb702f0. Mar 4 01:09:24.880284 containerd[1462]: time="2026-03-04T01:09:24.879178343Z" level=info msg="StartContainer for \"653ad716440af6045e63dd908fc8057108c965744c7176bab806d1b6d50ca94b\" returns successfully" Mar 4 01:09:24.896589 containerd[1462]: time="2026-03-04T01:09:24.892924596Z" level=info msg="StartContainer for \"3ebd3af4e10cc8594efabaa9d68587e81ba778752f4230588de3d4143fb702f0\" returns successfully" Mar 4 01:09:24.906586 containerd[1462]: time="2026-03-04T01:09:24.906482888Z" level=info msg="StartContainer for \"02d9e7f3dbb36d8e6b59b81f070fd0183d583c0f2de0a319b60e285cec727c48\" returns successfully" Mar 4 01:09:25.767142 kubelet[2147]: E0304 01:09:25.767046 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:09:25.767765 kubelet[2147]: E0304 01:09:25.767279 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:25.771563 kubelet[2147]: E0304 01:09:25.771496 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:09:25.771811 kubelet[2147]: E0304 01:09:25.771745 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:25.776575 kubelet[2147]: E0304 01:09:25.776512 2147 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:09:25.776815 kubelet[2147]: E0304 01:09:25.776754 2147 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:25.829150 kubelet[2147]: I0304 01:09:25.829066 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 01:09:26.417024 kubelet[2147]: E0304 01:09:26.416919 2147 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 4 01:09:26.510043 kubelet[2147]: I0304 01:09:26.509503 2147 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 4 01:09:26.510043 kubelet[2147]: E0304 01:09:26.509552 2147 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 4 01:09:26.563220 kubelet[2147]: I0304 01:09:26.563094 2147 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 4 01:09:26.582086 kubelet[2147]: E0304 01:09:26.581543 2147 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 4 01:09:26.582086 kubelet[2147]: I0304 01:09:26.581658 2147 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:09:26.585065 kubelet[2147]: E0304 01:09:26.584981 2147 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:09:26.585214 kubelet[2147]: I0304 01:09:26.585019 2147 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 4 01:09:26.587698 kubelet[2147]: E0304 01:09:26.587609 2147 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 4 01:09:26.781688 kubelet[2147]: I0304 01:09:26.778956 2147 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 4 01:09:26.781688 kubelet[2147]: I0304 01:09:26.779355 2147 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 4 01:09:26.781688 kubelet[2147]: I0304 01:09:26.779535 2147 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:09:26.781688 kubelet[2147]: E0304 01:09:26.781595 2147 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 4 01:09:26.781688 kubelet[2147]: E0304 01:09:26.781684 2147 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 4 01:09:26.782504 kubelet[2147]: E0304 01:09:26.781872 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:26.782504 kubelet[2147]: E0304 01:09:26.781900 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:26.782856 kubelet[2147]: E0304 01:09:26.782732 2147 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:09:26.782929 kubelet[2147]: E0304 01:09:26.782912 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:27.239307 kubelet[2147]: I0304 01:09:27.239046 2147 apiserver.go:52] "Watching apiserver" Mar 4 01:09:27.269194 kubelet[2147]: I0304 01:09:27.269130 2147 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 4 01:09:27.845207 kubelet[2147]: I0304 01:09:27.838581 2147 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 4 01:09:27.880282 kubelet[2147]: I0304 01:09:27.846431 2147 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 4 01:09:28.140079 kubelet[2147]: E0304 01:09:28.138689 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:28.143879 kubelet[2147]: E0304 01:09:28.143697 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:28.852072 kubelet[2147]: E0304 01:09:28.849781 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:28.852072 kubelet[2147]: E0304 01:09:28.850276 2147 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:28.871660 update_engine[1446]: I20260304 01:09:28.870153 1446 update_attempter.cc:509] Updating boot flags... Mar 4 01:09:29.136491 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2450) Mar 4 01:09:29.439623 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2450) Mar 4 01:09:30.129865 systemd[1]: Reloading requested from client PID 2458 ('systemctl') (unit session-7.scope)... Mar 4 01:09:30.129886 systemd[1]: Reloading... Mar 4 01:09:30.339513 zram_generator::config[2497]: No configuration found. Mar 4 01:09:30.639218 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:09:30.776652 systemd[1]: Reloading finished in 645 ms. Mar 4 01:09:30.865957 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:09:30.908188 systemd[1]: kubelet.service: Deactivated successfully. Mar 4 01:09:30.908780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:09:30.908854 systemd[1]: kubelet.service: Consumed 4.389s CPU time, 130.6M memory peak, 0B memory swap peak. Mar 4 01:09:30.927082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 4 01:09:31.263863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:09:31.284993 (kubelet)[2542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 4 01:09:31.475444 kubelet[2542]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 4 01:09:31.475444 kubelet[2542]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 4 01:09:31.475444 kubelet[2542]: I0304 01:09:31.475319 2542 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 4 01:09:31.488026 kubelet[2542]: I0304 01:09:31.487954 2542 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 4 01:09:31.488026 kubelet[2542]: I0304 01:09:31.487999 2542 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 4 01:09:31.488026 kubelet[2542]: I0304 01:09:31.488024 2542 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 4 01:09:31.488026 kubelet[2542]: I0304 01:09:31.488036 2542 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 4 01:09:31.488341 kubelet[2542]: I0304 01:09:31.488279 2542 server.go:956] "Client rotation is on, will bootstrap in background" Mar 4 01:09:31.495760 kubelet[2542]: I0304 01:09:31.494997 2542 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 4 01:09:31.501217 kubelet[2542]: I0304 01:09:31.501080 2542 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 4 01:09:31.505610 kubelet[2542]: E0304 01:09:31.505541 2542 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 4 01:09:31.505693 kubelet[2542]: I0304 01:09:31.505634 2542 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 4 01:09:31.519514 kubelet[2542]: I0304 01:09:31.516890 2542 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 4 01:09:31.519514 kubelet[2542]: I0304 01:09:31.517288 2542 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 4 01:09:31.519514 kubelet[2542]: I0304 01:09:31.518038 2542 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 4 01:09:31.519514 kubelet[2542]: I0304 01:09:31.518286 2542 topology_manager.go:138] "Creating topology manager with none policy" Mar 4 01:09:31.519901 kubelet[2542]: I0304 01:09:31.518297 2542 container_manager_linux.go:306] "Creating device plugin manager" Mar 4 01:09:31.519901 kubelet[2542]: I0304 01:09:31.518323 2542 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 4 01:09:31.519901 kubelet[2542]: I0304 01:09:31.518651 2542 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:09:31.519901 kubelet[2542]: I0304 01:09:31.518880 2542 kubelet.go:475] "Attempting to sync node with API server" Mar 4 01:09:31.519901 kubelet[2542]: I0304 01:09:31.518900 2542 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 4 01:09:31.519901 kubelet[2542]: I0304 01:09:31.518935 2542 kubelet.go:387] "Adding apiserver pod source" Mar 4 01:09:31.519901 kubelet[2542]: I0304 01:09:31.518997 2542 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 4 01:09:31.526553 kubelet[2542]: I0304 01:09:31.526419 2542 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 4 01:09:31.527042 kubelet[2542]: I0304 01:09:31.526927 2542 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 4 01:09:31.527042 kubelet[2542]: I0304 01:09:31.526973 2542 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 4 01:09:31.539344 kubelet[2542]: I0304 
01:09:31.539120 2542 server.go:1262] "Started kubelet" Mar 4 01:09:31.540319 kubelet[2542]: I0304 01:09:31.540128 2542 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 4 01:09:31.543530 kubelet[2542]: I0304 01:09:31.540948 2542 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 4 01:09:31.543530 kubelet[2542]: I0304 01:09:31.541091 2542 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 4 01:09:31.543530 kubelet[2542]: I0304 01:09:31.541665 2542 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 4 01:09:31.543885 kubelet[2542]: I0304 01:09:31.543837 2542 server.go:310] "Adding debug handlers to kubelet server" Mar 4 01:09:31.552490 kubelet[2542]: I0304 01:09:31.551419 2542 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 4 01:09:31.552490 kubelet[2542]: I0304 01:09:31.551897 2542 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 4 01:09:31.552490 kubelet[2542]: I0304 01:09:31.551912 2542 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 4 01:09:31.552490 kubelet[2542]: I0304 01:09:31.552011 2542 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 4 01:09:31.552490 kubelet[2542]: I0304 01:09:31.552223 2542 reconciler.go:29] "Reconciler: start to sync state" Mar 4 01:09:31.557794 kubelet[2542]: E0304 01:09:31.557498 2542 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 4 01:09:31.559754 kubelet[2542]: I0304 01:09:31.558917 2542 factory.go:223] Registration of the systemd container factory successfully Mar 4 01:09:31.559754 kubelet[2542]: I0304 01:09:31.559027 2542 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 4 01:09:31.572234 kubelet[2542]: I0304 01:09:31.572163 2542 factory.go:223] Registration of the containerd container factory successfully Mar 4 01:09:31.586488 kubelet[2542]: I0304 01:09:31.586271 2542 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 4 01:09:31.626787 kubelet[2542]: I0304 01:09:31.626028 2542 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 4 01:09:31.626787 kubelet[2542]: I0304 01:09:31.626075 2542 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 4 01:09:31.626787 kubelet[2542]: I0304 01:09:31.626111 2542 kubelet.go:2428] "Starting kubelet main sync loop" Mar 4 01:09:31.626787 kubelet[2542]: E0304 01:09:31.626217 2542 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 01:09:31.678953 kubelet[2542]: I0304 01:09:31.678876 2542 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 4 01:09:31.679479 kubelet[2542]: I0304 01:09:31.679272 2542 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 4 01:09:31.680291 kubelet[2542]: I0304 01:09:31.679766 2542 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:09:31.680761 kubelet[2542]: I0304 01:09:31.680743 2542 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 4 01:09:31.680844 kubelet[2542]: I0304 01:09:31.680822 2542 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 4 01:09:31.680893 kubelet[2542]: I0304 01:09:31.680884 2542 policy_none.go:49] "None policy: Start" Mar 4 01:09:31.680949 kubelet[2542]: I0304 01:09:31.680939 2542 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 4 01:09:31.681251 kubelet[2542]: I0304 01:09:31.680985 2542 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 4 01:09:31.681251 kubelet[2542]: I0304 01:09:31.681098 2542 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 4 01:09:31.681251 kubelet[2542]: I0304 01:09:31.681107 2542 policy_none.go:47] "Start" Mar 4 01:09:31.690761 kubelet[2542]: E0304 01:09:31.690335 2542 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 01:09:31.691221 kubelet[2542]: I0304 01:09:31.690933 2542 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 4 01:09:31.691221 kubelet[2542]: I0304 01:09:31.691040 2542 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 01:09:31.691798 kubelet[2542]: I0304 01:09:31.691735 2542 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 4 01:09:31.702288 kubelet[2542]: E0304 01:09:31.700866 2542 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 4 01:09:31.749885 kubelet[2542]: I0304 01:09:31.749098 2542 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 4 01:09:31.766184 kubelet[2542]: I0304 01:09:31.751179 2542 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:09:31.766184 kubelet[2542]: I0304 01:09:31.758956 2542 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 4 01:09:31.844557 kubelet[2542]: E0304 01:09:31.837981 2542 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 4 01:09:31.844557 kubelet[2542]: E0304 01:09:31.838102 2542 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 4 01:09:31.844557 kubelet[2542]: I0304 01:09:31.839063 2542 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 01:09:31.866442 kubelet[2542]: I0304 01:09:31.861304 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:09:31.866442 kubelet[2542]: I0304 01:09:31.861628 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:09:31.866442 kubelet[2542]: I0304 01:09:31.861698 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 4 01:09:31.866442 kubelet[2542]: I0304 01:09:31.861723 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fdabcb4f9a54e808b93697b61073a033-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fdabcb4f9a54e808b93697b61073a033\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:09:31.866442 kubelet[2542]: I0304 01:09:31.861752 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fdabcb4f9a54e808b93697b61073a033-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fdabcb4f9a54e808b93697b61073a033\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:09:31.869047 kubelet[2542]: I0304 01:09:31.861776 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:09:31.869047 kubelet[2542]: I0304 01:09:31.861856 2542 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:09:31.869047 kubelet[2542]: I0304 01:09:31.861922 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fdabcb4f9a54e808b93697b61073a033-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fdabcb4f9a54e808b93697b61073a033\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:09:31.869047 kubelet[2542]: I0304 01:09:31.862200 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:09:31.880263 kubelet[2542]: I0304 01:09:31.880174 2542 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 4 01:09:31.881003 kubelet[2542]: I0304 01:09:31.880522 2542 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 4 01:09:32.136257 kubelet[2542]: E0304 01:09:32.135743 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:32.139147 kubelet[2542]: E0304 01:09:32.138818 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:32.139517 kubelet[2542]: E0304 01:09:32.139490 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:32.521842 kubelet[2542]: I0304 01:09:32.521072 2542 apiserver.go:52] "Watching apiserver" Mar 4 01:09:32.553178 kubelet[2542]: I0304 01:09:32.553049 2542 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 4 01:09:33.222746 kubelet[2542]: I0304 01:09:33.222032 2542 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:09:33.225480 kubelet[2542]: E0304 01:09:33.222974 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:33.226676 kubelet[2542]: I0304 01:09:33.226621 2542 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 4 01:09:33.262415 kubelet[2542]: E0304 01:09:33.262269 2542 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:09:33.262695 kubelet[2542]: E0304 01:09:33.262629 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:33.265440 kubelet[2542]: E0304 01:09:33.265340 2542 kubelet.go:3222] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 4 01:09:33.266640 kubelet[2542]: E0304 01:09:33.266511 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:33.282559 kubelet[2542]: I0304 01:09:33.282203 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.282104111 podStartE2EDuration="6.282104111s" podCreationTimestamp="2026-03-04 01:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:09:33.266054239 +0000 UTC m=+1.960106636" watchObservedRunningTime="2026-03-04 01:09:33.282104111 +0000 UTC m=+1.976156487" Mar 4 01:09:33.337854 kubelet[2542]: I0304 01:09:33.337734 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.337715657 podStartE2EDuration="2.337715657s" podCreationTimestamp="2026-03-04 01:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:09:33.285861593 +0000 UTC m=+1.979913990" watchObservedRunningTime="2026-03-04 01:09:33.337715657 +0000 UTC m=+2.031768064" Mar 4 01:09:33.573200 kubelet[2542]: I0304 01:09:33.572191 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.572063679 podStartE2EDuration="6.572063679s" podCreationTimestamp="2026-03-04 01:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:09:33.339887898 +0000 UTC m=+2.033940285" watchObservedRunningTime="2026-03-04 01:09:33.572063679 +0000 UTC m=+2.266116086" Mar 4 01:09:34.312128 kubelet[2542]: E0304 01:09:34.311198 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:34.349490 kubelet[2542]: E0304 01:09:34.318671 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:34.349490 kubelet[2542]: E0304 01:09:34.325748 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:35.162761 kubelet[2542]: I0304 01:09:35.162468 2542 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 4 01:09:35.164112 kubelet[2542]: I0304 01:09:35.163767 2542 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 4 01:09:35.164165 containerd[1462]: time="2026-03-04T01:09:35.163303520Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 4 01:09:35.266348 kubelet[2542]: E0304 01:09:35.265829 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:35.266348 kubelet[2542]: E0304 01:09:35.266095 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:36.071054 kubelet[2542]: I0304 01:09:36.070684 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q2dx\" (UniqueName: \"kubernetes.io/projected/961d1499-4104-4ea8-a02c-6b1a05c3c9b6-kube-api-access-2q2dx\") pod \"kube-proxy-5s8s2\" (UID: \"961d1499-4104-4ea8-a02c-6b1a05c3c9b6\") " pod="kube-system/kube-proxy-5s8s2" Mar 4 01:09:36.071054 kubelet[2542]: I0304 01:09:36.070746 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/961d1499-4104-4ea8-a02c-6b1a05c3c9b6-xtables-lock\") pod \"kube-proxy-5s8s2\" (UID: \"961d1499-4104-4ea8-a02c-6b1a05c3c9b6\") " pod="kube-system/kube-proxy-5s8s2" Mar 4 01:09:36.071054 kubelet[2542]: I0304 01:09:36.070863 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/961d1499-4104-4ea8-a02c-6b1a05c3c9b6-kube-proxy\") pod \"kube-proxy-5s8s2\" (UID: \"961d1499-4104-4ea8-a02c-6b1a05c3c9b6\") " pod="kube-system/kube-proxy-5s8s2" Mar 4 01:09:36.071054 kubelet[2542]: I0304 01:09:36.070891 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/961d1499-4104-4ea8-a02c-6b1a05c3c9b6-lib-modules\") pod \"kube-proxy-5s8s2\" (UID: \"961d1499-4104-4ea8-a02c-6b1a05c3c9b6\") " pod="kube-system/kube-proxy-5s8s2" Mar 4 01:09:36.071744 systemd[1]: Created slice kubepods-besteffort-pod961d1499_4104_4ea8_a02c_6b1a05c3c9b6.slice - libcontainer container kubepods-besteffort-pod961d1499_4104_4ea8_a02c_6b1a05c3c9b6.slice. Mar 4 01:09:36.270655 kubelet[2542]: E0304 01:09:36.270261 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:36.270655 kubelet[2542]: E0304 01:09:36.270347 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:36.396038 kubelet[2542]: E0304 01:09:36.395970 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:36.397471 containerd[1462]: time="2026-03-04T01:09:36.396993985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5s8s2,Uid:961d1499-4104-4ea8-a02c-6b1a05c3c9b6,Namespace:kube-system,Attempt:0,}" Mar 4 01:09:36.402772 systemd[1]: Created slice kubepods-besteffort-pod6ef42ca6_2ca8_4be0_9f69_7d588047270a.slice - libcontainer container kubepods-besteffort-pod6ef42ca6_2ca8_4be0_9f69_7d588047270a.slice. 
Mar 4 01:09:36.474943 kubelet[2542]: I0304 01:09:36.474845 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxxnf\" (UniqueName: \"kubernetes.io/projected/6ef42ca6-2ca8-4be0-9f69-7d588047270a-kube-api-access-jxxnf\") pod \"tigera-operator-5588576f44-bj8wb\" (UID: \"6ef42ca6-2ca8-4be0-9f69-7d588047270a\") " pod="tigera-operator/tigera-operator-5588576f44-bj8wb" Mar 4 01:09:36.475090 kubelet[2542]: I0304 01:09:36.474975 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6ef42ca6-2ca8-4be0-9f69-7d588047270a-var-lib-calico\") pod \"tigera-operator-5588576f44-bj8wb\" (UID: \"6ef42ca6-2ca8-4be0-9f69-7d588047270a\") " pod="tigera-operator/tigera-operator-5588576f44-bj8wb" Mar 4 01:09:36.508482 containerd[1462]: time="2026-03-04T01:09:36.508052836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:09:36.508482 containerd[1462]: time="2026-03-04T01:09:36.508203105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:09:36.508482 containerd[1462]: time="2026-03-04T01:09:36.508214617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:09:36.508482 containerd[1462]: time="2026-03-04T01:09:36.508312629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:09:36.739639 containerd[1462]: time="2026-03-04T01:09:36.732739405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-bj8wb,Uid:6ef42ca6-2ca8-4be0-9f69-7d588047270a,Namespace:tigera-operator,Attempt:0,}" Mar 4 01:09:37.114322 containerd[1462]: time="2026-03-04T01:09:37.092128346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:09:37.114322 containerd[1462]: time="2026-03-04T01:09:37.107611455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:09:37.114322 containerd[1462]: time="2026-03-04T01:09:37.107749442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:09:37.128700 containerd[1462]: time="2026-03-04T01:09:37.117137671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:09:37.167694 systemd[1]: Started cri-containerd-e9ac85fe89b27e92bd89910f80cbe50be763b3073a6b7bc2fcbed898058cafce.scope - libcontainer container e9ac85fe89b27e92bd89910f80cbe50be763b3073a6b7bc2fcbed898058cafce. Mar 4 01:09:37.232672 systemd[1]: Started cri-containerd-801306e5513f5acd46b7f59091de46ab1dfb90a490eb4ce3256f7f1bd0590b80.scope - libcontainer container 801306e5513f5acd46b7f59091de46ab1dfb90a490eb4ce3256f7f1bd0590b80. 
Mar 4 01:09:37.260694 containerd[1462]: time="2026-03-04T01:09:37.260590070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5s8s2,Uid:961d1499-4104-4ea8-a02c-6b1a05c3c9b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9ac85fe89b27e92bd89910f80cbe50be763b3073a6b7bc2fcbed898058cafce\"" Mar 4 01:09:37.262060 kubelet[2542]: E0304 01:09:37.261941 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:37.270105 containerd[1462]: time="2026-03-04T01:09:37.269967005Z" level=info msg="CreateContainer within sandbox \"e9ac85fe89b27e92bd89910f80cbe50be763b3073a6b7bc2fcbed898058cafce\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 4 01:09:37.477914 containerd[1462]: time="2026-03-04T01:09:37.451823648Z" level=info msg="CreateContainer within sandbox \"e9ac85fe89b27e92bd89910f80cbe50be763b3073a6b7bc2fcbed898058cafce\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9b9c7ebcd639a8c96b63b4d4d35621d705460976b154f7f8f6d704dc1dd5a4e8\"" Mar 4 01:09:37.544629 containerd[1462]: time="2026-03-04T01:09:37.544188722Z" level=info msg="StartContainer for \"9b9c7ebcd639a8c96b63b4d4d35621d705460976b154f7f8f6d704dc1dd5a4e8\"" Mar 4 01:09:37.582298 containerd[1462]: time="2026-03-04T01:09:37.581627887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-bj8wb,Uid:6ef42ca6-2ca8-4be0-9f69-7d588047270a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"801306e5513f5acd46b7f59091de46ab1dfb90a490eb4ce3256f7f1bd0590b80\"" Mar 4 01:09:37.614135 containerd[1462]: time="2026-03-04T01:09:37.613493479Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 4 01:09:37.671184 systemd[1]: Started cri-containerd-9b9c7ebcd639a8c96b63b4d4d35621d705460976b154f7f8f6d704dc1dd5a4e8.scope - libcontainer container 9b9c7ebcd639a8c96b63b4d4d35621d705460976b154f7f8f6d704dc1dd5a4e8. Mar 4 01:09:37.767067 containerd[1462]: time="2026-03-04T01:09:37.766875482Z" level=info msg="StartContainer for \"9b9c7ebcd639a8c96b63b4d4d35621d705460976b154f7f8f6d704dc1dd5a4e8\" returns successfully" Mar 4 01:09:38.279689 kubelet[2542]: E0304 01:09:38.279629 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:38.307957 kubelet[2542]: I0304 01:09:38.306292 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5s8s2" podStartSLOduration=2.306274317 podStartE2EDuration="2.306274317s" podCreationTimestamp="2026-03-04 01:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:09:38.305810359 +0000 UTC m=+6.999862735" watchObservedRunningTime="2026-03-04 01:09:38.306274317 +0000 UTC m=+7.000326694" Mar 4 01:09:38.409901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount998185409.mount: Deactivated successfully. 
Mar 4 01:09:39.305341 kubelet[2542]: E0304 01:09:39.290823 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:40.367827 kubelet[2542]: E0304 01:09:40.367188 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:40.608341 containerd[1462]: time="2026-03-04T01:09:40.608129321Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:40.609651 containerd[1462]: time="2026-03-04T01:09:40.609523055Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 4 01:09:40.611323 containerd[1462]: time="2026-03-04T01:09:40.611235464Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:40.616215 containerd[1462]: time="2026-03-04T01:09:40.616156788Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:40.617589 containerd[1462]: time="2026-03-04T01:09:40.617483582Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.00387073s" Mar 4 01:09:40.617660 containerd[1462]: time="2026-03-04T01:09:40.617599829Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 4 01:09:40.624444 containerd[1462]: time="2026-03-04T01:09:40.624194062Z" level=info msg="CreateContainer within sandbox \"801306e5513f5acd46b7f59091de46ab1dfb90a490eb4ce3256f7f1bd0590b80\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 4 01:09:40.645819 containerd[1462]: time="2026-03-04T01:09:40.645735599Z" level=info msg="CreateContainer within sandbox \"801306e5513f5acd46b7f59091de46ab1dfb90a490eb4ce3256f7f1bd0590b80\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"960c2c04de8fd162ab18bccc620ce1eee50495f5efae802b45e3129f922d1e9e\"" Mar 4 01:09:40.648140 containerd[1462]: time="2026-03-04T01:09:40.647995079Z" level=info msg="StartContainer for \"960c2c04de8fd162ab18bccc620ce1eee50495f5efae802b45e3129f922d1e9e\"" Mar 4 01:09:40.742643 systemd[1]: Started cri-containerd-960c2c04de8fd162ab18bccc620ce1eee50495f5efae802b45e3129f922d1e9e.scope - libcontainer container 960c2c04de8fd162ab18bccc620ce1eee50495f5efae802b45e3129f922d1e9e. 
Mar 4 01:09:40.811627 containerd[1462]: time="2026-03-04T01:09:40.811474389Z" level=info msg="StartContainer for \"960c2c04de8fd162ab18bccc620ce1eee50495f5efae802b45e3129f922d1e9e\" returns successfully" Mar 4 01:09:41.320469 kubelet[2542]: E0304 01:09:41.320242 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:47.244653 sudo[1635]: pam_unix(sudo:session): session closed for user root Mar 4 01:09:47.251907 sshd[1632]: pam_unix(sshd:session): session closed for user core Mar 4 01:09:47.262024 systemd[1]: sshd@6-10.0.0.73:22-10.0.0.1:55102.service: Deactivated successfully. Mar 4 01:09:47.273238 systemd[1]: session-7.scope: Deactivated successfully. Mar 4 01:09:47.278105 systemd[1]: session-7.scope: Consumed 14.487s CPU time, 161.4M memory peak, 0B memory swap peak. Mar 4 01:09:47.279769 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Mar 4 01:09:47.284453 systemd-logind[1444]: Removed session 7. Mar 4 01:09:50.047043 kubelet[2542]: I0304 01:09:50.046859 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-bj8wb" podStartSLOduration=11.040829212 podStartE2EDuration="14.046808762s" podCreationTimestamp="2026-03-04 01:09:36 +0000 UTC" firstStartedPulling="2026-03-04 01:09:37.61287569 +0000 UTC m=+6.306928067" lastFinishedPulling="2026-03-04 01:09:40.618855241 +0000 UTC m=+9.312907617" observedRunningTime="2026-03-04 01:09:41.343755573 +0000 UTC m=+10.037807960" watchObservedRunningTime="2026-03-04 01:09:50.046808762 +0000 UTC m=+18.740861139" Mar 4 01:09:50.170292 systemd[1]: Created slice kubepods-besteffort-pod9c0920aa_98d9_40d4_9371_c44cfc8d33d7.slice - libcontainer container kubepods-besteffort-pod9c0920aa_98d9_40d4_9371_c44cfc8d33d7.slice. Mar 4 01:09:50.182308 systemd[1]: Created slice kubepods-besteffort-podf15be8ad_e691_4f5c_b081_6e44323e88b3.slice - libcontainer container kubepods-besteffort-podf15be8ad_e691_4f5c_b081_6e44323e88b3.slice. 
Mar 4 01:09:50.229856 kubelet[2542]: E0304 01:09:50.229808 2542 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76qbn" podUID="75938000-508f-451c-bf35-9cc1d786b69d" Mar 4 01:09:50.241004 kubelet[2542]: I0304 01:09:50.240956 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9c0920aa-98d9-40d4-9371-c44cfc8d33d7-typha-certs\") pod \"calico-typha-84768548dc-lf2b5\" (UID: \"9c0920aa-98d9-40d4-9371-c44cfc8d33d7\") " pod="calico-system/calico-typha-84768548dc-lf2b5" Mar 4 01:09:50.241275 kubelet[2542]: I0304 01:09:50.241257 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/75938000-508f-451c-bf35-9cc1d786b69d-registration-dir\") pod \"csi-node-driver-76qbn\" (UID: \"75938000-508f-451c-bf35-9cc1d786b69d\") " pod="calico-system/csi-node-driver-76qbn" Mar 4 01:09:50.241686 kubelet[2542]: I0304 01:09:50.241469 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/f15be8ad-e691-4f5c-b081-6e44323e88b3-sys-fs\") pod \"calico-node-xb8p2\" (UID: \"f15be8ad-e691-4f5c-b081-6e44323e88b3\") " pod="calico-system/calico-node-xb8p2" Mar 4 01:09:50.241686 kubelet[2542]: I0304 01:09:50.241497 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbmph\" (UniqueName: \"kubernetes.io/projected/9c0920aa-98d9-40d4-9371-c44cfc8d33d7-kube-api-access-pbmph\") pod \"calico-typha-84768548dc-lf2b5\" (UID: \"9c0920aa-98d9-40d4-9371-c44cfc8d33d7\") " pod="calico-system/calico-typha-84768548dc-lf2b5" Mar 4 01:09:50.241686 kubelet[2542]: I0304 01:09:50.241555 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f15be8ad-e691-4f5c-b081-6e44323e88b3-node-certs\") pod \"calico-node-xb8p2\" (UID: \"f15be8ad-e691-4f5c-b081-6e44323e88b3\") " pod="calico-system/calico-node-xb8p2" Mar 4 01:09:50.241686 kubelet[2542]: I0304 01:09:50.241570 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f15be8ad-e691-4f5c-b081-6e44323e88b3-tigera-ca-bundle\") pod \"calico-node-xb8p2\" (UID: \"f15be8ad-e691-4f5c-b081-6e44323e88b3\") " pod="calico-system/calico-node-xb8p2" Mar 4 01:09:50.241686 kubelet[2542]: I0304 01:09:50.241583 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f15be8ad-e691-4f5c-b081-6e44323e88b3-var-run-calico\") pod \"calico-node-xb8p2\" (UID: \"f15be8ad-e691-4f5c-b081-6e44323e88b3\") " pod="calico-system/calico-node-xb8p2" Mar 4 01:09:50.241847 kubelet[2542]: I0304 01:09:50.241596 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f15be8ad-e691-4f5c-b081-6e44323e88b3-xtables-lock\") pod \"calico-node-xb8p2\" (UID: \"f15be8ad-e691-4f5c-b081-6e44323e88b3\") " pod="calico-system/calico-node-xb8p2" Mar 4 01:09:50.241847 kubelet[2542]: I0304 
01:09:50.241637 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f15be8ad-e691-4f5c-b081-6e44323e88b3-cni-net-dir\") pod \"calico-node-xb8p2\" (UID: \"f15be8ad-e691-4f5c-b081-6e44323e88b3\") " pod="calico-system/calico-node-xb8p2" Mar 4 01:09:50.241847 kubelet[2542]: I0304 01:09:50.241667 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f15be8ad-e691-4f5c-b081-6e44323e88b3-var-lib-calico\") pod \"calico-node-xb8p2\" (UID: \"f15be8ad-e691-4f5c-b081-6e44323e88b3\") " pod="calico-system/calico-node-xb8p2" Mar 4 01:09:50.241847 kubelet[2542]: I0304 01:09:50.241739 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx9v5\" (UniqueName: \"kubernetes.io/projected/f15be8ad-e691-4f5c-b081-6e44323e88b3-kube-api-access-mx9v5\") pod \"calico-node-xb8p2\" (UID: \"f15be8ad-e691-4f5c-b081-6e44323e88b3\") " pod="calico-system/calico-node-xb8p2" Mar 4 01:09:50.241847 kubelet[2542]: I0304 01:09:50.241820 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/75938000-508f-451c-bf35-9cc1d786b69d-socket-dir\") pod \"csi-node-driver-76qbn\" (UID: \"75938000-508f-451c-bf35-9cc1d786b69d\") " pod="calico-system/csi-node-driver-76qbn" Mar 4 01:09:50.241954 kubelet[2542]: I0304 01:09:50.241873 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/75938000-508f-451c-bf35-9cc1d786b69d-varrun\") pod \"csi-node-driver-76qbn\" (UID: \"75938000-508f-451c-bf35-9cc1d786b69d\") " pod="calico-system/csi-node-driver-76qbn" Mar 4 01:09:50.241954 kubelet[2542]: I0304 01:09:50.241904 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f15be8ad-e691-4f5c-b081-6e44323e88b3-cni-bin-dir\") pod \"calico-node-xb8p2\" (UID: \"f15be8ad-e691-4f5c-b081-6e44323e88b3\") " pod="calico-system/calico-node-xb8p2" Mar 4 01:09:50.241954 kubelet[2542]: I0304 01:09:50.241933 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bprzf\" (UniqueName: \"kubernetes.io/projected/75938000-508f-451c-bf35-9cc1d786b69d-kube-api-access-bprzf\") pod \"csi-node-driver-76qbn\" (UID: \"75938000-508f-451c-bf35-9cc1d786b69d\") " pod="calico-system/csi-node-driver-76qbn" Mar 4 01:09:50.242018 kubelet[2542]: I0304 01:09:50.241957 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f15be8ad-e691-4f5c-b081-6e44323e88b3-lib-modules\") pod \"calico-node-xb8p2\" (UID: \"f15be8ad-e691-4f5c-b081-6e44323e88b3\") " pod="calico-system/calico-node-xb8p2" Mar 4 01:09:50.242018 kubelet[2542]: I0304 01:09:50.241983 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f15be8ad-e691-4f5c-b081-6e44323e88b3-policysync\") pod \"calico-node-xb8p2\" (UID: \"f15be8ad-e691-4f5c-b081-6e44323e88b3\") " pod="calico-system/calico-node-xb8p2" Mar 4 01:09:50.242063 kubelet[2542]: I0304 01:09:50.242031 2542 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c0920aa-98d9-40d4-9371-c44cfc8d33d7-tigera-ca-bundle\") pod \"calico-typha-84768548dc-lf2b5\" (UID: \"9c0920aa-98d9-40d4-9371-c44cfc8d33d7\") " pod="calico-system/calico-typha-84768548dc-lf2b5" Mar 4 01:09:50.242063 kubelet[2542]: I0304 01:09:50.242057 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/f15be8ad-e691-4f5c-b081-6e44323e88b3-bpffs\") pod \"calico-node-xb8p2\" (UID: \"f15be8ad-e691-4f5c-b081-6e44323e88b3\") " pod="calico-system/calico-node-xb8p2" Mar 4 01:09:50.244574 kubelet[2542]: I0304 01:09:50.242081 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f15be8ad-e691-4f5c-b081-6e44323e88b3-flexvol-driver-host\") pod \"calico-node-xb8p2\" (UID: \"f15be8ad-e691-4f5c-b081-6e44323e88b3\") " pod="calico-system/calico-node-xb8p2" Mar 4 01:09:50.245222 kubelet[2542]: I0304 01:09:50.244627 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/f15be8ad-e691-4f5c-b081-6e44323e88b3-nodeproc\") pod \"calico-node-xb8p2\" (UID: \"f15be8ad-e691-4f5c-b081-6e44323e88b3\") " pod="calico-system/calico-node-xb8p2" Mar 4 01:09:50.245222 kubelet[2542]: I0304 01:09:50.244729 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75938000-508f-451c-bf35-9cc1d786b69d-kubelet-dir\") pod \"csi-node-driver-76qbn\" (UID: \"75938000-508f-451c-bf35-9cc1d786b69d\") " pod="calico-system/csi-node-driver-76qbn" Mar 4 01:09:50.245222 kubelet[2542]: I0304 01:09:50.244754 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f15be8ad-e691-4f5c-b081-6e44323e88b3-cni-log-dir\") pod \"calico-node-xb8p2\" (UID: \"f15be8ad-e691-4f5c-b081-6e44323e88b3\") " pod="calico-system/calico-node-xb8p2" Mar 4 01:09:50.358546 kubelet[2542]: E0304 01:09:50.358115 2542 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:09:50.358546 kubelet[2542]: W0304 01:09:50.358165 2542 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:09:50.358546 kubelet[2542]: E0304 01:09:50.358213 2542 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:09:50.373320 kubelet[2542]: E0304 01:09:50.373255 2542 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:09:50.373665 kubelet[2542]: W0304 01:09:50.373533 2542 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:09:50.373786 kubelet[2542]: E0304 01:09:50.373750 2542 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:09:50.374771 kubelet[2542]: E0304 01:09:50.374583 2542 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:09:50.374771 kubelet[2542]: W0304 01:09:50.374598 2542 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:09:50.374771 kubelet[2542]: E0304 01:09:50.374610 2542 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:09:50.375402 kubelet[2542]: E0304 01:09:50.375314 2542 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:09:50.375454 kubelet[2542]: W0304 01:09:50.375433 2542 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:09:50.375483 kubelet[2542]: E0304 01:09:50.375457 2542 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:09:50.377183 kubelet[2542]: E0304 01:09:50.377036 2542 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:09:50.377183 kubelet[2542]: W0304 01:09:50.377059 2542 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:09:50.377706 kubelet[2542]: E0304 01:09:50.377311 2542 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:09:50.378447 kubelet[2542]: E0304 01:09:50.378316 2542 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:09:50.378447 kubelet[2542]: W0304 01:09:50.378439 2542 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:09:50.378566 kubelet[2542]: E0304 01:09:50.378462 2542 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:09:50.486262 kubelet[2542]: E0304 01:09:50.485907 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:50.487661 containerd[1462]: time="2026-03-04T01:09:50.487549532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84768548dc-lf2b5,Uid:9c0920aa-98d9-40d4-9371-c44cfc8d33d7,Namespace:calico-system,Attempt:0,}" Mar 4 01:09:50.498255 containerd[1462]: time="2026-03-04T01:09:50.498006161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xb8p2,Uid:f15be8ad-e691-4f5c-b081-6e44323e88b3,Namespace:calico-system,Attempt:0,}" Mar 4 01:09:50.542044 containerd[1462]: time="2026-03-04T01:09:50.541107526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:09:50.542044 containerd[1462]: time="2026-03-04T01:09:50.541440707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:09:50.542044 containerd[1462]: time="2026-03-04T01:09:50.541477085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:09:50.542044 containerd[1462]: time="2026-03-04T01:09:50.541600795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:09:50.558274 containerd[1462]: time="2026-03-04T01:09:50.558092412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:09:50.559271 containerd[1462]: time="2026-03-04T01:09:50.559209815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:09:50.561560 containerd[1462]: time="2026-03-04T01:09:50.561405525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:09:50.563178 containerd[1462]: time="2026-03-04T01:09:50.561563790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:09:50.576804 systemd[1]: Started cri-containerd-5f344cdaab3eaeb881f1d615af6da31d5e3c31833c17d1071fd7733ac33e56e2.scope - libcontainer container 5f344cdaab3eaeb881f1d615af6da31d5e3c31833c17d1071fd7733ac33e56e2. Mar 4 01:09:50.597894 systemd[1]: Started cri-containerd-f9bff66b72aec5d21630aa5405177e4163b166ea1cca37c9b5ef03e8bd137c94.scope - libcontainer container f9bff66b72aec5d21630aa5405177e4163b166ea1cca37c9b5ef03e8bd137c94. 
Mar 4 01:09:50.647550 containerd[1462]: time="2026-03-04T01:09:50.647457853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xb8p2,Uid:f15be8ad-e691-4f5c-b081-6e44323e88b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"f9bff66b72aec5d21630aa5405177e4163b166ea1cca37c9b5ef03e8bd137c94\"" Mar 4 01:09:50.653926 containerd[1462]: time="2026-03-04T01:09:50.653051136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 4 01:09:50.656689 containerd[1462]: time="2026-03-04T01:09:50.656611613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84768548dc-lf2b5,Uid:9c0920aa-98d9-40d4-9371-c44cfc8d33d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f344cdaab3eaeb881f1d615af6da31d5e3c31833c17d1071fd7733ac33e56e2\"" Mar 4 01:09:50.657834 kubelet[2542]: E0304 01:09:50.657772 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:51.477184 containerd[1462]: time="2026-03-04T01:09:51.475116691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:51.485235 containerd[1462]: time="2026-03-04T01:09:51.484011835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 4 01:09:51.487121 containerd[1462]: time="2026-03-04T01:09:51.487012860Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:51.493997 containerd[1462]: time="2026-03-04T01:09:51.493765584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:51.499131 containerd[1462]: time="2026-03-04T01:09:51.499021600Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 845.93107ms" Mar 4 01:09:51.499131 containerd[1462]: time="2026-03-04T01:09:51.499105867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 4 01:09:51.508037 containerd[1462]: time="2026-03-04T01:09:51.507996311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 4 01:09:51.515638 containerd[1462]: time="2026-03-04T01:09:51.515468143Z" level=info msg="CreateContainer within sandbox \"f9bff66b72aec5d21630aa5405177e4163b166ea1cca37c9b5ef03e8bd137c94\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 4 01:09:51.546152 containerd[1462]: time="2026-03-04T01:09:51.546066654Z" level=info msg="CreateContainer within sandbox \"f9bff66b72aec5d21630aa5405177e4163b166ea1cca37c9b5ef03e8bd137c94\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"63de804c39122b0053f4e8c5d339cf25dc10dbf8d26a3a2bbef6ff9ace5fe0b3\"" Mar 4 01:09:51.547818 
containerd[1462]: time="2026-03-04T01:09:51.547655113Z" level=info msg="StartContainer for \"63de804c39122b0053f4e8c5d339cf25dc10dbf8d26a3a2bbef6ff9ace5fe0b3\"" Mar 4 01:09:51.609911 systemd[1]: Started cri-containerd-63de804c39122b0053f4e8c5d339cf25dc10dbf8d26a3a2bbef6ff9ace5fe0b3.scope - libcontainer container 63de804c39122b0053f4e8c5d339cf25dc10dbf8d26a3a2bbef6ff9ace5fe0b3. Mar 4 01:09:51.680280 kubelet[2542]: E0304 01:09:51.679837 2542 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76qbn" podUID="75938000-508f-451c-bf35-9cc1d786b69d" Mar 4 01:09:51.833271 containerd[1462]: time="2026-03-04T01:09:51.832901844Z" level=info msg="StartContainer for \"63de804c39122b0053f4e8c5d339cf25dc10dbf8d26a3a2bbef6ff9ace5fe0b3\" returns successfully" Mar 4 01:09:51.856768 systemd[1]: cri-containerd-63de804c39122b0053f4e8c5d339cf25dc10dbf8d26a3a2bbef6ff9ace5fe0b3.scope: Deactivated successfully. Mar 4 01:09:51.968712 containerd[1462]: time="2026-03-04T01:09:51.968029214Z" level=info msg="shim disconnected" id=63de804c39122b0053f4e8c5d339cf25dc10dbf8d26a3a2bbef6ff9ace5fe0b3 namespace=k8s.io Mar 4 01:09:51.968712 containerd[1462]: time="2026-03-04T01:09:51.968234236Z" level=warning msg="cleaning up after shim disconnected" id=63de804c39122b0053f4e8c5d339cf25dc10dbf8d26a3a2bbef6ff9ace5fe0b3 namespace=k8s.io Mar 4 01:09:51.968712 containerd[1462]: time="2026-03-04T01:09:51.968251758Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:09:52.371443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63de804c39122b0053f4e8c5d339cf25dc10dbf8d26a3a2bbef6ff9ace5fe0b3-rootfs.mount: Deactivated successfully. 
Mar 4 01:09:53.634929 kubelet[2542]: E0304 01:09:53.634268 2542 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76qbn" podUID="75938000-508f-451c-bf35-9cc1d786b69d" Mar 4 01:09:53.990321 containerd[1462]: time="2026-03-04T01:09:53.989183470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:53.991827 containerd[1462]: time="2026-03-04T01:09:53.991741459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 4 01:09:53.993895 containerd[1462]: time="2026-03-04T01:09:53.993799836Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:53.996336 containerd[1462]: time="2026-03-04T01:09:53.996268914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:09:53.997009 containerd[1462]: time="2026-03-04T01:09:53.996942873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.488907036s" Mar 4 01:09:53.997009 containerd[1462]: time="2026-03-04T01:09:53.996997915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 4 01:09:53.998450 containerd[1462]: time="2026-03-04T01:09:53.998328788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 4 01:09:54.014205 containerd[1462]: time="2026-03-04T01:09:54.014135140Z" level=info msg="CreateContainer within sandbox \"5f344cdaab3eaeb881f1d615af6da31d5e3c31833c17d1071fd7733ac33e56e2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 4 01:09:54.037098 containerd[1462]: time="2026-03-04T01:09:54.037000665Z" level=info msg="CreateContainer within sandbox \"5f344cdaab3eaeb881f1d615af6da31d5e3c31833c17d1071fd7733ac33e56e2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"69dffdb6c3c94494aba5e81ab60f0e71979765a05cbb5efac0e7c242b2e215c0\"" Mar 4 01:09:54.038376 containerd[1462]: time="2026-03-04T01:09:54.038250275Z" level=info msg="StartContainer for \"69dffdb6c3c94494aba5e81ab60f0e71979765a05cbb5efac0e7c242b2e215c0\"" Mar 4 01:09:54.090692 systemd[1]: Started cri-containerd-69dffdb6c3c94494aba5e81ab60f0e71979765a05cbb5efac0e7c242b2e215c0.scope - libcontainer container 69dffdb6c3c94494aba5e81ab60f0e71979765a05cbb5efac0e7c242b2e215c0. 
Mar 4 01:09:54.164347 containerd[1462]: time="2026-03-04T01:09:54.164248285Z" level=info msg="StartContainer for \"69dffdb6c3c94494aba5e81ab60f0e71979765a05cbb5efac0e7c242b2e215c0\" returns successfully" Mar 4 01:09:54.415646 kubelet[2542]: E0304 01:09:54.415436 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:55.634476 kubelet[2542]: E0304 01:09:55.634034 2542 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76qbn" podUID="75938000-508f-451c-bf35-9cc1d786b69d" Mar 4 01:09:55.665097 kubelet[2542]: I0304 01:09:55.641537 2542 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:09:55.667618 kubelet[2542]: E0304 01:09:55.667222 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:09:57.627620 kubelet[2542]: E0304 01:09:57.627334 2542 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76qbn" podUID="75938000-508f-451c-bf35-9cc1d786b69d" Mar 4 01:09:59.628458 kubelet[2542]: E0304 01:09:59.627021 2542 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76qbn" podUID="75938000-508f-451c-bf35-9cc1d786b69d" Mar 4 01:09:59.984560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3728093031.mount: Deactivated successfully. 
Mar 4 01:10:00.296140 containerd[1462]: time="2026-03-04T01:10:00.295896951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:00.297991 containerd[1462]: time="2026-03-04T01:10:00.297915762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 4 01:10:00.299594 containerd[1462]: time="2026-03-04T01:10:00.299545441Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:00.306182 containerd[1462]: time="2026-03-04T01:10:00.306099411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:00.312825 containerd[1462]: time="2026-03-04T01:10:00.312712581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 6.314207036s" Mar 4 01:10:00.312825 containerd[1462]: time="2026-03-04T01:10:00.312782422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 4 01:10:00.320010 containerd[1462]: time="2026-03-04T01:10:00.319978736Z" level=info msg="CreateContainer within sandbox \"f9bff66b72aec5d21630aa5405177e4163b166ea1cca37c9b5ef03e8bd137c94\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 4 01:10:00.370096 containerd[1462]: time="2026-03-04T01:10:00.369945531Z" level=info msg="CreateContainer within sandbox \"f9bff66b72aec5d21630aa5405177e4163b166ea1cca37c9b5ef03e8bd137c94\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"bc70204960c67c63ae4169461f5b179bee2da6b5e407af9dd2cb5cbc10400d42\"" Mar 4 01:10:00.371067 containerd[1462]: time="2026-03-04T01:10:00.370964381Z" level=info msg="StartContainer for \"bc70204960c67c63ae4169461f5b179bee2da6b5e407af9dd2cb5cbc10400d42\"" Mar 4 01:10:00.450659 systemd[1]: Started cri-containerd-bc70204960c67c63ae4169461f5b179bee2da6b5e407af9dd2cb5cbc10400d42.scope - libcontainer container bc70204960c67c63ae4169461f5b179bee2da6b5e407af9dd2cb5cbc10400d42. Mar 4 01:10:00.496013 containerd[1462]: time="2026-03-04T01:10:00.495950189Z" level=info msg="StartContainer for \"bc70204960c67c63ae4169461f5b179bee2da6b5e407af9dd2cb5cbc10400d42\" returns successfully" Mar 4 01:10:00.572680 systemd[1]: cri-containerd-bc70204960c67c63ae4169461f5b179bee2da6b5e407af9dd2cb5cbc10400d42.scope: Deactivated successfully. 
Mar 4 01:10:00.637165 containerd[1462]: time="2026-03-04T01:10:00.637070273Z" level=info msg="shim disconnected" id=bc70204960c67c63ae4169461f5b179bee2da6b5e407af9dd2cb5cbc10400d42 namespace=k8s.io Mar 4 01:10:00.637165 containerd[1462]: time="2026-03-04T01:10:00.637121288Z" level=warning msg="cleaning up after shim disconnected" id=bc70204960c67c63ae4169461f5b179bee2da6b5e407af9dd2cb5cbc10400d42 namespace=k8s.io Mar 4 01:10:00.637165 containerd[1462]: time="2026-03-04T01:10:00.637130816Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:10:00.672222 kubelet[2542]: I0304 01:10:00.672028 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84768548dc-lf2b5" podStartSLOduration=7.332439183 podStartE2EDuration="10.672014162s" podCreationTimestamp="2026-03-04 01:09:50 +0000 UTC" firstStartedPulling="2026-03-04 01:09:50.658463373 +0000 UTC m=+19.352515750" lastFinishedPulling="2026-03-04 01:09:53.998038352 +0000 UTC m=+22.692090729" observedRunningTime="2026-03-04 01:09:54.443441583 +0000 UTC m=+23.137493970" watchObservedRunningTime="2026-03-04 01:10:00.672014162 +0000 UTC m=+29.366066540" Mar 4 01:10:00.984979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc70204960c67c63ae4169461f5b179bee2da6b5e407af9dd2cb5cbc10400d42-rootfs.mount: Deactivated successfully. Mar 4 01:10:01.627343 kubelet[2542]: E0304 01:10:01.627229 2542 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76qbn" podUID="75938000-508f-451c-bf35-9cc1d786b69d" Mar 4 01:10:01.665241 containerd[1462]: time="2026-03-04T01:10:01.665156999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 4 01:10:03.406755 containerd[1462]: time="2026-03-04T01:10:03.406674840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:03.408154 containerd[1462]: time="2026-03-04T01:10:03.408107443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 4 01:10:03.410007 containerd[1462]: time="2026-03-04T01:10:03.409898372Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:03.412869 containerd[1462]: time="2026-03-04T01:10:03.412815871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:03.413612 containerd[1462]: time="2026-03-04T01:10:03.413542326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.74830151s" Mar 4 01:10:03.413612 containerd[1462]: time="2026-03-04T01:10:03.413599353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 4 01:10:03.419442 containerd[1462]: 
time="2026-03-04T01:10:03.419317596Z" level=info msg="CreateContainer within sandbox \"f9bff66b72aec5d21630aa5405177e4163b166ea1cca37c9b5ef03e8bd137c94\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 4 01:10:03.442732 containerd[1462]: time="2026-03-04T01:10:03.442638572Z" level=info msg="CreateContainer within sandbox \"f9bff66b72aec5d21630aa5405177e4163b166ea1cca37c9b5ef03e8bd137c94\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1719376068988c3cd32d4b6bea0c1ff00614cbef80ef2573d5d1e04cfb516b94\"" Mar 4 01:10:03.446469 containerd[1462]: time="2026-03-04T01:10:03.446348726Z" level=info msg="StartContainer for \"1719376068988c3cd32d4b6bea0c1ff00614cbef80ef2573d5d1e04cfb516b94\"" Mar 4 01:10:03.504835 systemd[1]: Started cri-containerd-1719376068988c3cd32d4b6bea0c1ff00614cbef80ef2573d5d1e04cfb516b94.scope - libcontainer container 1719376068988c3cd32d4b6bea0c1ff00614cbef80ef2573d5d1e04cfb516b94. Mar 4 01:10:03.549651 containerd[1462]: time="2026-03-04T01:10:03.549461285Z" level=info msg="StartContainer for \"1719376068988c3cd32d4b6bea0c1ff00614cbef80ef2573d5d1e04cfb516b94\" returns successfully" Mar 4 01:10:03.632662 kubelet[2542]: E0304 01:10:03.632557 2542 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76qbn" podUID="75938000-508f-451c-bf35-9cc1d786b69d" Mar 4 01:10:04.256849 systemd[1]: cri-containerd-1719376068988c3cd32d4b6bea0c1ff00614cbef80ef2573d5d1e04cfb516b94.scope: Deactivated successfully. Mar 4 01:10:04.302975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1719376068988c3cd32d4b6bea0c1ff00614cbef80ef2573d5d1e04cfb516b94-rootfs.mount: Deactivated successfully. Mar 4 01:10:04.324953 kubelet[2542]: I0304 01:10:04.324900 2542 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 4 01:10:04.344262 containerd[1462]: time="2026-03-04T01:10:04.344180351Z" level=info msg="shim disconnected" id=1719376068988c3cd32d4b6bea0c1ff00614cbef80ef2573d5d1e04cfb516b94 namespace=k8s.io Mar 4 01:10:04.344262 containerd[1462]: time="2026-03-04T01:10:04.344253857Z" level=warning msg="cleaning up after shim disconnected" id=1719376068988c3cd32d4b6bea0c1ff00614cbef80ef2573d5d1e04cfb516b94 namespace=k8s.io Mar 4 01:10:04.344262 containerd[1462]: time="2026-03-04T01:10:04.344264508Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:10:04.395714 systemd[1]: Created slice kubepods-burstable-podcca7a0a8_cc4f_4da2_beb7_9f56a3aae463.slice - libcontainer container kubepods-burstable-podcca7a0a8_cc4f_4da2_beb7_9f56a3aae463.slice. Mar 4 01:10:04.408913 systemd[1]: Created slice kubepods-burstable-pod92b3426b_65a8_45ba_9289_43631575f549.slice - libcontainer container kubepods-burstable-pod92b3426b_65a8_45ba_9289_43631575f549.slice. Mar 4 01:10:04.418018 systemd[1]: Created slice kubepods-besteffort-podfbc800a6_d75a_4cc1_8f1f_76421c8e840a.slice - libcontainer container kubepods-besteffort-podfbc800a6_d75a_4cc1_8f1f_76421c8e840a.slice. Mar 4 01:10:04.429522 systemd[1]: Created slice kubepods-besteffort-pod449200de_32ce_4f0d_8102_55cd4a726350.slice - libcontainer container kubepods-besteffort-pod449200de_32ce_4f0d_8102_55cd4a726350.slice. 
Mar 4 01:10:04.437131 systemd[1]: Created slice kubepods-besteffort-pod93ede0dd_a20d_4275_9bed_8f0735634773.slice - libcontainer container kubepods-besteffort-pod93ede0dd_a20d_4275_9bed_8f0735634773.slice. Mar 4 01:10:04.440159 kubelet[2542]: I0304 01:10:04.439767 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/210ae5c1-a8ed-43d0-af95-d0b548ed6ccf-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-pbgz5\" (UID: \"210ae5c1-a8ed-43d0-af95-d0b548ed6ccf\") " pod="calico-system/goldmane-cccfbd5cf-pbgz5" Mar 4 01:10:04.440159 kubelet[2542]: I0304 01:10:04.439804 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/449200de-32ce-4f0d-8102-55cd4a726350-tigera-ca-bundle\") pod \"calico-kube-controllers-5d656676db-z9tks\" (UID: \"449200de-32ce-4f0d-8102-55cd4a726350\") " pod="calico-system/calico-kube-controllers-5d656676db-z9tks" Mar 4 01:10:04.440159 kubelet[2542]: I0304 01:10:04.439821 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hbfr\" (UniqueName: \"kubernetes.io/projected/03774ad1-dd13-4278-9a53-7bcbb871098c-kube-api-access-2hbfr\") pod \"whisker-67c44bcbf7-rr89v\" (UID: \"03774ad1-dd13-4278-9a53-7bcbb871098c\") " pod="calico-system/whisker-67c44bcbf7-rr89v" Mar 4 01:10:04.440159 kubelet[2542]: I0304 01:10:04.439837 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210ae5c1-a8ed-43d0-af95-d0b548ed6ccf-config\") pod \"goldmane-cccfbd5cf-pbgz5\" (UID: \"210ae5c1-a8ed-43d0-af95-d0b548ed6ccf\") " pod="calico-system/goldmane-cccfbd5cf-pbgz5" Mar 4 01:10:04.440159 kubelet[2542]: I0304 01:10:04.439852 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/93ede0dd-a20d-4275-9bed-8f0735634773-calico-apiserver-certs\") pod \"calico-apiserver-55f64764bb-9wz8h\" (UID: \"93ede0dd-a20d-4275-9bed-8f0735634773\") " pod="calico-system/calico-apiserver-55f64764bb-9wz8h" Mar 4 01:10:04.440454 kubelet[2542]: I0304 01:10:04.439869 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/210ae5c1-a8ed-43d0-af95-d0b548ed6ccf-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-pbgz5\" (UID: \"210ae5c1-a8ed-43d0-af95-d0b548ed6ccf\") " pod="calico-system/goldmane-cccfbd5cf-pbgz5" Mar 4 01:10:04.440454 kubelet[2542]: I0304 01:10:04.439912 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59ptk\" (UniqueName: \"kubernetes.io/projected/210ae5c1-a8ed-43d0-af95-d0b548ed6ccf-kube-api-access-59ptk\") pod \"goldmane-cccfbd5cf-pbgz5\" (UID: \"210ae5c1-a8ed-43d0-af95-d0b548ed6ccf\") " pod="calico-system/goldmane-cccfbd5cf-pbgz5" Mar 4 01:10:04.440454 kubelet[2542]: I0304 01:10:04.439947 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sncwd\" (UniqueName: \"kubernetes.io/projected/449200de-32ce-4f0d-8102-55cd4a726350-kube-api-access-sncwd\") pod \"calico-kube-controllers-5d656676db-z9tks\" (UID: \"449200de-32ce-4f0d-8102-55cd4a726350\") " pod="calico-system/calico-kube-controllers-5d656676db-z9tks" Mar 4 
01:10:04.440454 kubelet[2542]: I0304 01:10:04.439964 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fbc800a6-d75a-4cc1-8f1f-76421c8e840a-calico-apiserver-certs\") pod \"calico-apiserver-55f64764bb-8ltws\" (UID: \"fbc800a6-d75a-4cc1-8f1f-76421c8e840a\") " pod="calico-system/calico-apiserver-55f64764bb-8ltws" Mar 4 01:10:04.440454 kubelet[2542]: I0304 01:10:04.439979 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7bwq\" (UniqueName: \"kubernetes.io/projected/fbc800a6-d75a-4cc1-8f1f-76421c8e840a-kube-api-access-z7bwq\") pod \"calico-apiserver-55f64764bb-8ltws\" (UID: \"fbc800a6-d75a-4cc1-8f1f-76421c8e840a\") " pod="calico-system/calico-apiserver-55f64764bb-8ltws" Mar 4 01:10:04.440619 kubelet[2542]: I0304 01:10:04.439994 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92b3426b-65a8-45ba-9289-43631575f549-config-volume\") pod \"coredns-66bc5c9577-98m57\" (UID: \"92b3426b-65a8-45ba-9289-43631575f549\") " pod="kube-system/coredns-66bc5c9577-98m57" Mar 4 01:10:04.440619 kubelet[2542]: I0304 01:10:04.440010 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fd5v\" (UniqueName: \"kubernetes.io/projected/92b3426b-65a8-45ba-9289-43631575f549-kube-api-access-5fd5v\") pod \"coredns-66bc5c9577-98m57\" (UID: \"92b3426b-65a8-45ba-9289-43631575f549\") " pod="kube-system/coredns-66bc5c9577-98m57" Mar 4 01:10:04.440619 kubelet[2542]: I0304 01:10:04.440040 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cca7a0a8-cc4f-4da2-beb7-9f56a3aae463-config-volume\") pod \"coredns-66bc5c9577-kkp6j\" (UID: \"cca7a0a8-cc4f-4da2-beb7-9f56a3aae463\") " pod="kube-system/coredns-66bc5c9577-kkp6j" Mar 4 01:10:04.440619 kubelet[2542]: I0304 01:10:04.440052 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/03774ad1-dd13-4278-9a53-7bcbb871098c-nginx-config\") pod \"whisker-67c44bcbf7-rr89v\" (UID: \"03774ad1-dd13-4278-9a53-7bcbb871098c\") " pod="calico-system/whisker-67c44bcbf7-rr89v" Mar 4 01:10:04.440619 kubelet[2542]: I0304 01:10:04.440092 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03774ad1-dd13-4278-9a53-7bcbb871098c-whisker-ca-bundle\") pod \"whisker-67c44bcbf7-rr89v\" (UID: \"03774ad1-dd13-4278-9a53-7bcbb871098c\") " pod="calico-system/whisker-67c44bcbf7-rr89v" Mar 4 01:10:04.440732 kubelet[2542]: I0304 01:10:04.440118 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqwp9\" (UniqueName: \"kubernetes.io/projected/93ede0dd-a20d-4275-9bed-8f0735634773-kube-api-access-tqwp9\") pod \"calico-apiserver-55f64764bb-9wz8h\" (UID: \"93ede0dd-a20d-4275-9bed-8f0735634773\") " pod="calico-system/calico-apiserver-55f64764bb-9wz8h" Mar 4 01:10:04.440732 kubelet[2542]: I0304 01:10:04.440136 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pzn8\" (UniqueName: 
\"kubernetes.io/projected/cca7a0a8-cc4f-4da2-beb7-9f56a3aae463-kube-api-access-9pzn8\") pod \"coredns-66bc5c9577-kkp6j\" (UID: \"cca7a0a8-cc4f-4da2-beb7-9f56a3aae463\") " pod="kube-system/coredns-66bc5c9577-kkp6j" Mar 4 01:10:04.440732 kubelet[2542]: I0304 01:10:04.440150 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/03774ad1-dd13-4278-9a53-7bcbb871098c-whisker-backend-key-pair\") pod \"whisker-67c44bcbf7-rr89v\" (UID: \"03774ad1-dd13-4278-9a53-7bcbb871098c\") " pod="calico-system/whisker-67c44bcbf7-rr89v" Mar 4 01:10:04.447128 systemd[1]: Created slice kubepods-besteffort-pod03774ad1_dd13_4278_9a53_7bcbb871098c.slice - libcontainer container kubepods-besteffort-pod03774ad1_dd13_4278_9a53_7bcbb871098c.slice. Mar 4 01:10:04.453264 systemd[1]: Created slice kubepods-besteffort-pod210ae5c1_a8ed_43d0_af95_d0b548ed6ccf.slice - libcontainer container kubepods-besteffort-pod210ae5c1_a8ed_43d0_af95_d0b548ed6ccf.slice. Mar 4 01:10:04.709091 kubelet[2542]: E0304 01:10:04.708945 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:04.709797 containerd[1462]: time="2026-03-04T01:10:04.709697579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kkp6j,Uid:cca7a0a8-cc4f-4da2-beb7-9f56a3aae463,Namespace:kube-system,Attempt:0,}" Mar 4 01:10:04.716896 containerd[1462]: time="2026-03-04T01:10:04.716700518Z" level=info msg="CreateContainer within sandbox \"f9bff66b72aec5d21630aa5405177e4163b166ea1cca37c9b5ef03e8bd137c94\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 4 01:10:04.719789 kubelet[2542]: E0304 01:10:04.719715 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:04.723466 containerd[1462]: time="2026-03-04T01:10:04.723423673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-98m57,Uid:92b3426b-65a8-45ba-9289-43631575f549,Namespace:kube-system,Attempt:0,}" Mar 4 01:10:04.728082 containerd[1462]: time="2026-03-04T01:10:04.727966594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f64764bb-8ltws,Uid:fbc800a6-d75a-4cc1-8f1f-76421c8e840a,Namespace:calico-system,Attempt:0,}" Mar 4 01:10:04.736928 containerd[1462]: time="2026-03-04T01:10:04.736792395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d656676db-z9tks,Uid:449200de-32ce-4f0d-8102-55cd4a726350,Namespace:calico-system,Attempt:0,}" Mar 4 01:10:04.743446 containerd[1462]: time="2026-03-04T01:10:04.742935352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f64764bb-9wz8h,Uid:93ede0dd-a20d-4275-9bed-8f0735634773,Namespace:calico-system,Attempt:0,}" Mar 4 01:10:04.755245 containerd[1462]: time="2026-03-04T01:10:04.755116982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67c44bcbf7-rr89v,Uid:03774ad1-dd13-4278-9a53-7bcbb871098c,Namespace:calico-system,Attempt:0,}" Mar 4 01:10:04.758632 containerd[1462]: time="2026-03-04T01:10:04.758471998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-pbgz5,Uid:210ae5c1-a8ed-43d0-af95-d0b548ed6ccf,Namespace:calico-system,Attempt:0,}" Mar 4 01:10:04.802684 containerd[1462]: 
time="2026-03-04T01:10:04.802080799Z" level=info msg="CreateContainer within sandbox \"f9bff66b72aec5d21630aa5405177e4163b166ea1cca37c9b5ef03e8bd137c94\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"27c94888c9265432f932ad0b324af79e3e8cd4d2e35bfd4cb60eb429684a00f3\"" Mar 4 01:10:04.804873 containerd[1462]: time="2026-03-04T01:10:04.804088778Z" level=info msg="StartContainer for \"27c94888c9265432f932ad0b324af79e3e8cd4d2e35bfd4cb60eb429684a00f3\"" Mar 4 01:10:04.907183 systemd[1]: Started cri-containerd-27c94888c9265432f932ad0b324af79e3e8cd4d2e35bfd4cb60eb429684a00f3.scope - libcontainer container 27c94888c9265432f932ad0b324af79e3e8cd4d2e35bfd4cb60eb429684a00f3. Mar 4 01:10:05.005800 containerd[1462]: time="2026-03-04T01:10:05.005520982Z" level=error msg="Failed to destroy network for sandbox \"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.006858 containerd[1462]: time="2026-03-04T01:10:05.006791903Z" level=error msg="encountered an error cleaning up failed sandbox \"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.006858 containerd[1462]: time="2026-03-04T01:10:05.006866141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-pbgz5,Uid:210ae5c1-a8ed-43d0-af95-d0b548ed6ccf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.013897 kubelet[2542]: E0304 01:10:05.013857 2542 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.014250 kubelet[2542]: E0304 01:10:05.014178 2542 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-pbgz5" Mar 4 01:10:05.014448 kubelet[2542]: E0304 01:10:05.014335 2542 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-pbgz5" Mar 4 01:10:05.015458 kubelet[2542]: E0304 
01:10:05.015330 2542 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-pbgz5_calico-system(210ae5c1-a8ed-43d0-af95-d0b548ed6ccf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-pbgz5_calico-system(210ae5c1-a8ed-43d0-af95-d0b548ed6ccf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-pbgz5" podUID="210ae5c1-a8ed-43d0-af95-d0b548ed6ccf" Mar 4 01:10:05.016581 containerd[1462]: time="2026-03-04T01:10:05.016334783Z" level=error msg="Failed to destroy network for sandbox \"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.017910 containerd[1462]: time="2026-03-04T01:10:05.017879905Z" level=error msg="encountered an error cleaning up failed sandbox \"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.018206 containerd[1462]: time="2026-03-04T01:10:05.018109813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kkp6j,Uid:cca7a0a8-cc4f-4da2-beb7-9f56a3aae463,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.019323 kubelet[2542]: E0304 01:10:05.019075 2542 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.019323 kubelet[2542]: E0304 01:10:05.019111 2542 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kkp6j" Mar 4 01:10:05.019323 kubelet[2542]: E0304 01:10:05.019132 2542 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kkp6j" 
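Every sandbox failure in this window shares one root cause: the Calico CNI plugin aborts because /var/lib/calico/nodename does not exist yet, which is expected while the calico-node container started just above is still initializing. A minimal Go sketch of that kind of readiness check, offered as an illustration of the failing condition rather than Calico's actual implementation (the real check lives in the cni-plugin code referenced later in this log):

package main

import (
	"fmt"
	"os"
)

// Path quoted verbatim in the errors above; calico-node writes it once it is
// running and /var/lib/calico/ is mounted where the CNI plugin can see it.
const nodenameFile = "/var/lib/calico/nodename"

// nodeReady is an illustrative stand-in for the failing check, not Calico's code.
func nodeReady() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		// Reproduces the wording surfaced to containerd in the entries above.
		return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return nil
}

func main() {
	if err := nodeReady(); err != nil {
		fmt.Println("CNI add/delete would fail:", err)
		return
	}
	fmt.Println("nodename present; sandbox setup can proceed")
}

Once calico-node finishes starting (see the StartContainer return below), the same sandboxes are retried and succeed.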
Mar 4 01:10:05.019561 kubelet[2542]: E0304 01:10:05.019169 2542 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-kkp6j_kube-system(cca7a0a8-cc4f-4da2-beb7-9f56a3aae463)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-kkp6j_kube-system(cca7a0a8-cc4f-4da2-beb7-9f56a3aae463)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-kkp6j" podUID="cca7a0a8-cc4f-4da2-beb7-9f56a3aae463" Mar 4 01:10:05.027585 containerd[1462]: time="2026-03-04T01:10:05.025803809Z" level=info msg="StartContainer for \"27c94888c9265432f932ad0b324af79e3e8cd4d2e35bfd4cb60eb429684a00f3\" returns successfully" Mar 4 01:10:05.039459 containerd[1462]: time="2026-03-04T01:10:05.039220475Z" level=error msg="Failed to destroy network for sandbox \"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.040886 containerd[1462]: time="2026-03-04T01:10:05.040857137Z" level=error msg="encountered an error cleaning up failed sandbox \"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.041530 containerd[1462]: time="2026-03-04T01:10:05.041286618Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67c44bcbf7-rr89v,Uid:03774ad1-dd13-4278-9a53-7bcbb871098c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.042636 kubelet[2542]: E0304 01:10:05.042600 2542 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.042915 kubelet[2542]: E0304 01:10:05.042885 2542 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67c44bcbf7-rr89v" Mar 4 01:10:05.043129 kubelet[2542]: E0304 01:10:05.043111 2542 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67c44bcbf7-rr89v" Mar 4 01:10:05.043671 kubelet[2542]: E0304 01:10:05.043642 2542 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-67c44bcbf7-rr89v_calico-system(03774ad1-dd13-4278-9a53-7bcbb871098c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-67c44bcbf7-rr89v_calico-system(03774ad1-dd13-4278-9a53-7bcbb871098c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-67c44bcbf7-rr89v" podUID="03774ad1-dd13-4278-9a53-7bcbb871098c" Mar 4 01:10:05.051741 containerd[1462]: time="2026-03-04T01:10:05.051664684Z" level=error msg="Failed to destroy network for sandbox \"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.052249 containerd[1462]: time="2026-03-04T01:10:05.052138738Z" level=error msg="encountered an error cleaning up failed sandbox \"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.053633 containerd[1462]: time="2026-03-04T01:10:05.053577540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d656676db-z9tks,Uid:449200de-32ce-4f0d-8102-55cd4a726350,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.053888 kubelet[2542]: E0304 01:10:05.053774 2542 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.053888 kubelet[2542]: E0304 01:10:05.053820 2542 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d656676db-z9tks" Mar 4 01:10:05.053888 kubelet[2542]: E0304 01:10:05.053838 2542 kuberuntime_manager.go:1343] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d656676db-z9tks" Mar 4 01:10:05.054057 kubelet[2542]: E0304 01:10:05.053880 2542 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d656676db-z9tks_calico-system(449200de-32ce-4f0d-8102-55cd4a726350)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d656676db-z9tks_calico-system(449200de-32ce-4f0d-8102-55cd4a726350)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d656676db-z9tks" podUID="449200de-32ce-4f0d-8102-55cd4a726350" Mar 4 01:10:05.057226 containerd[1462]: time="2026-03-04T01:10:05.057197193Z" level=error msg="Failed to destroy network for sandbox \"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.057532 containerd[1462]: time="2026-03-04T01:10:05.057218999Z" level=error msg="Failed to destroy network for sandbox \"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.058077 containerd[1462]: time="2026-03-04T01:10:05.058049393Z" level=error msg="encountered an error cleaning up failed sandbox \"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.058275 containerd[1462]: time="2026-03-04T01:10:05.058150090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f64764bb-8ltws,Uid:fbc800a6-d75a-4cc1-8f1f-76421c8e840a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.059130 kubelet[2542]: E0304 01:10:05.058424 2542 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.059130 kubelet[2542]: E0304 01:10:05.058460 2542 kuberuntime_sandbox.go:71] "Failed 
to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-55f64764bb-8ltws" Mar 4 01:10:05.059130 kubelet[2542]: E0304 01:10:05.058475 2542 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-55f64764bb-8ltws" Mar 4 01:10:05.059240 containerd[1462]: time="2026-03-04T01:10:05.058973325Z" level=error msg="encountered an error cleaning up failed sandbox \"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.059240 containerd[1462]: time="2026-03-04T01:10:05.059012378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-98m57,Uid:92b3426b-65a8-45ba-9289-43631575f549,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.059290 kubelet[2542]: E0304 01:10:05.058661 2542 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55f64764bb-8ltws_calico-system(fbc800a6-d75a-4cc1-8f1f-76421c8e840a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55f64764bb-8ltws_calico-system(fbc800a6-d75a-4cc1-8f1f-76421c8e840a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-55f64764bb-8ltws" podUID="fbc800a6-d75a-4cc1-8f1f-76421c8e840a" Mar 4 01:10:05.059290 kubelet[2542]: E0304 01:10:05.059243 2542 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.059290 kubelet[2542]: E0304 01:10:05.059273 2542 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-98m57" Mar 4 01:10:05.059572 kubelet[2542]: E0304 01:10:05.059288 2542 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-98m57" Mar 4 01:10:05.059572 kubelet[2542]: E0304 01:10:05.059352 2542 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-98m57_kube-system(92b3426b-65a8-45ba-9289-43631575f549)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-98m57_kube-system(92b3426b-65a8-45ba-9289-43631575f549)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-98m57" podUID="92b3426b-65a8-45ba-9289-43631575f549" Mar 4 01:10:05.059945 containerd[1462]: time="2026-03-04T01:10:05.059772070Z" level=error msg="Failed to destroy network for sandbox \"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.060911 containerd[1462]: time="2026-03-04T01:10:05.060448281Z" level=error msg="encountered an error cleaning up failed sandbox \"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.060911 containerd[1462]: time="2026-03-04T01:10:05.060528551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f64764bb-9wz8h,Uid:93ede0dd-a20d-4275-9bed-8f0735634773,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.061050 kubelet[2542]: E0304 01:10:05.060764 2542 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:10:05.061050 kubelet[2542]: E0304 01:10:05.060794 2542 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-55f64764bb-9wz8h" Mar 4 01:10:05.061050 kubelet[2542]: E0304 01:10:05.060808 2542 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-55f64764bb-9wz8h" Mar 4 01:10:05.061239 kubelet[2542]: E0304 01:10:05.060858 2542 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55f64764bb-9wz8h_calico-system(93ede0dd-a20d-4275-9bed-8f0735634773)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55f64764bb-9wz8h_calico-system(93ede0dd-a20d-4275-9bed-8f0735634773)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-55f64764bb-9wz8h" podUID="93ede0dd-a20d-4275-9bed-8f0735634773" Mar 4 01:10:05.637764 systemd[1]: Created slice kubepods-besteffort-pod75938000_508f_451c_bf35_9cc1d786b69d.slice - libcontainer container kubepods-besteffort-pod75938000_508f_451c_bf35_9cc1d786b69d.slice. Mar 4 01:10:05.651260 containerd[1462]: time="2026-03-04T01:10:05.651150212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76qbn,Uid:75938000-508f-451c-bf35-9cc1d786b69d,Namespace:calico-system,Attempt:0,}" Mar 4 01:10:05.698442 kubelet[2542]: I0304 01:10:05.698317 2542 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Mar 4 01:10:05.701964 kubelet[2542]: I0304 01:10:05.701622 2542 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Mar 4 01:10:05.709795 kubelet[2542]: I0304 01:10:05.708795 2542 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Mar 4 01:10:05.712220 kubelet[2542]: I0304 01:10:05.712138 2542 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Mar 4 01:10:05.718241 containerd[1462]: time="2026-03-04T01:10:05.718191366Z" level=info msg="StopPodSandbox for \"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\"" Mar 4 01:10:05.719924 containerd[1462]: time="2026-03-04T01:10:05.718954460Z" level=info msg="StopPodSandbox for \"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\"" Mar 4 01:10:05.720531 containerd[1462]: time="2026-03-04T01:10:05.719251424Z" level=info msg="StopPodSandbox for \"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\"" Mar 4 01:10:05.721985 kubelet[2542]: I0304 01:10:05.721895 2542 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" 
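The two "Created slice" entries (this one and the pair at 01:10:04.447/04.453 above) show the naming systemd applies to BestEffort pods: the pod UID with dashes turned into underscores, wrapped in a kubepods-besteffort-pod…slice unit. A small sketch reproducing the observed pattern, as an illustration of the naming seen in this log rather than kubelet's own implementation:

package main

import (
	"fmt"
	"strings"
)

// sliceName mirrors the naming visible above for BestEffort pods.
func sliceName(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// UID of csi-node-driver-76qbn, taken from the RunPodSandbox entry above.
	fmt.Println(sliceName("75938000-508f-451c-bf35-9cc1d786b69d"))
	// kubepods-besteffort-pod75938000_508f_451c_bf35_9cc1d786b69d.slice
}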
Mar 4 01:10:05.722984 containerd[1462]: time="2026-03-04T01:10:05.722696425Z" level=info msg="StopPodSandbox for \"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\"" Mar 4 01:10:05.722984 containerd[1462]: time="2026-03-04T01:10:05.722860580Z" level=info msg="Ensure that sandbox 0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be in task-service has been cleanup successfully" Mar 4 01:10:05.722984 containerd[1462]: time="2026-03-04T01:10:05.722963285Z" level=info msg="Ensure that sandbox 4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82 in task-service has been cleanup successfully" Mar 4 01:10:05.725685 containerd[1462]: time="2026-03-04T01:10:05.723257994Z" level=info msg="Ensure that sandbox 4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc in task-service has been cleanup successfully" Mar 4 01:10:05.725685 containerd[1462]: time="2026-03-04T01:10:05.719303651Z" level=info msg="StopPodSandbox for \"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\"" Mar 4 01:10:05.725685 containerd[1462]: time="2026-03-04T01:10:05.725345247Z" level=info msg="Ensure that sandbox fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d in task-service has been cleanup successfully" Mar 4 01:10:05.728590 containerd[1462]: time="2026-03-04T01:10:05.728559718Z" level=info msg="Ensure that sandbox c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6 in task-service has been cleanup successfully" Mar 4 01:10:05.747023 kubelet[2542]: I0304 01:10:05.746835 2542 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Mar 4 01:10:05.747964 containerd[1462]: time="2026-03-04T01:10:05.747846457Z" level=info msg="StopPodSandbox for \"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\"" Mar 4 01:10:05.748553 containerd[1462]: time="2026-03-04T01:10:05.748476702Z" level=info msg="Ensure that sandbox e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57 in task-service has been cleanup successfully" Mar 4 01:10:05.752466 kubelet[2542]: I0304 01:10:05.749687 2542 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Mar 4 01:10:05.758315 containerd[1462]: time="2026-03-04T01:10:05.758099103Z" level=info msg="StopPodSandbox for \"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\"" Mar 4 01:10:05.762199 containerd[1462]: time="2026-03-04T01:10:05.762094844Z" level=info msg="Ensure that sandbox 82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a in task-service has been cleanup successfully" Mar 4 01:10:05.768421 kubelet[2542]: I0304 01:10:05.768163 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xb8p2" podStartSLOduration=3.005986369 podStartE2EDuration="15.768074836s" podCreationTimestamp="2026-03-04 01:09:50 +0000 UTC" firstStartedPulling="2026-03-04 01:09:50.652538611 +0000 UTC m=+19.346590989" lastFinishedPulling="2026-03-04 01:10:03.414627079 +0000 UTC m=+32.108679456" observedRunningTime="2026-03-04 01:10:05.767181428 +0000 UTC m=+34.461233805" watchObservedRunningTime="2026-03-04 01:10:05.768074836 +0000 UTC m=+34.462127223" Mar 4 01:10:05.938699 systemd[1]: run-containerd-runc-k8s.io-27c94888c9265432f932ad0b324af79e3e8cd4d2e35bfd4cb60eb429684a00f3-runc.F2nJWw.mount: Deactivated successfully. 
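The startup-latency entry just above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Recomputing from the logged wall-clock values, as a check of the reported numbers rather than kubelet code, agrees to within a nanosecond of rounding:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches Go's default time.Time formatting used in the entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-03-04 01:09:50 +0000 UTC")
	pullStart := parse("2026-03-04 01:09:50.652538611 +0000 UTC")
	pullEnd := parse("2026-03-04 01:10:03.414627079 +0000 UTC")
	observed := parse("2026-03-04 01:10:05.768074836 +0000 UTC")

	e2e := observed.Sub(created)        // ~15.768074836s (podStartE2EDuration)
	slo := e2e - pullEnd.Sub(pullStart) // ~3.005986368s  (podStartSLOduration)
	fmt.Println(e2e, slo)
}

Most of the 15.8s end-to-end time for calico-node-xb8p2 was therefore image pulling (about 12.76s), not container start.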
Mar 4 01:10:08.730059 containerd[1462]: 2026-03-04 01:10:08.415 [INFO][3699] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Mar 4 01:10:08.730059 containerd[1462]: 2026-03-04 01:10:08.415 [INFO][3699] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" iface="eth0" netns="/var/run/netns/cni-ad9c63c2-58a4-9ef5-5ef8-179c197554f9" Mar 4 01:10:08.730059 containerd[1462]: 2026-03-04 01:10:08.416 [INFO][3699] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" iface="eth0" netns="/var/run/netns/cni-ad9c63c2-58a4-9ef5-5ef8-179c197554f9" Mar 4 01:10:08.730059 containerd[1462]: 2026-03-04 01:10:08.420 [INFO][3699] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" iface="eth0" netns="/var/run/netns/cni-ad9c63c2-58a4-9ef5-5ef8-179c197554f9" Mar 4 01:10:08.730059 containerd[1462]: 2026-03-04 01:10:08.422 [INFO][3699] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Mar 4 01:10:08.730059 containerd[1462]: 2026-03-04 01:10:08.422 [INFO][3699] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Mar 4 01:10:08.730059 containerd[1462]: 2026-03-04 01:10:08.590 [INFO][3818] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" HandleID="k8s-pod-network.4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Workload="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" Mar 4 01:10:08.730059 containerd[1462]: 2026-03-04 01:10:08.591 [INFO][3818] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:08.730059 containerd[1462]: 2026-03-04 01:10:08.591 [INFO][3818] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:08.730059 containerd[1462]: 2026-03-04 01:10:08.652 [WARNING][3818] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" HandleID="k8s-pod-network.4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Workload="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" Mar 4 01:10:08.730059 containerd[1462]: 2026-03-04 01:10:08.652 [INFO][3818] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" HandleID="k8s-pod-network.4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Workload="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" Mar 4 01:10:08.730059 containerd[1462]: 2026-03-04 01:10:08.670 [INFO][3818] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:08.730059 containerd[1462]: 2026-03-04 01:10:08.688 [INFO][3699] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Mar 4 01:10:08.740818 containerd[1462]: time="2026-03-04T01:10:08.740716801Z" level=info msg="TearDown network for sandbox \"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\" successfully" Mar 4 01:10:08.741424 containerd[1462]: time="2026-03-04T01:10:08.741268480Z" level=info msg="StopPodSandbox for \"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\" returns successfully" Mar 4 01:10:08.742089 systemd[1]: run-netns-cni\x2dad9c63c2\x2d58a4\x2d9ef5\x2d5ef8\x2d179c197554f9.mount: Deactivated successfully. Mar 4 01:10:08.758662 containerd[1462]: time="2026-03-04T01:10:08.758559572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d656676db-z9tks,Uid:449200de-32ce-4f0d-8102-55cd4a726350,Namespace:calico-system,Attempt:1,}" Mar 4 01:10:08.981976 containerd[1462]: 2026-03-04 01:10:08.548 [INFO][3698] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Mar 4 01:10:08.981976 containerd[1462]: 2026-03-04 01:10:08.549 [INFO][3698] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" iface="eth0" netns="/var/run/netns/cni-c2a3e73c-8fab-a9d1-c165-6ed59e70c079" Mar 4 01:10:08.981976 containerd[1462]: 2026-03-04 01:10:08.551 [INFO][3698] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" iface="eth0" netns="/var/run/netns/cni-c2a3e73c-8fab-a9d1-c165-6ed59e70c079" Mar 4 01:10:08.981976 containerd[1462]: 2026-03-04 01:10:08.555 [INFO][3698] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" iface="eth0" netns="/var/run/netns/cni-c2a3e73c-8fab-a9d1-c165-6ed59e70c079" Mar 4 01:10:08.981976 containerd[1462]: 2026-03-04 01:10:08.555 [INFO][3698] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Mar 4 01:10:08.981976 containerd[1462]: 2026-03-04 01:10:08.555 [INFO][3698] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Mar 4 01:10:08.981976 containerd[1462]: 2026-03-04 01:10:08.732 [INFO][3868] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" HandleID="k8s-pod-network.c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Workload="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" Mar 4 01:10:08.981976 containerd[1462]: 2026-03-04 01:10:08.737 [INFO][3868] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:08.981976 containerd[1462]: 2026-03-04 01:10:08.931 [INFO][3868] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:08.981976 containerd[1462]: 2026-03-04 01:10:08.947 [WARNING][3868] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" HandleID="k8s-pod-network.c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Workload="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" Mar 4 01:10:08.981976 containerd[1462]: 2026-03-04 01:10:08.947 [INFO][3868] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" HandleID="k8s-pod-network.c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Workload="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" Mar 4 01:10:08.981976 containerd[1462]: 2026-03-04 01:10:08.953 [INFO][3868] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:08.981976 containerd[1462]: 2026-03-04 01:10:08.971 [INFO][3698] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Mar 4 01:10:08.988985 containerd[1462]: time="2026-03-04T01:10:08.988867880Z" level=info msg="TearDown network for sandbox \"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\" successfully" Mar 4 01:10:08.989581 containerd[1462]: time="2026-03-04T01:10:08.989200241Z" level=info msg="StopPodSandbox for \"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\" returns successfully" Mar 4 01:10:08.990344 systemd[1]: run-netns-cni\x2dc2a3e73c\x2d8fab\x2da9d1\x2dc165\x2d6ed59e70c079.mount: Deactivated successfully. Mar 4 01:10:09.009816 containerd[1462]: time="2026-03-04T01:10:09.009728374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-pbgz5,Uid:210ae5c1-a8ed-43d0-af95-d0b548ed6ccf,Namespace:calico-system,Attempt:1,}" Mar 4 01:10:09.010737 systemd-networkd[1387]: cali287e41f94c4: Link UP Mar 4 01:10:09.011712 systemd-networkd[1387]: cali287e41f94c4: Gained carrier Mar 4 01:10:09.077919 containerd[1462]: 2026-03-04 01:10:08.459 [INFO][3708] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Mar 4 01:10:09.077919 containerd[1462]: 2026-03-04 01:10:08.461 [INFO][3708] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" iface="eth0" netns="/var/run/netns/cni-a829c21f-6e95-f59e-15e3-ff07a9d9248b" Mar 4 01:10:09.077919 containerd[1462]: 2026-03-04 01:10:08.461 [INFO][3708] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" iface="eth0" netns="/var/run/netns/cni-a829c21f-6e95-f59e-15e3-ff07a9d9248b" Mar 4 01:10:09.077919 containerd[1462]: 2026-03-04 01:10:08.466 [INFO][3708] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" iface="eth0" netns="/var/run/netns/cni-a829c21f-6e95-f59e-15e3-ff07a9d9248b" Mar 4 01:10:09.077919 containerd[1462]: 2026-03-04 01:10:08.466 [INFO][3708] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Mar 4 01:10:09.077919 containerd[1462]: 2026-03-04 01:10:08.466 [INFO][3708] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Mar 4 01:10:09.077919 containerd[1462]: 2026-03-04 01:10:08.825 [INFO][3837] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" HandleID="k8s-pod-network.fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Workload="localhost-k8s-whisker--67c44bcbf7--rr89v-eth0" Mar 4 01:10:09.077919 containerd[1462]: 2026-03-04 01:10:08.844 [INFO][3837] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:09.077919 containerd[1462]: 2026-03-04 01:10:08.989 [INFO][3837] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:09.077919 containerd[1462]: 2026-03-04 01:10:09.015 [WARNING][3837] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" HandleID="k8s-pod-network.fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Workload="localhost-k8s-whisker--67c44bcbf7--rr89v-eth0" Mar 4 01:10:09.077919 containerd[1462]: 2026-03-04 01:10:09.015 [INFO][3837] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" HandleID="k8s-pod-network.fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Workload="localhost-k8s-whisker--67c44bcbf7--rr89v-eth0" Mar 4 01:10:09.077919 containerd[1462]: 2026-03-04 01:10:09.023 [INFO][3837] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:09.077919 containerd[1462]: 2026-03-04 01:10:09.046 [INFO][3708] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Mar 4 01:10:09.080419 containerd[1462]: time="2026-03-04T01:10:09.078746158Z" level=info msg="TearDown network for sandbox \"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\" successfully" Mar 4 01:10:09.080419 containerd[1462]: time="2026-03-04T01:10:09.078855411Z" level=info msg="StopPodSandbox for \"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\" returns successfully" Mar 4 01:10:09.093074 kubelet[2542]: I0304 01:10:09.092936 2542 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:10:09.106620 containerd[1462]: 2026-03-04 01:10:08.613 [INFO][3742] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Mar 4 01:10:09.106620 containerd[1462]: 2026-03-04 01:10:08.613 [INFO][3742] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" iface="eth0" netns="/var/run/netns/cni-ecb1f2bb-5d4f-026b-c7cb-b182721a3d69" Mar 4 01:10:09.106620 containerd[1462]: 2026-03-04 01:10:08.615 [INFO][3742] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" iface="eth0" netns="/var/run/netns/cni-ecb1f2bb-5d4f-026b-c7cb-b182721a3d69" Mar 4 01:10:09.106620 containerd[1462]: 2026-03-04 01:10:08.627 [INFO][3742] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" iface="eth0" netns="/var/run/netns/cni-ecb1f2bb-5d4f-026b-c7cb-b182721a3d69" Mar 4 01:10:09.106620 containerd[1462]: 2026-03-04 01:10:08.628 [INFO][3742] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Mar 4 01:10:09.106620 containerd[1462]: 2026-03-04 01:10:08.628 [INFO][3742] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Mar 4 01:10:09.106620 containerd[1462]: 2026-03-04 01:10:08.784 [INFO][3880] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" HandleID="k8s-pod-network.e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Workload="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" Mar 4 01:10:09.106620 containerd[1462]: 2026-03-04 01:10:08.837 [INFO][3880] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:09.106620 containerd[1462]: 2026-03-04 01:10:08.962 [INFO][3880] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:09.106620 containerd[1462]: 2026-03-04 01:10:08.981 [WARNING][3880] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" HandleID="k8s-pod-network.e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Workload="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" Mar 4 01:10:09.106620 containerd[1462]: 2026-03-04 01:10:08.981 [INFO][3880] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" HandleID="k8s-pod-network.e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Workload="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" Mar 4 01:10:09.106620 containerd[1462]: 2026-03-04 01:10:08.989 [INFO][3880] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:09.106620 containerd[1462]: 2026-03-04 01:10:09.032 [INFO][3742] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Mar 4 01:10:09.107662 containerd[1462]: time="2026-03-04T01:10:09.107631308Z" level=info msg="TearDown network for sandbox \"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\" successfully" Mar 4 01:10:09.107732 containerd[1462]: time="2026-03-04T01:10:09.107713952Z" level=info msg="StopPodSandbox for \"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\" returns successfully" Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:05.701 [ERROR][3623] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:05.775 [INFO][3623] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--76qbn-eth0 csi-node-driver- calico-system 75938000-508f-451c-bf35-9cc1d786b69d 730 0 2026-03-04 01:09:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-76qbn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali287e41f94c4 [] [] }} ContainerID="90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" Namespace="calico-system" Pod="csi-node-driver-76qbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--76qbn-" Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:05.776 [INFO][3623] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" Namespace="calico-system" Pod="csi-node-driver-76qbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--76qbn-eth0" Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:08.590 [INFO][3740] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" HandleID="k8s-pod-network.90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" Workload="localhost-k8s-csi--node--driver--76qbn-eth0" Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:08.651 [INFO][3740] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" HandleID="k8s-pod-network.90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" Workload="localhost-k8s-csi--node--driver--76qbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00018ba60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-76qbn", "timestamp":"2026-03-04 01:10:08.590161084 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002cc160)} Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:08.652 [INFO][3740] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:08.674 [INFO][3740] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:08.674 [INFO][3740] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:08.727 [INFO][3740] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" host="localhost" Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:08.805 [INFO][3740] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:08.868 [INFO][3740] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:08.876 [INFO][3740] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:08.880 [INFO][3740] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:08.880 [INFO][3740] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" host="localhost" Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:08.885 [INFO][3740] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:08.902 [INFO][3740] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" host="localhost" Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:08.929 [INFO][3740] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" host="localhost" Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:08.929 [INFO][3740] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" host="localhost" Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:08.929 [INFO][3740] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
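The IPAM trace above follows the normal Calico path: take the host-wide lock, confirm this node's affinity to the 192.168.88.128/26 block, then claim 192.168.88.129 for the pending pod. A /26 block spans 64 addresses, 192.168.88.128 through 192.168.88.191; the sketch below is illustrative only, deriving that range from the logged CIDR with the standard library rather than Calico's ipam package:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Affinity block reported in the IPAM trace above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	first := block.Masked().Addr()
	last := first
	for i := 0; i < (1<<(32-block.Bits()))-1; i++ { // 63 steps across a /26
		last = last.Next()
	}
	fmt.Println(first, last) // 192.168.88.128 192.168.88.191
	// 192.168.88.129, claimed above for csi-node-driver-76qbn, is the first
	// address handed out from this block on this host.
}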
Mar 4 01:10:09.114068 containerd[1462]: 2026-03-04 01:10:08.929 [INFO][3740] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" HandleID="k8s-pod-network.90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" Workload="localhost-k8s-csi--node--driver--76qbn-eth0" Mar 4 01:10:09.115347 containerd[1462]: 2026-03-04 01:10:08.954 [INFO][3623] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" Namespace="calico-system" Pod="csi-node-driver-76qbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--76qbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--76qbn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"75938000-508f-451c-bf35-9cc1d786b69d", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-76qbn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali287e41f94c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:09.115347 containerd[1462]: 2026-03-04 01:10:08.955 [INFO][3623] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" Namespace="calico-system" Pod="csi-node-driver-76qbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--76qbn-eth0" Mar 4 01:10:09.115347 containerd[1462]: 2026-03-04 01:10:08.955 [INFO][3623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali287e41f94c4 ContainerID="90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" Namespace="calico-system" Pod="csi-node-driver-76qbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--76qbn-eth0" Mar 4 01:10:09.115347 containerd[1462]: 2026-03-04 01:10:09.015 [INFO][3623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" Namespace="calico-system" Pod="csi-node-driver-76qbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--76qbn-eth0" Mar 4 01:10:09.115347 containerd[1462]: 2026-03-04 01:10:09.043 [INFO][3623] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" Namespace="calico-system" Pod="csi-node-driver-76qbn" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--76qbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--76qbn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"75938000-508f-451c-bf35-9cc1d786b69d", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac", Pod:"csi-node-driver-76qbn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali287e41f94c4", MAC:"82:c0:f4:1e:83:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:09.115347 containerd[1462]: 2026-03-04 01:10:09.090 [INFO][3623] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac" Namespace="calico-system" Pod="csi-node-driver-76qbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--76qbn-eth0" Mar 4 01:10:09.117999 containerd[1462]: time="2026-03-04T01:10:09.117854865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f64764bb-9wz8h,Uid:93ede0dd-a20d-4275-9bed-8f0735634773,Namespace:calico-system,Attempt:1,}" Mar 4 01:10:09.161742 kubelet[2542]: I0304 01:10:09.160332 2542 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/03774ad1-dd13-4278-9a53-7bcbb871098c-nginx-config\") pod \"03774ad1-dd13-4278-9a53-7bcbb871098c\" (UID: \"03774ad1-dd13-4278-9a53-7bcbb871098c\") " Mar 4 01:10:09.161742 kubelet[2542]: I0304 01:10:09.162145 2542 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hbfr\" (UniqueName: \"kubernetes.io/projected/03774ad1-dd13-4278-9a53-7bcbb871098c-kube-api-access-2hbfr\") pod \"03774ad1-dd13-4278-9a53-7bcbb871098c\" (UID: \"03774ad1-dd13-4278-9a53-7bcbb871098c\") " Mar 4 01:10:09.161742 kubelet[2542]: I0304 01:10:09.162248 2542 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/03774ad1-dd13-4278-9a53-7bcbb871098c-whisker-backend-key-pair\") pod \"03774ad1-dd13-4278-9a53-7bcbb871098c\" (UID: \"03774ad1-dd13-4278-9a53-7bcbb871098c\") " Mar 4 01:10:09.161742 kubelet[2542]: I0304 01:10:09.162289 2542 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03774ad1-dd13-4278-9a53-7bcbb871098c-whisker-ca-bundle\") pod 
\"03774ad1-dd13-4278-9a53-7bcbb871098c\" (UID: \"03774ad1-dd13-4278-9a53-7bcbb871098c\") " Mar 4 01:10:09.163434 kubelet[2542]: I0304 01:10:09.162417 2542 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03774ad1-dd13-4278-9a53-7bcbb871098c-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "03774ad1-dd13-4278-9a53-7bcbb871098c" (UID: "03774ad1-dd13-4278-9a53-7bcbb871098c"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 01:10:09.170012 kubelet[2542]: I0304 01:10:09.169669 2542 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03774ad1-dd13-4278-9a53-7bcbb871098c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "03774ad1-dd13-4278-9a53-7bcbb871098c" (UID: "03774ad1-dd13-4278-9a53-7bcbb871098c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 01:10:09.176947 kubelet[2542]: I0304 01:10:09.176861 2542 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03774ad1-dd13-4278-9a53-7bcbb871098c-kube-api-access-2hbfr" (OuterVolumeSpecName: "kube-api-access-2hbfr") pod "03774ad1-dd13-4278-9a53-7bcbb871098c" (UID: "03774ad1-dd13-4278-9a53-7bcbb871098c"). InnerVolumeSpecName "kube-api-access-2hbfr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 4 01:10:09.185314 kubelet[2542]: I0304 01:10:09.185283 2542 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03774ad1-dd13-4278-9a53-7bcbb871098c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "03774ad1-dd13-4278-9a53-7bcbb871098c" (UID: "03774ad1-dd13-4278-9a53-7bcbb871098c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 4 01:10:09.229065 containerd[1462]: 2026-03-04 01:10:08.454 [INFO][3686] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Mar 4 01:10:09.229065 containerd[1462]: 2026-03-04 01:10:08.454 [INFO][3686] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" iface="eth0" netns="/var/run/netns/cni-a1c36997-eef1-bb04-c97f-c7fae3b3592e" Mar 4 01:10:09.229065 containerd[1462]: 2026-03-04 01:10:08.455 [INFO][3686] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" iface="eth0" netns="/var/run/netns/cni-a1c36997-eef1-bb04-c97f-c7fae3b3592e" Mar 4 01:10:09.229065 containerd[1462]: 2026-03-04 01:10:08.455 [INFO][3686] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" iface="eth0" netns="/var/run/netns/cni-a1c36997-eef1-bb04-c97f-c7fae3b3592e" Mar 4 01:10:09.229065 containerd[1462]: 2026-03-04 01:10:08.458 [INFO][3686] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Mar 4 01:10:09.229065 containerd[1462]: 2026-03-04 01:10:08.458 [INFO][3686] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Mar 4 01:10:09.229065 containerd[1462]: 2026-03-04 01:10:08.826 [INFO][3832] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" HandleID="k8s-pod-network.0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Workload="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" Mar 4 01:10:09.229065 containerd[1462]: 2026-03-04 01:10:08.846 [INFO][3832] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:09.229065 containerd[1462]: 2026-03-04 01:10:09.026 [INFO][3832] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:09.229065 containerd[1462]: 2026-03-04 01:10:09.116 [WARNING][3832] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" HandleID="k8s-pod-network.0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Workload="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" Mar 4 01:10:09.229065 containerd[1462]: 2026-03-04 01:10:09.117 [INFO][3832] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" HandleID="k8s-pod-network.0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Workload="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" Mar 4 01:10:09.229065 containerd[1462]: 2026-03-04 01:10:09.120 [INFO][3832] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:09.229065 containerd[1462]: 2026-03-04 01:10:09.147 [INFO][3686] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Mar 4 01:10:09.235418 containerd[1462]: time="2026-03-04T01:10:09.231833362Z" level=info msg="TearDown network for sandbox \"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\" successfully" Mar 4 01:10:09.235418 containerd[1462]: time="2026-03-04T01:10:09.231878717Z" level=info msg="StopPodSandbox for \"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\" returns successfully" Mar 4 01:10:09.235418 containerd[1462]: 2026-03-04 01:10:08.677 [INFO][3762] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Mar 4 01:10:09.235418 containerd[1462]: 2026-03-04 01:10:08.679 [INFO][3762] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" iface="eth0" netns="/var/run/netns/cni-be8be6c5-cb99-4287-019d-338ed0e2c1bb" Mar 4 01:10:09.235418 containerd[1462]: 2026-03-04 01:10:08.680 [INFO][3762] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" iface="eth0" netns="/var/run/netns/cni-be8be6c5-cb99-4287-019d-338ed0e2c1bb" Mar 4 01:10:09.235418 containerd[1462]: 2026-03-04 01:10:08.680 [INFO][3762] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" iface="eth0" netns="/var/run/netns/cni-be8be6c5-cb99-4287-019d-338ed0e2c1bb" Mar 4 01:10:09.235418 containerd[1462]: 2026-03-04 01:10:08.681 [INFO][3762] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Mar 4 01:10:09.235418 containerd[1462]: 2026-03-04 01:10:08.681 [INFO][3762] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Mar 4 01:10:09.235418 containerd[1462]: 2026-03-04 01:10:08.867 [INFO][3897] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" HandleID="k8s-pod-network.82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Workload="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" Mar 4 01:10:09.235418 containerd[1462]: 2026-03-04 01:10:08.872 [INFO][3897] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:09.235418 containerd[1462]: 2026-03-04 01:10:09.120 [INFO][3897] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:09.235418 containerd[1462]: 2026-03-04 01:10:09.139 [WARNING][3897] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" HandleID="k8s-pod-network.82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Workload="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" Mar 4 01:10:09.235418 containerd[1462]: 2026-03-04 01:10:09.139 [INFO][3897] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" HandleID="k8s-pod-network.82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Workload="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" Mar 4 01:10:09.235418 containerd[1462]: 2026-03-04 01:10:09.145 [INFO][3897] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:09.235418 containerd[1462]: 2026-03-04 01:10:09.168 [INFO][3762] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Mar 4 01:10:09.239803 containerd[1462]: time="2026-03-04T01:10:09.238769108Z" level=info msg="TearDown network for sandbox \"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\" successfully" Mar 4 01:10:09.239803 containerd[1462]: time="2026-03-04T01:10:09.239205141Z" level=info msg="StopPodSandbox for \"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\" returns successfully" Mar 4 01:10:09.241933 kubelet[2542]: E0304 01:10:09.241320 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:09.247197 containerd[1462]: time="2026-03-04T01:10:09.247095732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kkp6j,Uid:cca7a0a8-cc4f-4da2-beb7-9f56a3aae463,Namespace:kube-system,Attempt:1,}" Mar 4 01:10:09.247515 containerd[1462]: time="2026-03-04T01:10:09.247453514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f64764bb-8ltws,Uid:fbc800a6-d75a-4cc1-8f1f-76421c8e840a,Namespace:calico-system,Attempt:1,}" Mar 4 01:10:09.264530 kubelet[2542]: I0304 01:10:09.264460 2542 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/03774ad1-dd13-4278-9a53-7bcbb871098c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 4 01:10:09.264694 kubelet[2542]: I0304 01:10:09.264655 2542 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03774ad1-dd13-4278-9a53-7bcbb871098c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 4 01:10:09.264694 kubelet[2542]: I0304 01:10:09.264670 2542 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/03774ad1-dd13-4278-9a53-7bcbb871098c-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 4 01:10:09.264694 kubelet[2542]: I0304 01:10:09.264678 2542 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2hbfr\" (UniqueName: \"kubernetes.io/projected/03774ad1-dd13-4278-9a53-7bcbb871098c-kube-api-access-2hbfr\") on node \"localhost\" DevicePath \"\"" Mar 4 01:10:09.272611 containerd[1462]: 2026-03-04 01:10:08.630 [INFO][3684] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Mar 4 01:10:09.272611 containerd[1462]: 2026-03-04 01:10:08.630 [INFO][3684] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" iface="eth0" netns="/var/run/netns/cni-3e3d50c3-ff3c-2ab1-9566-bc9ae80e31ce" Mar 4 01:10:09.272611 containerd[1462]: 2026-03-04 01:10:08.630 [INFO][3684] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" iface="eth0" netns="/var/run/netns/cni-3e3d50c3-ff3c-2ab1-9566-bc9ae80e31ce" Mar 4 01:10:09.272611 containerd[1462]: 2026-03-04 01:10:08.634 [INFO][3684] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" iface="eth0" netns="/var/run/netns/cni-3e3d50c3-ff3c-2ab1-9566-bc9ae80e31ce" Mar 4 01:10:09.272611 containerd[1462]: 2026-03-04 01:10:08.634 [INFO][3684] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Mar 4 01:10:09.272611 containerd[1462]: 2026-03-04 01:10:08.636 [INFO][3684] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Mar 4 01:10:09.272611 containerd[1462]: 2026-03-04 01:10:08.889 [INFO][3882] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" HandleID="k8s-pod-network.4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Workload="localhost-k8s-coredns--66bc5c9577--98m57-eth0" Mar 4 01:10:09.272611 containerd[1462]: 2026-03-04 01:10:08.894 [INFO][3882] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:09.272611 containerd[1462]: 2026-03-04 01:10:09.146 [INFO][3882] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:09.272611 containerd[1462]: 2026-03-04 01:10:09.215 [WARNING][3882] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" HandleID="k8s-pod-network.4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Workload="localhost-k8s-coredns--66bc5c9577--98m57-eth0" Mar 4 01:10:09.272611 containerd[1462]: 2026-03-04 01:10:09.215 [INFO][3882] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" HandleID="k8s-pod-network.4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Workload="localhost-k8s-coredns--66bc5c9577--98m57-eth0" Mar 4 01:10:09.272611 containerd[1462]: 2026-03-04 01:10:09.220 [INFO][3882] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:09.272611 containerd[1462]: 2026-03-04 01:10:09.254 [INFO][3684] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Mar 4 01:10:09.273070 containerd[1462]: time="2026-03-04T01:10:09.272950443Z" level=info msg="TearDown network for sandbox \"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\" successfully" Mar 4 01:10:09.273070 containerd[1462]: time="2026-03-04T01:10:09.272972194Z" level=info msg="StopPodSandbox for \"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\" returns successfully" Mar 4 01:10:09.276948 kubelet[2542]: E0304 01:10:09.276915 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:09.278225 containerd[1462]: time="2026-03-04T01:10:09.278192439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-98m57,Uid:92b3426b-65a8-45ba-9289-43631575f549,Namespace:kube-system,Attempt:1,}" Mar 4 01:10:09.360730 containerd[1462]: time="2026-03-04T01:10:09.360148043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:10:09.360730 containerd[1462]: time="2026-03-04T01:10:09.360279959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:10:09.360730 containerd[1462]: time="2026-03-04T01:10:09.360297381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:09.360730 containerd[1462]: time="2026-03-04T01:10:09.360579398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:09.455631 systemd-networkd[1387]: calie6270715da0: Link UP Mar 4 01:10:09.457653 systemd-networkd[1387]: calie6270715da0: Gained carrier Mar 4 01:10:09.497577 systemd[1]: Started cri-containerd-90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac.scope - libcontainer container 90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac. Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:08.935 [ERROR][3937] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:08.957 [INFO][3937] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0 calico-kube-controllers-5d656676db- calico-system 449200de-32ce-4f0d-8102-55cd4a726350 911 0 2026-03-04 01:09:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d656676db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5d656676db-z9tks eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie6270715da0 [] [] }} ContainerID="bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" Namespace="calico-system" Pod="calico-kube-controllers-5d656676db-z9tks" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-" Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:08.958 [INFO][3937] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" Namespace="calico-system" Pod="calico-kube-controllers-5d656676db-z9tks" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:09.183 [INFO][3977] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" HandleID="k8s-pod-network.bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" Workload="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:09.209 [INFO][3977] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" HandleID="k8s-pod-network.bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" Workload="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f9e90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5d656676db-z9tks", "timestamp":"2026-03-04 01:10:09.183990334 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000352000)} Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:09.209 [INFO][3977] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:09.222 [INFO][3977] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:09.223 [INFO][3977] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:09.263 [INFO][3977] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" host="localhost" Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:09.357 [INFO][3977] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:09.374 [INFO][3977] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:09.383 [INFO][3977] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:09.388 [INFO][3977] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:09.389 [INFO][3977] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" host="localhost" Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:09.396 [INFO][3977] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3 Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:09.408 [INFO][3977] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" host="localhost" Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:09.432 [INFO][3977] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" host="localhost" Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:09.432 [INFO][3977] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" host="localhost" Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:09.432 [INFO][3977] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 01:10:09.518971 containerd[1462]: 2026-03-04 01:10:09.432 [INFO][3977] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" HandleID="k8s-pod-network.bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" Workload="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" Mar 4 01:10:09.519693 containerd[1462]: 2026-03-04 01:10:09.445 [INFO][3937] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" Namespace="calico-system" Pod="calico-kube-controllers-5d656676db-z9tks" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0", GenerateName:"calico-kube-controllers-5d656676db-", Namespace:"calico-system", SelfLink:"", UID:"449200de-32ce-4f0d-8102-55cd4a726350", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d656676db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5d656676db-z9tks", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie6270715da0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:09.519693 containerd[1462]: 2026-03-04 01:10:09.446 [INFO][3937] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" Namespace="calico-system" Pod="calico-kube-controllers-5d656676db-z9tks" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" Mar 4 01:10:09.519693 containerd[1462]: 2026-03-04 01:10:09.446 [INFO][3937] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie6270715da0 ContainerID="bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" Namespace="calico-system" Pod="calico-kube-controllers-5d656676db-z9tks" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" Mar 4 01:10:09.519693 containerd[1462]: 2026-03-04 01:10:09.457 [INFO][3937] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" Namespace="calico-system" Pod="calico-kube-controllers-5d656676db-z9tks" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" Mar 4 01:10:09.519693 containerd[1462]: 2026-03-04 01:10:09.479 [INFO][3937] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" Namespace="calico-system" Pod="calico-kube-controllers-5d656676db-z9tks" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0", GenerateName:"calico-kube-controllers-5d656676db-", Namespace:"calico-system", SelfLink:"", UID:"449200de-32ce-4f0d-8102-55cd4a726350", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d656676db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3", Pod:"calico-kube-controllers-5d656676db-z9tks", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie6270715da0", MAC:"42:44:a4:d9:5c:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:09.519693 containerd[1462]: 2026-03-04 01:10:09.511 [INFO][3937] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3" Namespace="calico-system" Pod="calico-kube-controllers-5d656676db-z9tks" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" Mar 4 01:10:09.597755 systemd-resolved[1392]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:10:09.610192 systemd-networkd[1387]: cali98671d973b3: Link UP Mar 4 01:10:09.611666 systemd-networkd[1387]: cali98671d973b3: Gained carrier Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.197 [ERROR][3987] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.221 [INFO][3987] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0 goldmane-cccfbd5cf- calico-system 210ae5c1-a8ed-43d0-af95-d0b548ed6ccf 914 0 2026-03-04 01:09:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-pbgz5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali98671d973b3 [] [] }} 
ContainerID="307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pbgz5" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--pbgz5-" Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.223 [INFO][3987] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pbgz5" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.367 [INFO][4029] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" HandleID="k8s-pod-network.307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" Workload="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.376 [INFO][4029] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" HandleID="k8s-pod-network.307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" Workload="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ed870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-pbgz5", "timestamp":"2026-03-04 01:10:09.367117079 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003ea580)} Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.376 [INFO][4029] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.438 [INFO][4029] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.438 [INFO][4029] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.456 [INFO][4029] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" host="localhost" Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.495 [INFO][4029] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.516 [INFO][4029] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.531 [INFO][4029] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.536 [INFO][4029] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.536 [INFO][4029] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" host="localhost" Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.545 [INFO][4029] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.556 [INFO][4029] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" host="localhost" Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.581 [INFO][4029] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" host="localhost" Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.581 [INFO][4029] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" host="localhost" Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.581 [INFO][4029] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 01:10:09.656854 containerd[1462]: 2026-03-04 01:10:09.581 [INFO][4029] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" HandleID="k8s-pod-network.307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" Workload="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" Mar 4 01:10:09.657591 containerd[1462]: 2026-03-04 01:10:09.594 [INFO][3987] cni-plugin/k8s.go 418: Populated endpoint ContainerID="307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pbgz5" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"210ae5c1-a8ed-43d0-af95-d0b548ed6ccf", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-pbgz5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali98671d973b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:09.657591 containerd[1462]: 2026-03-04 01:10:09.594 [INFO][3987] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pbgz5" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" Mar 4 01:10:09.657591 containerd[1462]: 2026-03-04 01:10:09.594 [INFO][3987] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98671d973b3 ContainerID="307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pbgz5" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" Mar 4 01:10:09.657591 containerd[1462]: 2026-03-04 01:10:09.614 [INFO][3987] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pbgz5" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" Mar 4 01:10:09.657591 containerd[1462]: 2026-03-04 01:10:09.614 [INFO][3987] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pbgz5" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"210ae5c1-a8ed-43d0-af95-d0b548ed6ccf", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae", Pod:"goldmane-cccfbd5cf-pbgz5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali98671d973b3", MAC:"e6:fd:52:42:6a:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:09.657591 containerd[1462]: 2026-03-04 01:10:09.630 [INFO][3987] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pbgz5" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" Mar 4 01:10:09.662063 systemd[1]: Removed slice kubepods-besteffort-pod03774ad1_dd13_4278_9a53_7bcbb871098c.slice - libcontainer container kubepods-besteffort-pod03774ad1_dd13_4278_9a53_7bcbb871098c.slice. Mar 4 01:10:09.663689 containerd[1462]: time="2026-03-04T01:10:09.662316521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:10:09.663689 containerd[1462]: time="2026-03-04T01:10:09.662624765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:10:09.663689 containerd[1462]: time="2026-03-04T01:10:09.662646035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:09.663689 containerd[1462]: time="2026-03-04T01:10:09.662801133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:09.668923 containerd[1462]: time="2026-03-04T01:10:09.667469455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76qbn,Uid:75938000-508f-451c-bf35-9cc1d786b69d,Namespace:calico-system,Attempt:0,} returns sandbox id \"90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac\"" Mar 4 01:10:09.674721 containerd[1462]: time="2026-03-04T01:10:09.674684927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 4 01:10:09.707386 systemd-networkd[1387]: cali7e1ab948c12: Link UP Mar 4 01:10:09.707737 systemd-networkd[1387]: cali7e1ab948c12: Gained carrier Mar 4 01:10:09.748904 systemd[1]: run-netns-cni\x2da829c21f\x2d6e95\x2df59e\x2d15e3\x2dff07a9d9248b.mount: Deactivated successfully. Mar 4 01:10:09.749539 systemd[1]: run-netns-cni\x2decb1f2bb\x2d5d4f\x2d026b\x2dc7cb\x2db182721a3d69.mount: Deactivated successfully. Mar 4 01:10:09.749639 systemd[1]: run-netns-cni\x2d3e3d50c3\x2dff3c\x2d2ab1\x2d9566\x2dbc9ae80e31ce.mount: Deactivated successfully. Mar 4 01:10:09.749714 systemd[1]: run-netns-cni\x2dbe8be6c5\x2dcb99\x2d4287\x2d019d\x2d338ed0e2c1bb.mount: Deactivated successfully. Mar 4 01:10:09.749787 systemd[1]: run-netns-cni\x2da1c36997\x2deef1\x2dbb04\x2dc97f\x2dc7fae3b3592e.mount: Deactivated successfully. Mar 4 01:10:09.749863 systemd[1]: var-lib-kubelet-pods-03774ad1\x2ddd13\x2d4278\x2d9a53\x2d7bcbb871098c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2hbfr.mount: Deactivated successfully. Mar 4 01:10:09.749943 systemd[1]: var-lib-kubelet-pods-03774ad1\x2ddd13\x2d4278\x2d9a53\x2d7bcbb871098c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.296 [ERROR][4019] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.345 [INFO][4019] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0 calico-apiserver-55f64764bb- calico-system 93ede0dd-a20d-4275-9bed-8f0735634773 915 0 2026-03-04 01:09:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55f64764bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55f64764bb-9wz8h eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali7e1ab948c12 [] [] }} ContainerID="5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" Namespace="calico-system" Pod="calico-apiserver-55f64764bb-9wz8h" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-" Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.345 [INFO][4019] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" Namespace="calico-system" Pod="calico-apiserver-55f64764bb-9wz8h" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.559 [INFO][4115] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" HandleID="k8s-pod-network.5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" Workload="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.573 [INFO][4115] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" HandleID="k8s-pod-network.5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" Workload="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000113470), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-55f64764bb-9wz8h", "timestamp":"2026-03-04 01:10:09.559828405 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000199600)} Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.573 [INFO][4115] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.584 [INFO][4115] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.585 [INFO][4115] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.594 [INFO][4115] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" host="localhost" Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.627 [INFO][4115] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.648 [INFO][4115] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.654 [INFO][4115] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.663 [INFO][4115] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.663 [INFO][4115] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" host="localhost" Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.667 [INFO][4115] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7 Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.675 [INFO][4115] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" host="localhost" Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.693 [INFO][4115] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" host="localhost" Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.693 [INFO][4115] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.132/26] handle="k8s-pod-network.5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" host="localhost" Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.693 [INFO][4115] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:09.769822 containerd[1462]: 2026-03-04 01:10:09.694 [INFO][4115] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" HandleID="k8s-pod-network.5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" Workload="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" Mar 4 01:10:09.775690 containerd[1462]: 2026-03-04 01:10:09.698 [INFO][4019] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" Namespace="calico-system" Pod="calico-apiserver-55f64764bb-9wz8h" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0", GenerateName:"calico-apiserver-55f64764bb-", Namespace:"calico-system", SelfLink:"", UID:"93ede0dd-a20d-4275-9bed-8f0735634773", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f64764bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55f64764bb-9wz8h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7e1ab948c12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:09.775690 containerd[1462]: 2026-03-04 01:10:09.698 [INFO][4019] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" Namespace="calico-system" Pod="calico-apiserver-55f64764bb-9wz8h" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" Mar 4 01:10:09.775690 containerd[1462]: 2026-03-04 01:10:09.699 [INFO][4019] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e1ab948c12 ContainerID="5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" Namespace="calico-system" Pod="calico-apiserver-55f64764bb-9wz8h" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" Mar 4 01:10:09.775690 containerd[1462]: 2026-03-04 01:10:09.705 [INFO][4019] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" Namespace="calico-system" Pod="calico-apiserver-55f64764bb-9wz8h" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" Mar 4 01:10:09.775690 containerd[1462]: 2026-03-04 01:10:09.712 [INFO][4019] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" Namespace="calico-system" Pod="calico-apiserver-55f64764bb-9wz8h" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0", GenerateName:"calico-apiserver-55f64764bb-", Namespace:"calico-system", SelfLink:"", UID:"93ede0dd-a20d-4275-9bed-8f0735634773", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f64764bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7", Pod:"calico-apiserver-55f64764bb-9wz8h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7e1ab948c12", MAC:"8a:bd:4f:d1:4c:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:09.775690 containerd[1462]: 2026-03-04 01:10:09.754 [INFO][4019] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7" Namespace="calico-system" Pod="calico-apiserver-55f64764bb-9wz8h" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" Mar 4 01:10:09.774923 systemd[1]: Started cri-containerd-bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3.scope - libcontainer container bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3. Mar 4 01:10:09.782251 containerd[1462]: time="2026-03-04T01:10:09.781232161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:10:09.782645 containerd[1462]: time="2026-03-04T01:10:09.782072298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:10:09.782645 containerd[1462]: time="2026-03-04T01:10:09.782294693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:09.783150 containerd[1462]: time="2026-03-04T01:10:09.782805346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:09.802601 systemd-networkd[1387]: calibd2b6d3abcb: Link UP Mar 4 01:10:09.803814 systemd-networkd[1387]: calibd2b6d3abcb: Gained carrier Mar 4 01:10:09.829604 containerd[1462]: time="2026-03-04T01:10:09.828306856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:10:09.829604 containerd[1462]: time="2026-03-04T01:10:09.828469188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:10:09.829604 containerd[1462]: time="2026-03-04T01:10:09.828536284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:09.829604 containerd[1462]: time="2026-03-04T01:10:09.828683457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.402 [ERROR][4070] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.473 [INFO][4070] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0 calico-apiserver-55f64764bb- calico-system fbc800a6-d75a-4cc1-8f1f-76421c8e840a 917 0 2026-03-04 01:09:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55f64764bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55f64764bb-8ltws eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calibd2b6d3abcb [] [] }} ContainerID="f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" Namespace="calico-system" Pod="calico-apiserver-55f64764bb-8ltws" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f64764bb--8ltws-" Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.473 [INFO][4070] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" Namespace="calico-system" Pod="calico-apiserver-55f64764bb-8ltws" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.605 [INFO][4153] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" HandleID="k8s-pod-network.f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" Workload="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.638 [INFO][4153] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" HandleID="k8s-pod-network.f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" Workload="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039d3c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-apiserver-55f64764bb-8ltws", "timestamp":"2026-03-04 01:10:09.605617153 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000ecdc0)} Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.638 [INFO][4153] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.694 [INFO][4153] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.694 [INFO][4153] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.709 [INFO][4153] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" host="localhost" Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.726 [INFO][4153] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.742 [INFO][4153] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.750 [INFO][4153] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.754 [INFO][4153] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.754 [INFO][4153] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" host="localhost" Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.759 [INFO][4153] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076 Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.770 [INFO][4153] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" host="localhost" Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.780 [INFO][4153] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" host="localhost" Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.780 [INFO][4153] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" host="localhost" Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.780 [INFO][4153] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 01:10:09.836100 containerd[1462]: 2026-03-04 01:10:09.780 [INFO][4153] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" HandleID="k8s-pod-network.f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" Workload="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" Mar 4 01:10:09.836930 containerd[1462]: 2026-03-04 01:10:09.785 [INFO][4070] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" Namespace="calico-system" Pod="calico-apiserver-55f64764bb-8ltws" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0", GenerateName:"calico-apiserver-55f64764bb-", Namespace:"calico-system", SelfLink:"", UID:"fbc800a6-d75a-4cc1-8f1f-76421c8e840a", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f64764bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55f64764bb-8ltws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibd2b6d3abcb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:09.836930 containerd[1462]: 2026-03-04 01:10:09.786 [INFO][4070] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" Namespace="calico-system" Pod="calico-apiserver-55f64764bb-8ltws" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" Mar 4 01:10:09.836930 containerd[1462]: 2026-03-04 01:10:09.786 [INFO][4070] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd2b6d3abcb ContainerID="f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" Namespace="calico-system" Pod="calico-apiserver-55f64764bb-8ltws" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" Mar 4 01:10:09.836930 containerd[1462]: 2026-03-04 01:10:09.803 [INFO][4070] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" Namespace="calico-system" Pod="calico-apiserver-55f64764bb-8ltws" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" Mar 4 01:10:09.836930 containerd[1462]: 2026-03-04 01:10:09.804 [INFO][4070] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" Namespace="calico-system" Pod="calico-apiserver-55f64764bb-8ltws" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0", GenerateName:"calico-apiserver-55f64764bb-", Namespace:"calico-system", SelfLink:"", UID:"fbc800a6-d75a-4cc1-8f1f-76421c8e840a", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f64764bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076", Pod:"calico-apiserver-55f64764bb-8ltws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibd2b6d3abcb", MAC:"b2:9f:ee:e8:af:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:09.836930 containerd[1462]: 2026-03-04 01:10:09.821 [INFO][4070] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076" Namespace="calico-system" Pod="calico-apiserver-55f64764bb-8ltws" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" Mar 4 01:10:09.844455 systemd[1]: run-containerd-runc-k8s.io-307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae-runc.id9jzb.mount: Deactivated successfully. Mar 4 01:10:09.855326 systemd[1]: Started cri-containerd-307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae.scope - libcontainer container 307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae. Mar 4 01:10:09.867219 systemd-resolved[1392]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:10:09.897736 systemd[1]: Started cri-containerd-5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7.scope - libcontainer container 5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7. Mar 4 01:10:09.908090 systemd-networkd[1387]: calib7a547195c3: Link UP Mar 4 01:10:09.910931 systemd-resolved[1392]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:10:09.912215 systemd-networkd[1387]: calib7a547195c3: Gained carrier Mar 4 01:10:09.930617 containerd[1462]: time="2026-03-04T01:10:09.930279076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:10:09.930617 containerd[1462]: time="2026-03-04T01:10:09.930519714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:10:09.930617 containerd[1462]: time="2026-03-04T01:10:09.930540162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:09.931045 containerd[1462]: time="2026-03-04T01:10:09.930665016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.449 [ERROR][4088] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.523 [INFO][4088] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--kkp6j-eth0 coredns-66bc5c9577- kube-system cca7a0a8-cc4f-4da2-beb7-9f56a3aae463 912 0 2026-03-04 01:09:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-kkp6j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib7a547195c3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" Namespace="kube-system" Pod="coredns-66bc5c9577-kkp6j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kkp6j-" Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.524 [INFO][4088] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" Namespace="kube-system" Pod="coredns-66bc5c9577-kkp6j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.690 [INFO][4173] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" HandleID="k8s-pod-network.028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" Workload="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.706 [INFO][4173] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" HandleID="k8s-pod-network.028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" Workload="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000276470), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-kkp6j", "timestamp":"2026-03-04 01:10:09.690004742 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001122c0)} Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.706 [INFO][4173] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.781 [INFO][4173] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.781 [INFO][4173] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.805 [INFO][4173] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" host="localhost" Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.828 [INFO][4173] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.843 [INFO][4173] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.847 [INFO][4173] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.851 [INFO][4173] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.852 [INFO][4173] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" host="localhost" Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.856 [INFO][4173] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031 Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.868 [INFO][4173] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" host="localhost" Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.882 [INFO][4173] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" host="localhost" Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.883 [INFO][4173] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" host="localhost" Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.883 [INFO][4173] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 01:10:09.955513 containerd[1462]: 2026-03-04 01:10:09.883 [INFO][4173] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" HandleID="k8s-pod-network.028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" Workload="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" Mar 4 01:10:09.956085 containerd[1462]: 2026-03-04 01:10:09.899 [INFO][4088] cni-plugin/k8s.go 418: Populated endpoint ContainerID="028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" Namespace="kube-system" Pod="coredns-66bc5c9577-kkp6j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--kkp6j-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"cca7a0a8-cc4f-4da2-beb7-9f56a3aae463", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-kkp6j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7a547195c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:09.956085 containerd[1462]: 2026-03-04 01:10:09.899 [INFO][4088] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" Namespace="kube-system" Pod="coredns-66bc5c9577-kkp6j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" Mar 4 01:10:09.956085 containerd[1462]: 2026-03-04 01:10:09.899 [INFO][4088] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7a547195c3 ContainerID="028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" Namespace="kube-system" Pod="coredns-66bc5c9577-kkp6j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" Mar 4 01:10:09.956085 containerd[1462]: 2026-03-04 01:10:09.913 
[INFO][4088] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" Namespace="kube-system" Pod="coredns-66bc5c9577-kkp6j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" Mar 4 01:10:09.956085 containerd[1462]: 2026-03-04 01:10:09.914 [INFO][4088] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" Namespace="kube-system" Pod="coredns-66bc5c9577-kkp6j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--kkp6j-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"cca7a0a8-cc4f-4da2-beb7-9f56a3aae463", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031", Pod:"coredns-66bc5c9577-kkp6j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7a547195c3", MAC:"5a:c7:d3:27:58:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:09.956085 containerd[1462]: 2026-03-04 01:10:09.946 [INFO][4088] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031" Namespace="kube-system" Pod="coredns-66bc5c9577-kkp6j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" Mar 4 01:10:09.963890 systemd-resolved[1392]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:10:09.991097 systemd-networkd[1387]: cali24b7845d1da: Link UP Mar 4 01:10:09.993815 systemd-networkd[1387]: cali24b7845d1da: Gained carrier Mar 4 01:10:09.994991 systemd[1]: Started cri-containerd-f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076.scope - libcontainer 
container f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076. Mar 4 01:10:09.999948 containerd[1462]: time="2026-03-04T01:10:09.999606542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d656676db-z9tks,Uid:449200de-32ce-4f0d-8102-55cd4a726350,Namespace:calico-system,Attempt:1,} returns sandbox id \"bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3\"" Mar 4 01:10:10.029421 systemd-resolved[1392]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:10:10.038032 containerd[1462]: time="2026-03-04T01:10:10.037995402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-pbgz5,Uid:210ae5c1-a8ed-43d0-af95-d0b548ed6ccf,Namespace:calico-system,Attempt:1,} returns sandbox id \"307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae\"" Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.443 [ERROR][4057] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.527 [INFO][4057] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--98m57-eth0 coredns-66bc5c9577- kube-system 92b3426b-65a8-45ba-9289-43631575f549 916 0 2026-03-04 01:09:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-98m57 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali24b7845d1da [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" Namespace="kube-system" Pod="coredns-66bc5c9577-98m57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--98m57-" Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.527 [INFO][4057] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" Namespace="kube-system" Pod="coredns-66bc5c9577-98m57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--98m57-eth0" Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.707 [INFO][4171] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" HandleID="k8s-pod-network.c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" Workload="localhost-k8s-coredns--66bc5c9577--98m57-eth0" Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.732 [INFO][4171] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" HandleID="k8s-pod-network.c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" Workload="localhost-k8s-coredns--66bc5c9577--98m57-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039d140), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-98m57", "timestamp":"2026-03-04 01:10:09.707892176 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000392160)} Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.732 [INFO][4171] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.883 [INFO][4171] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.883 [INFO][4171] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.905 [INFO][4171] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" host="localhost" Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.927 [INFO][4171] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.946 [INFO][4171] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.951 [INFO][4171] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.958 [INFO][4171] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.958 [INFO][4171] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" host="localhost" Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.962 [INFO][4171] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6 Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.968 [INFO][4171] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" host="localhost" Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.981 [INFO][4171] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" host="localhost" Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.981 [INFO][4171] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" host="localhost" Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.981 [INFO][4171] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 01:10:10.046275 containerd[1462]: 2026-03-04 01:10:09.981 [INFO][4171] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" HandleID="k8s-pod-network.c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" Workload="localhost-k8s-coredns--66bc5c9577--98m57-eth0" Mar 4 01:10:10.046944 containerd[1462]: 2026-03-04 01:10:09.986 [INFO][4057] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" Namespace="kube-system" Pod="coredns-66bc5c9577-98m57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--98m57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--98m57-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"92b3426b-65a8-45ba-9289-43631575f549", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-98m57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24b7845d1da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:10.046944 containerd[1462]: 2026-03-04 01:10:09.987 [INFO][4057] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" Namespace="kube-system" Pod="coredns-66bc5c9577-98m57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--98m57-eth0" Mar 4 01:10:10.046944 containerd[1462]: 2026-03-04 01:10:09.987 [INFO][4057] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24b7845d1da ContainerID="c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" Namespace="kube-system" Pod="coredns-66bc5c9577-98m57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--98m57-eth0" Mar 4 01:10:10.046944 containerd[1462]: 2026-03-04 01:10:09.992 
[INFO][4057] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" Namespace="kube-system" Pod="coredns-66bc5c9577-98m57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--98m57-eth0" Mar 4 01:10:10.046944 containerd[1462]: 2026-03-04 01:10:09.995 [INFO][4057] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" Namespace="kube-system" Pod="coredns-66bc5c9577-98m57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--98m57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--98m57-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"92b3426b-65a8-45ba-9289-43631575f549", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6", Pod:"coredns-66bc5c9577-98m57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24b7845d1da", MAC:"9a:45:43:41:1c:e7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:10.046944 containerd[1462]: 2026-03-04 01:10:10.032 [INFO][4057] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6" Namespace="kube-system" Pod="coredns-66bc5c9577-98m57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--98m57-eth0" Mar 4 01:10:10.055031 containerd[1462]: time="2026-03-04T01:10:10.054537573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:10:10.055031 containerd[1462]: time="2026-03-04T01:10:10.054589841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:10:10.055031 containerd[1462]: time="2026-03-04T01:10:10.054600039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:10.055031 containerd[1462]: time="2026-03-04T01:10:10.054691950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:10.085581 systemd[1]: Started cri-containerd-028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031.scope - libcontainer container 028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031. Mar 4 01:10:10.086078 containerd[1462]: time="2026-03-04T01:10:10.085958652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f64764bb-9wz8h,Uid:93ede0dd-a20d-4275-9bed-8f0735634773,Namespace:calico-system,Attempt:1,} returns sandbox id \"5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7\"" Mar 4 01:10:10.111085 containerd[1462]: time="2026-03-04T01:10:10.110940904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f64764bb-8ltws,Uid:fbc800a6-d75a-4cc1-8f1f-76421c8e840a,Namespace:calico-system,Attempt:1,} returns sandbox id \"f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076\"" Mar 4 01:10:10.112775 systemd-resolved[1392]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:10:10.122002 containerd[1462]: time="2026-03-04T01:10:10.120724255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:10:10.122002 containerd[1462]: time="2026-03-04T01:10:10.120868653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:10:10.122002 containerd[1462]: time="2026-03-04T01:10:10.120963842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:10.122002 containerd[1462]: time="2026-03-04T01:10:10.121596762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:10.169609 systemd[1]: Started cri-containerd-c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6.scope - libcontainer container c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6. 
Mar 4 01:10:10.210674 systemd-resolved[1392]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:10:10.225142 containerd[1462]: time="2026-03-04T01:10:10.225035548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kkp6j,Uid:cca7a0a8-cc4f-4da2-beb7-9f56a3aae463,Namespace:kube-system,Attempt:1,} returns sandbox id \"028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031\"" Mar 4 01:10:10.251289 kubelet[2542]: E0304 01:10:10.249174 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:10.286280 containerd[1462]: time="2026-03-04T01:10:10.285695958Z" level=info msg="CreateContainer within sandbox \"028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 01:10:10.287511 systemd[1]: Created slice kubepods-besteffort-podc62267d4_9a0a_4a6e_972e_8cd3751310a6.slice - libcontainer container kubepods-besteffort-podc62267d4_9a0a_4a6e_972e_8cd3751310a6.slice. Mar 4 01:10:10.331216 containerd[1462]: time="2026-03-04T01:10:10.331160418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-98m57,Uid:92b3426b-65a8-45ba-9289-43631575f549,Namespace:kube-system,Attempt:1,} returns sandbox id \"c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6\"" Mar 4 01:10:10.335008 kubelet[2542]: E0304 01:10:10.334024 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:10.339626 containerd[1462]: time="2026-03-04T01:10:10.339344264Z" level=info msg="CreateContainer within sandbox \"028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"63eb240bd989e0af514ad3dda57759415c641c56ddf8336cd46dbb2bf566b6b6\"" Mar 4 01:10:10.341057 containerd[1462]: time="2026-03-04T01:10:10.341020648Z" level=info msg="StartContainer for \"63eb240bd989e0af514ad3dda57759415c641c56ddf8336cd46dbb2bf566b6b6\"" Mar 4 01:10:10.348469 containerd[1462]: time="2026-03-04T01:10:10.348327374Z" level=info msg="CreateContainer within sandbox \"c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 01:10:10.382850 kubelet[2542]: I0304 01:10:10.382771 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c62267d4-9a0a-4a6e-972e-8cd3751310a6-nginx-config\") pod \"whisker-86df568646-6qfmn\" (UID: \"c62267d4-9a0a-4a6e-972e-8cd3751310a6\") " pod="calico-system/whisker-86df568646-6qfmn" Mar 4 01:10:10.383000 kubelet[2542]: I0304 01:10:10.382856 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c62267d4-9a0a-4a6e-972e-8cd3751310a6-whisker-ca-bundle\") pod \"whisker-86df568646-6qfmn\" (UID: \"c62267d4-9a0a-4a6e-972e-8cd3751310a6\") " pod="calico-system/whisker-86df568646-6qfmn" Mar 4 01:10:10.383000 kubelet[2542]: I0304 01:10:10.382896 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/c62267d4-9a0a-4a6e-972e-8cd3751310a6-whisker-backend-key-pair\") pod \"whisker-86df568646-6qfmn\" (UID: \"c62267d4-9a0a-4a6e-972e-8cd3751310a6\") " pod="calico-system/whisker-86df568646-6qfmn" Mar 4 01:10:10.383000 kubelet[2542]: I0304 01:10:10.382913 2542 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvck6\" (UniqueName: \"kubernetes.io/projected/c62267d4-9a0a-4a6e-972e-8cd3751310a6-kube-api-access-gvck6\") pod \"whisker-86df568646-6qfmn\" (UID: \"c62267d4-9a0a-4a6e-972e-8cd3751310a6\") " pod="calico-system/whisker-86df568646-6qfmn" Mar 4 01:10:10.393982 systemd[1]: Started cri-containerd-63eb240bd989e0af514ad3dda57759415c641c56ddf8336cd46dbb2bf566b6b6.scope - libcontainer container 63eb240bd989e0af514ad3dda57759415c641c56ddf8336cd46dbb2bf566b6b6. Mar 4 01:10:10.397033 containerd[1462]: time="2026-03-04T01:10:10.396954485Z" level=info msg="CreateContainer within sandbox \"c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd531c2053b402ce06a265942bd7b705664155519a6eda06177d66906d88923e\"" Mar 4 01:10:10.404451 containerd[1462]: time="2026-03-04T01:10:10.404341862Z" level=info msg="StartContainer for \"dd531c2053b402ce06a265942bd7b705664155519a6eda06177d66906d88923e\"" Mar 4 01:10:10.459621 containerd[1462]: time="2026-03-04T01:10:10.459430965Z" level=info msg="StartContainer for \"63eb240bd989e0af514ad3dda57759415c641c56ddf8336cd46dbb2bf566b6b6\" returns successfully" Mar 4 01:10:10.478743 systemd[1]: Started cri-containerd-dd531c2053b402ce06a265942bd7b705664155519a6eda06177d66906d88923e.scope - libcontainer container dd531c2053b402ce06a265942bd7b705664155519a6eda06177d66906d88923e. 
Mar 4 01:10:10.518742 systemd-networkd[1387]: cali287e41f94c4: Gained IPv6LL Mar 4 01:10:10.541309 containerd[1462]: time="2026-03-04T01:10:10.541091991Z" level=info msg="StartContainer for \"dd531c2053b402ce06a265942bd7b705664155519a6eda06177d66906d88923e\" returns successfully" Mar 4 01:10:10.615197 containerd[1462]: time="2026-03-04T01:10:10.615103260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86df568646-6qfmn,Uid:c62267d4-9a0a-4a6e-972e-8cd3751310a6,Namespace:calico-system,Attempt:0,}" Mar 4 01:10:10.710259 systemd-networkd[1387]: cali98671d973b3: Gained IPv6LL Mar 4 01:10:10.804618 containerd[1462]: time="2026-03-04T01:10:10.803796419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:10.805265 containerd[1462]: time="2026-03-04T01:10:10.804717846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 4 01:10:10.813530 containerd[1462]: time="2026-03-04T01:10:10.813127779Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:10.823044 containerd[1462]: time="2026-03-04T01:10:10.822678782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:10.826567 containerd[1462]: time="2026-03-04T01:10:10.826526417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.151703934s" Mar 4 01:10:10.826769 containerd[1462]: time="2026-03-04T01:10:10.826748391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 4 01:10:10.829654 containerd[1462]: time="2026-03-04T01:10:10.829600804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 4 01:10:10.837235 containerd[1462]: time="2026-03-04T01:10:10.837183045Z" level=info msg="CreateContainer within sandbox \"90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 4 01:10:10.844832 systemd-networkd[1387]: cali3082c9567b7: Link UP Mar 4 01:10:10.845270 systemd-networkd[1387]: cali3082c9567b7: Gained carrier Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.692 [ERROR][4573] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.711 [INFO][4573] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--86df568646--6qfmn-eth0 whisker-86df568646- calico-system c62267d4-9a0a-4a6e-972e-8cd3751310a6 968 0 2026-03-04 01:10:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:86df568646 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-86df568646-6qfmn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3082c9567b7 [] [] }} ContainerID="0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" Namespace="calico-system" Pod="whisker-86df568646-6qfmn" WorkloadEndpoint="localhost-k8s-whisker--86df568646--6qfmn-" Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.711 [INFO][4573] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" Namespace="calico-system" Pod="whisker-86df568646-6qfmn" WorkloadEndpoint="localhost-k8s-whisker--86df568646--6qfmn-eth0" Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.762 [INFO][4594] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" HandleID="k8s-pod-network.0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" Workload="localhost-k8s-whisker--86df568646--6qfmn-eth0" Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.779 [INFO][4594] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" HandleID="k8s-pod-network.0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" Workload="localhost-k8s-whisker--86df568646--6qfmn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-86df568646-6qfmn", "timestamp":"2026-03-04 01:10:10.762815263 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000238160)} Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.779 [INFO][4594] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.779 [INFO][4594] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.779 [INFO][4594] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.786 [INFO][4594] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" host="localhost" Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.794 [INFO][4594] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.803 [INFO][4594] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.807 [INFO][4594] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.815 [INFO][4594] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.815 [INFO][4594] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" host="localhost" Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.818 [INFO][4594] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.824 [INFO][4594] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" host="localhost" Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.837 [INFO][4594] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" host="localhost" Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.837 [INFO][4594] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" host="localhost" Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.837 [INFO][4594] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 01:10:10.862831 containerd[1462]: 2026-03-04 01:10:10.837 [INFO][4594] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" HandleID="k8s-pod-network.0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" Workload="localhost-k8s-whisker--86df568646--6qfmn-eth0" Mar 4 01:10:10.863689 containerd[1462]: 2026-03-04 01:10:10.841 [INFO][4573] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" Namespace="calico-system" Pod="whisker-86df568646-6qfmn" WorkloadEndpoint="localhost-k8s-whisker--86df568646--6qfmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86df568646--6qfmn-eth0", GenerateName:"whisker-86df568646-", Namespace:"calico-system", SelfLink:"", UID:"c62267d4-9a0a-4a6e-972e-8cd3751310a6", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 10, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86df568646", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-86df568646-6qfmn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3082c9567b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:10.863689 containerd[1462]: 2026-03-04 01:10:10.841 [INFO][4573] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" Namespace="calico-system" Pod="whisker-86df568646-6qfmn" WorkloadEndpoint="localhost-k8s-whisker--86df568646--6qfmn-eth0" Mar 4 01:10:10.863689 containerd[1462]: 2026-03-04 01:10:10.841 [INFO][4573] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3082c9567b7 ContainerID="0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" Namespace="calico-system" Pod="whisker-86df568646-6qfmn" WorkloadEndpoint="localhost-k8s-whisker--86df568646--6qfmn-eth0" Mar 4 01:10:10.863689 containerd[1462]: 2026-03-04 01:10:10.844 [INFO][4573] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" Namespace="calico-system" Pod="whisker-86df568646-6qfmn" WorkloadEndpoint="localhost-k8s-whisker--86df568646--6qfmn-eth0" Mar 4 01:10:10.863689 containerd[1462]: 2026-03-04 01:10:10.846 [INFO][4573] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" Namespace="calico-system" Pod="whisker-86df568646-6qfmn" WorkloadEndpoint="localhost-k8s-whisker--86df568646--6qfmn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86df568646--6qfmn-eth0", GenerateName:"whisker-86df568646-", Namespace:"calico-system", SelfLink:"", UID:"c62267d4-9a0a-4a6e-972e-8cd3751310a6", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 10, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86df568646", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e", Pod:"whisker-86df568646-6qfmn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3082c9567b7", MAC:"ee:9d:02:ee:e5:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:10.863689 containerd[1462]: 2026-03-04 01:10:10.856 [INFO][4573] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e" Namespace="calico-system" Pod="whisker-86df568646-6qfmn" WorkloadEndpoint="localhost-k8s-whisker--86df568646--6qfmn-eth0" Mar 4 01:10:10.874174 containerd[1462]: time="2026-03-04T01:10:10.873997731Z" level=info msg="CreateContainer within sandbox \"90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2303697fd86ff9c40a11cbad0923a67b8f99efe5bfc02d34076349800bb78996\"" Mar 4 01:10:10.874921 containerd[1462]: time="2026-03-04T01:10:10.874856173Z" level=info msg="StartContainer for \"2303697fd86ff9c40a11cbad0923a67b8f99efe5bfc02d34076349800bb78996\"" Mar 4 01:10:10.904751 containerd[1462]: time="2026-03-04T01:10:10.904206861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:10:10.904751 containerd[1462]: time="2026-03-04T01:10:10.904254400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:10:10.904751 containerd[1462]: time="2026-03-04T01:10:10.904264640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:10.904751 containerd[1462]: time="2026-03-04T01:10:10.904352293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:10.927675 systemd[1]: Started cri-containerd-2303697fd86ff9c40a11cbad0923a67b8f99efe5bfc02d34076349800bb78996.scope - libcontainer container 2303697fd86ff9c40a11cbad0923a67b8f99efe5bfc02d34076349800bb78996. 
Mar 4 01:10:10.936551 systemd[1]: Started cri-containerd-0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e.scope - libcontainer container 0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e. Mar 4 01:10:10.957165 systemd-resolved[1392]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:10:10.965754 systemd-networkd[1387]: cali7e1ab948c12: Gained IPv6LL Mar 4 01:10:10.997574 containerd[1462]: time="2026-03-04T01:10:10.997534326Z" level=info msg="StartContainer for \"2303697fd86ff9c40a11cbad0923a67b8f99efe5bfc02d34076349800bb78996\" returns successfully" Mar 4 01:10:11.010147 containerd[1462]: time="2026-03-04T01:10:11.010052162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86df568646-6qfmn,Uid:c62267d4-9a0a-4a6e-972e-8cd3751310a6,Namespace:calico-system,Attempt:0,} returns sandbox id \"0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e\"" Mar 4 01:10:11.135606 kubelet[2542]: E0304 01:10:11.135519 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:11.140073 kubelet[2542]: E0304 01:10:11.139959 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:11.149528 kubelet[2542]: I0304 01:10:11.149325 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-98m57" podStartSLOduration=35.149310452 podStartE2EDuration="35.149310452s" podCreationTimestamp="2026-03-04 01:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:10:11.149140695 +0000 UTC m=+39.843193082" watchObservedRunningTime="2026-03-04 01:10:11.149310452 +0000 UTC m=+39.843362829" Mar 4 01:10:11.285684 systemd-networkd[1387]: calib7a547195c3: Gained IPv6LL Mar 4 01:10:11.414752 systemd-networkd[1387]: calie6270715da0: Gained IPv6LL Mar 4 01:10:11.635031 kubelet[2542]: I0304 01:10:11.634681 2542 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03774ad1-dd13-4278-9a53-7bcbb871098c" path="/var/lib/kubelet/pods/03774ad1-dd13-4278-9a53-7bcbb871098c/volumes" Mar 4 01:10:11.734719 systemd-networkd[1387]: cali24b7845d1da: Gained IPv6LL Mar 4 01:10:11.798713 systemd-networkd[1387]: calibd2b6d3abcb: Gained IPv6LL Mar 4 01:10:12.155591 kubelet[2542]: E0304 01:10:12.155300 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:12.155923 kubelet[2542]: E0304 01:10:12.155795 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:12.517294 containerd[1462]: time="2026-03-04T01:10:12.517068520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:12.518566 containerd[1462]: time="2026-03-04T01:10:12.518425050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 4 01:10:12.520425 containerd[1462]: time="2026-03-04T01:10:12.520149386Z" level=info 
msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:12.523108 containerd[1462]: time="2026-03-04T01:10:12.523036670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:12.524149 containerd[1462]: time="2026-03-04T01:10:12.524079780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 1.694438249s" Mar 4 01:10:12.524149 containerd[1462]: time="2026-03-04T01:10:12.524143248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 4 01:10:12.531445 containerd[1462]: time="2026-03-04T01:10:12.529350746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 4 01:10:12.539809 containerd[1462]: time="2026-03-04T01:10:12.539646769Z" level=info msg="CreateContainer within sandbox \"bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 4 01:10:12.562516 containerd[1462]: time="2026-03-04T01:10:12.562297415Z" level=info msg="CreateContainer within sandbox \"bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1822645e871c923aeed686b971c1e370194433621b298f9af35650e5d8fe16af\"" Mar 4 01:10:12.564799 containerd[1462]: time="2026-03-04T01:10:12.563114887Z" level=info msg="StartContainer for \"1822645e871c923aeed686b971c1e370194433621b298f9af35650e5d8fe16af\"" Mar 4 01:10:12.617742 systemd[1]: Started cri-containerd-1822645e871c923aeed686b971c1e370194433621b298f9af35650e5d8fe16af.scope - libcontainer container 1822645e871c923aeed686b971c1e370194433621b298f9af35650e5d8fe16af. 
Mar 4 01:10:12.720138 containerd[1462]: time="2026-03-04T01:10:12.716747308Z" level=info msg="StartContainer for \"1822645e871c923aeed686b971c1e370194433621b298f9af35650e5d8fe16af\" returns successfully" Mar 4 01:10:12.757775 systemd-networkd[1387]: cali3082c9567b7: Gained IPv6LL Mar 4 01:10:13.173305 kubelet[2542]: E0304 01:10:13.173227 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:13.180943 kubelet[2542]: E0304 01:10:13.174597 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:13.201540 kubelet[2542]: I0304 01:10:13.199034 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kkp6j" podStartSLOduration=37.199014623 podStartE2EDuration="37.199014623s" podCreationTimestamp="2026-03-04 01:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:10:11.182531955 +0000 UTC m=+39.876584332" watchObservedRunningTime="2026-03-04 01:10:13.199014623 +0000 UTC m=+41.893067000" Mar 4 01:10:13.751434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359780008.mount: Deactivated successfully. Mar 4 01:10:14.237268 kubelet[2542]: I0304 01:10:14.237079 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d656676db-z9tks" podStartSLOduration=21.72403529 podStartE2EDuration="24.237063697s" podCreationTimestamp="2026-03-04 01:09:50 +0000 UTC" firstStartedPulling="2026-03-04 01:10:10.012571872 +0000 UTC m=+38.706624250" lastFinishedPulling="2026-03-04 01:10:12.52560028 +0000 UTC m=+41.219652657" observedRunningTime="2026-03-04 01:10:13.199703298 +0000 UTC m=+41.893755675" watchObservedRunningTime="2026-03-04 01:10:14.237063697 +0000 UTC m=+42.931116074" Mar 4 01:10:14.238841 containerd[1462]: time="2026-03-04T01:10:14.238651958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:14.240226 containerd[1462]: time="2026-03-04T01:10:14.240009821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 4 01:10:14.241128 containerd[1462]: time="2026-03-04T01:10:14.241076920Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:14.246356 containerd[1462]: time="2026-03-04T01:10:14.244950215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:14.246356 containerd[1462]: time="2026-03-04T01:10:14.245934420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.716490981s" Mar 4 01:10:14.246356 containerd[1462]: time="2026-03-04T01:10:14.245960819Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 4 01:10:14.247316 containerd[1462]: time="2026-03-04T01:10:14.247261341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 01:10:14.253132 containerd[1462]: time="2026-03-04T01:10:14.253100312Z" level=info msg="CreateContainer within sandbox \"307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 4 01:10:14.274736 containerd[1462]: time="2026-03-04T01:10:14.274652834Z" level=info msg="CreateContainer within sandbox \"307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"634803a704305ca9d231ed7fa658f36e899bf9c7f59a8fae92c4634af16f137f\"" Mar 4 01:10:14.275165 containerd[1462]: time="2026-03-04T01:10:14.275106538Z" level=info msg="StartContainer for \"634803a704305ca9d231ed7fa658f36e899bf9c7f59a8fae92c4634af16f137f\"" Mar 4 01:10:14.325705 systemd[1]: Started cri-containerd-634803a704305ca9d231ed7fa658f36e899bf9c7f59a8fae92c4634af16f137f.scope - libcontainer container 634803a704305ca9d231ed7fa658f36e899bf9c7f59a8fae92c4634af16f137f. Mar 4 01:10:14.384933 containerd[1462]: time="2026-03-04T01:10:14.384824896Z" level=info msg="StartContainer for \"634803a704305ca9d231ed7fa658f36e899bf9c7f59a8fae92c4634af16f137f\" returns successfully" Mar 4 01:10:16.014927 containerd[1462]: time="2026-03-04T01:10:16.014825657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:16.016445 containerd[1462]: time="2026-03-04T01:10:16.016326917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 4 01:10:16.018668 containerd[1462]: time="2026-03-04T01:10:16.017254407Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:16.020916 containerd[1462]: time="2026-03-04T01:10:16.020856622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:16.021957 containerd[1462]: time="2026-03-04T01:10:16.021872100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.774541289s" Mar 4 01:10:16.021957 containerd[1462]: time="2026-03-04T01:10:16.021936110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 4 01:10:16.025849 containerd[1462]: time="2026-03-04T01:10:16.025773416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 01:10:16.031021 containerd[1462]: time="2026-03-04T01:10:16.030959715Z" level=info msg="CreateContainer within sandbox \"5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 01:10:16.058709 containerd[1462]: time="2026-03-04T01:10:16.058583860Z" level=info msg="CreateContainer within sandbox \"5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"034b489076859dca9cf2847be04030883d83cc5be91aa9346211f458934a5e02\"" Mar 4 01:10:16.063234 containerd[1462]: time="2026-03-04T01:10:16.063145094Z" level=info msg="StartContainer for \"034b489076859dca9cf2847be04030883d83cc5be91aa9346211f458934a5e02\"" Mar 4 01:10:16.147655 systemd[1]: Started cri-containerd-034b489076859dca9cf2847be04030883d83cc5be91aa9346211f458934a5e02.scope - libcontainer container 034b489076859dca9cf2847be04030883d83cc5be91aa9346211f458934a5e02. Mar 4 01:10:16.214581 containerd[1462]: time="2026-03-04T01:10:16.214051691Z" level=info msg="StartContainer for \"034b489076859dca9cf2847be04030883d83cc5be91aa9346211f458934a5e02\" returns successfully" Mar 4 01:10:16.238465 containerd[1462]: time="2026-03-04T01:10:16.234862968Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:16.238465 containerd[1462]: time="2026-03-04T01:10:16.237618147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 4 01:10:16.239281 containerd[1462]: time="2026-03-04T01:10:16.239242026Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 213.402978ms" Mar 4 01:10:16.239281 containerd[1462]: time="2026-03-04T01:10:16.239269898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 4 01:10:16.242089 containerd[1462]: time="2026-03-04T01:10:16.241766535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 4 01:10:16.246162 containerd[1462]: time="2026-03-04T01:10:16.246035978Z" level=info msg="CreateContainer within sandbox \"f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 01:10:16.269748 containerd[1462]: time="2026-03-04T01:10:16.269647775Z" level=info msg="CreateContainer within sandbox \"f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"42c3095eee9bf66800ec7e763148cc33b2abe5c4b964fa63ff8fd1ff302e5509\"" Mar 4 01:10:16.271237 containerd[1462]: time="2026-03-04T01:10:16.271186437Z" level=info msg="StartContainer for \"42c3095eee9bf66800ec7e763148cc33b2abe5c4b964fa63ff8fd1ff302e5509\"" Mar 4 01:10:16.338177 systemd[1]: Started cri-containerd-42c3095eee9bf66800ec7e763148cc33b2abe5c4b964fa63ff8fd1ff302e5509.scope - libcontainer container 42c3095eee9bf66800ec7e763148cc33b2abe5c4b964fa63ff8fd1ff302e5509. 
Mar 4 01:10:16.371726 kubelet[2542]: I0304 01:10:16.371584 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-pbgz5" podStartSLOduration=23.167765688 podStartE2EDuration="27.371561119s" podCreationTimestamp="2026-03-04 01:09:49 +0000 UTC" firstStartedPulling="2026-03-04 01:10:10.043289796 +0000 UTC m=+38.737342173" lastFinishedPulling="2026-03-04 01:10:14.247085227 +0000 UTC m=+42.941137604" observedRunningTime="2026-03-04 01:10:15.19904697 +0000 UTC m=+43.893099357" watchObservedRunningTime="2026-03-04 01:10:16.371561119 +0000 UTC m=+45.065613506" Mar 4 01:10:16.426288 containerd[1462]: time="2026-03-04T01:10:16.426230788Z" level=info msg="StartContainer for \"42c3095eee9bf66800ec7e763148cc33b2abe5c4b964fa63ff8fd1ff302e5509\" returns successfully" Mar 4 01:10:17.049661 systemd[1]: run-containerd-runc-k8s.io-634803a704305ca9d231ed7fa658f36e899bf9c7f59a8fae92c4634af16f137f-runc.WIBGKL.mount: Deactivated successfully. Mar 4 01:10:17.202912 systemd[1]: Started sshd@7-10.0.0.73:22-10.0.0.1:58394.service - OpenSSH per-connection server daemon (10.0.0.1:58394). Mar 4 01:10:17.257435 kubelet[2542]: I0304 01:10:17.254580 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-55f64764bb-9wz8h" podStartSLOduration=22.320086963 podStartE2EDuration="28.25456268s" podCreationTimestamp="2026-03-04 01:09:49 +0000 UTC" firstStartedPulling="2026-03-04 01:10:10.08884232 +0000 UTC m=+38.782894697" lastFinishedPulling="2026-03-04 01:10:16.023318037 +0000 UTC m=+44.717370414" observedRunningTime="2026-03-04 01:10:17.237868754 +0000 UTC m=+45.931921141" watchObservedRunningTime="2026-03-04 01:10:17.25456268 +0000 UTC m=+45.948615057" Mar 4 01:10:17.329045 sshd[5100]: Accepted publickey for core from 10.0.0.1 port 58394 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:10:17.331139 sshd[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:17.342565 systemd-logind[1444]: New session 8 of user core. Mar 4 01:10:17.350617 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 4 01:10:17.649571 sshd[5100]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:17.656658 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Mar 4 01:10:17.657282 systemd[1]: sshd@7-10.0.0.73:22-10.0.0.1:58394.service: Deactivated successfully. Mar 4 01:10:17.661185 systemd[1]: session-8.scope: Deactivated successfully. Mar 4 01:10:17.663980 systemd-logind[1444]: Removed session 8. 
Mar 4 01:10:18.224754 kubelet[2542]: I0304 01:10:18.224667 2542 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:10:18.224754 kubelet[2542]: I0304 01:10:18.224759 2542 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:10:18.623663 containerd[1462]: time="2026-03-04T01:10:18.623545178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:18.624702 containerd[1462]: time="2026-03-04T01:10:18.624135380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 4 01:10:18.625870 containerd[1462]: time="2026-03-04T01:10:18.625791969Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:18.628609 containerd[1462]: time="2026-03-04T01:10:18.628549695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:18.629645 containerd[1462]: time="2026-03-04T01:10:18.629564799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.387771593s" Mar 4 01:10:18.629645 containerd[1462]: time="2026-03-04T01:10:18.629618670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 4 01:10:18.631088 containerd[1462]: time="2026-03-04T01:10:18.631025542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 4 01:10:18.636440 containerd[1462]: time="2026-03-04T01:10:18.636319555Z" level=info msg="CreateContainer within sandbox \"90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 4 01:10:18.659964 containerd[1462]: time="2026-03-04T01:10:18.659835449Z" level=info msg="CreateContainer within sandbox \"90346de225ff6d5635e86bdd9d3d89fc508174d28fbe50b45a61abb30a30d3ac\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4ced12ee899108fa05cb8aefdd3f825c1cb887c9bf649cb99f3816b1b6ef256e\"" Mar 4 01:10:18.662756 containerd[1462]: time="2026-03-04T01:10:18.662508841Z" level=info msg="StartContainer for \"4ced12ee899108fa05cb8aefdd3f825c1cb887c9bf649cb99f3816b1b6ef256e\"" Mar 4 01:10:18.739749 systemd[1]: Started cri-containerd-4ced12ee899108fa05cb8aefdd3f825c1cb887c9bf649cb99f3816b1b6ef256e.scope - libcontainer container 4ced12ee899108fa05cb8aefdd3f825c1cb887c9bf649cb99f3816b1b6ef256e. 
Mar 4 01:10:18.797093 containerd[1462]: time="2026-03-04T01:10:18.796967477Z" level=info msg="StartContainer for \"4ced12ee899108fa05cb8aefdd3f825c1cb887c9bf649cb99f3816b1b6ef256e\" returns successfully" Mar 4 01:10:18.951770 kubelet[2542]: I0304 01:10:18.951228 2542 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 4 01:10:18.952665 kubelet[2542]: I0304 01:10:18.952613 2542 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 4 01:10:19.251141 kubelet[2542]: I0304 01:10:19.250541 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-76qbn" podStartSLOduration=20.293713426 podStartE2EDuration="29.250524164s" podCreationTimestamp="2026-03-04 01:09:50 +0000 UTC" firstStartedPulling="2026-03-04 01:10:09.673929082 +0000 UTC m=+38.367981460" lastFinishedPulling="2026-03-04 01:10:18.630739821 +0000 UTC m=+47.324792198" observedRunningTime="2026-03-04 01:10:19.248180312 +0000 UTC m=+47.942232699" watchObservedRunningTime="2026-03-04 01:10:19.250524164 +0000 UTC m=+47.944576551" Mar 4 01:10:19.251141 kubelet[2542]: I0304 01:10:19.250646 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-55f64764bb-8ltws" podStartSLOduration=24.124991738 podStartE2EDuration="30.250642074s" podCreationTimestamp="2026-03-04 01:09:49 +0000 UTC" firstStartedPulling="2026-03-04 01:10:10.114892807 +0000 UTC m=+38.808945184" lastFinishedPulling="2026-03-04 01:10:16.240543143 +0000 UTC m=+44.934595520" observedRunningTime="2026-03-04 01:10:17.258090972 +0000 UTC m=+45.952143359" watchObservedRunningTime="2026-03-04 01:10:19.250642074 +0000 UTC m=+47.944694461" Mar 4 01:10:19.276922 containerd[1462]: time="2026-03-04T01:10:19.276824162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:19.278107 containerd[1462]: time="2026-03-04T01:10:19.278060037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 4 01:10:19.279904 containerd[1462]: time="2026-03-04T01:10:19.279829148Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:19.283313 containerd[1462]: time="2026-03-04T01:10:19.283281770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:19.284269 containerd[1462]: time="2026-03-04T01:10:19.284172555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 653.097472ms" Mar 4 01:10:19.284269 containerd[1462]: time="2026-03-04T01:10:19.284202682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 4 01:10:19.293446 
containerd[1462]: time="2026-03-04T01:10:19.293279359Z" level=info msg="CreateContainer within sandbox \"0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 4 01:10:19.310272 containerd[1462]: time="2026-03-04T01:10:19.310128674Z" level=info msg="CreateContainer within sandbox \"0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2ebed4960716941184051319ee69ba9f426b27c46d0c569b110ed5a33ad8f9ec\"" Mar 4 01:10:19.311737 containerd[1462]: time="2026-03-04T01:10:19.311625889Z" level=info msg="StartContainer for \"2ebed4960716941184051319ee69ba9f426b27c46d0c569b110ed5a33ad8f9ec\"" Mar 4 01:10:19.371793 systemd[1]: Started cri-containerd-2ebed4960716941184051319ee69ba9f426b27c46d0c569b110ed5a33ad8f9ec.scope - libcontainer container 2ebed4960716941184051319ee69ba9f426b27c46d0c569b110ed5a33ad8f9ec. Mar 4 01:10:19.445236 containerd[1462]: time="2026-03-04T01:10:19.445116317Z" level=info msg="StartContainer for \"2ebed4960716941184051319ee69ba9f426b27c46d0c569b110ed5a33ad8f9ec\" returns successfully" Mar 4 01:10:19.448865 containerd[1462]: time="2026-03-04T01:10:19.448311147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 4 01:10:19.654314 kubelet[2542]: I0304 01:10:19.653007 2542 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:10:19.654314 kubelet[2542]: E0304 01:10:19.653352 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:20.237252 kubelet[2542]: E0304 01:10:20.237210 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:21.253081 kernel: calico-node[5292]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 4 01:10:21.519936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2504620741.mount: Deactivated successfully. 
Mar 4 01:10:21.724246 containerd[1462]: time="2026-03-04T01:10:21.723441768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:21.728338 containerd[1462]: time="2026-03-04T01:10:21.728282829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 4 01:10:21.731053 containerd[1462]: time="2026-03-04T01:10:21.730969070Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:21.734805 containerd[1462]: time="2026-03-04T01:10:21.734772988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:21.745282 containerd[1462]: time="2026-03-04T01:10:21.743715204Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.295306955s" Mar 4 01:10:21.745282 containerd[1462]: time="2026-03-04T01:10:21.743756580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 4 01:10:21.846632 containerd[1462]: time="2026-03-04T01:10:21.845867399Z" level=info msg="CreateContainer within sandbox \"0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 4 01:10:21.874570 containerd[1462]: time="2026-03-04T01:10:21.874462131Z" level=info msg="CreateContainer within sandbox \"0e4deb833537d43815a030f57d0d9113426cafa5acfa4b7b010036a0e7f2833e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5b339831503b3aab6cf7b7f305d52fdaac8340c1f434c5e53f2c5251bdd37441\"" Mar 4 01:10:21.876216 containerd[1462]: time="2026-03-04T01:10:21.876161256Z" level=info msg="StartContainer for \"5b339831503b3aab6cf7b7f305d52fdaac8340c1f434c5e53f2c5251bdd37441\"" Mar 4 01:10:21.900585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2176521544.mount: Deactivated successfully. Mar 4 01:10:22.075612 systemd[1]: Started cri-containerd-5b339831503b3aab6cf7b7f305d52fdaac8340c1f434c5e53f2c5251bdd37441.scope - libcontainer container 5b339831503b3aab6cf7b7f305d52fdaac8340c1f434c5e53f2c5251bdd37441. Mar 4 01:10:22.217993 containerd[1462]: time="2026-03-04T01:10:22.217219625Z" level=info msg="StartContainer for \"5b339831503b3aab6cf7b7f305d52fdaac8340c1f434c5e53f2c5251bdd37441\" returns successfully" Mar 4 01:10:22.491861 systemd-networkd[1387]: vxlan.calico: Link UP Mar 4 01:10:22.491874 systemd-networkd[1387]: vxlan.calico: Gained carrier Mar 4 01:10:22.669807 systemd[1]: Started sshd@8-10.0.0.73:22-10.0.0.1:40590.service - OpenSSH per-connection server daemon (10.0.0.1:40590). 
Mar 4 01:10:22.754538 sshd[5422]: Accepted publickey for core from 10.0.0.1 port 40590 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:10:22.756827 sshd[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:22.765953 systemd-logind[1444]: New session 9 of user core. Mar 4 01:10:22.774767 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 4 01:10:23.133822 sshd[5422]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:23.138301 systemd[1]: sshd@8-10.0.0.73:22-10.0.0.1:40590.service: Deactivated successfully. Mar 4 01:10:23.140932 systemd[1]: session-9.scope: Deactivated successfully. Mar 4 01:10:23.141946 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Mar 4 01:10:23.143846 systemd-logind[1444]: Removed session 9. Mar 4 01:10:24.341835 systemd-networkd[1387]: vxlan.calico: Gained IPv6LL Mar 4 01:10:28.248201 systemd[1]: Started sshd@9-10.0.0.73:22-10.0.0.1:40592.service - OpenSSH per-connection server daemon (10.0.0.1:40592). Mar 4 01:10:28.311487 sshd[5510]: Accepted publickey for core from 10.0.0.1 port 40592 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:10:28.313605 sshd[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:28.320249 systemd-logind[1444]: New session 10 of user core. Mar 4 01:10:28.338703 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 4 01:10:29.067028 sshd[5510]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:29.240838 systemd[1]: sshd@9-10.0.0.73:22-10.0.0.1:40592.service: Deactivated successfully. Mar 4 01:10:29.272475 systemd[1]: session-10.scope: Deactivated successfully. Mar 4 01:10:33.624626 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. Mar 4 01:10:33.629821 systemd-logind[1444]: Removed session 10. Mar 4 01:10:33.873850 kubelet[2542]: E0304 01:10:33.873732 2542 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.227s" Mar 4 01:10:33.876550 containerd[1462]: time="2026-03-04T01:10:33.876333241Z" level=info msg="StopPodSandbox for \"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\"" Mar 4 01:10:34.091165 systemd[1]: Started sshd@10-10.0.0.73:22-10.0.0.1:40552.service - OpenSSH per-connection server daemon (10.0.0.1:40552). Mar 4 01:10:34.163954 sshd[5561]: Accepted publickey for core from 10.0.0.1 port 40552 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:10:34.187294 sshd[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:34.198795 systemd-logind[1444]: New session 11 of user core. Mar 4 01:10:34.207557 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 4 01:10:34.407024 sshd[5561]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:34.409029 containerd[1462]: 2026-03-04 01:10:34.233 [WARNING][5552] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0", GenerateName:"calico-kube-controllers-5d656676db-", Namespace:"calico-system", SelfLink:"", UID:"449200de-32ce-4f0d-8102-55cd4a726350", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d656676db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3", Pod:"calico-kube-controllers-5d656676db-z9tks", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie6270715da0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:34.409029 containerd[1462]: 2026-03-04 01:10:34.235 [INFO][5552] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Mar 4 01:10:34.409029 containerd[1462]: 2026-03-04 01:10:34.236 [INFO][5552] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" iface="eth0" netns="" Mar 4 01:10:34.409029 containerd[1462]: 2026-03-04 01:10:34.236 [INFO][5552] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Mar 4 01:10:34.409029 containerd[1462]: 2026-03-04 01:10:34.236 [INFO][5552] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Mar 4 01:10:34.409029 containerd[1462]: 2026-03-04 01:10:34.379 [INFO][5575] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" HandleID="k8s-pod-network.4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Workload="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" Mar 4 01:10:34.409029 containerd[1462]: 2026-03-04 01:10:34.380 [INFO][5575] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:34.409029 containerd[1462]: 2026-03-04 01:10:34.380 [INFO][5575] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:34.409029 containerd[1462]: 2026-03-04 01:10:34.398 [WARNING][5575] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" HandleID="k8s-pod-network.4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Workload="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" Mar 4 01:10:34.409029 containerd[1462]: 2026-03-04 01:10:34.398 [INFO][5575] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" HandleID="k8s-pod-network.4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Workload="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" Mar 4 01:10:34.409029 containerd[1462]: 2026-03-04 01:10:34.400 [INFO][5575] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:34.409029 containerd[1462]: 2026-03-04 01:10:34.403 [INFO][5552] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Mar 4 01:10:34.417852 systemd[1]: sshd@10-10.0.0.73:22-10.0.0.1:40552.service: Deactivated successfully. Mar 4 01:10:34.419950 systemd[1]: session-11.scope: Deactivated successfully. Mar 4 01:10:34.421844 containerd[1462]: time="2026-03-04T01:10:34.421763731Z" level=info msg="TearDown network for sandbox \"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\" successfully" Mar 4 01:10:34.422472 containerd[1462]: time="2026-03-04T01:10:34.421841306Z" level=info msg="StopPodSandbox for \"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\" returns successfully" Mar 4 01:10:34.426704 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Mar 4 01:10:34.433807 systemd[1]: Started sshd@11-10.0.0.73:22-10.0.0.1:40556.service - OpenSSH per-connection server daemon (10.0.0.1:40556). Mar 4 01:10:34.437852 systemd-logind[1444]: Removed session 11. Mar 4 01:10:34.475179 containerd[1462]: time="2026-03-04T01:10:34.475098150Z" level=info msg="RemovePodSandbox for \"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\"" Mar 4 01:10:34.477613 containerd[1462]: time="2026-03-04T01:10:34.477521988Z" level=info msg="Forcibly stopping sandbox \"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\"" Mar 4 01:10:34.480029 sshd[5595]: Accepted publickey for core from 10.0.0.1 port 40556 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:10:34.482535 sshd[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:34.495750 systemd-logind[1444]: New session 12 of user core. Mar 4 01:10:34.501654 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 4 01:10:34.609470 containerd[1462]: 2026-03-04 01:10:34.531 [WARNING][5608] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0", GenerateName:"calico-kube-controllers-5d656676db-", Namespace:"calico-system", SelfLink:"", UID:"449200de-32ce-4f0d-8102-55cd4a726350", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d656676db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc5e24b6bb813f201e01e367f5ff94a07ee4718de474c2cd8b11321d842e75d3", Pod:"calico-kube-controllers-5d656676db-z9tks", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie6270715da0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:34.609470 containerd[1462]: 2026-03-04 01:10:34.531 [INFO][5608] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Mar 4 01:10:34.609470 containerd[1462]: 2026-03-04 01:10:34.531 [INFO][5608] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" iface="eth0" netns="" Mar 4 01:10:34.609470 containerd[1462]: 2026-03-04 01:10:34.531 [INFO][5608] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Mar 4 01:10:34.609470 containerd[1462]: 2026-03-04 01:10:34.531 [INFO][5608] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Mar 4 01:10:34.609470 containerd[1462]: 2026-03-04 01:10:34.568 [INFO][5618] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" HandleID="k8s-pod-network.4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Workload="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" Mar 4 01:10:34.609470 containerd[1462]: 2026-03-04 01:10:34.568 [INFO][5618] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:34.609470 containerd[1462]: 2026-03-04 01:10:34.568 [INFO][5618] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:34.609470 containerd[1462]: 2026-03-04 01:10:34.593 [WARNING][5618] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" HandleID="k8s-pod-network.4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Workload="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" Mar 4 01:10:34.609470 containerd[1462]: 2026-03-04 01:10:34.595 [INFO][5618] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" HandleID="k8s-pod-network.4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Workload="localhost-k8s-calico--kube--controllers--5d656676db--z9tks-eth0" Mar 4 01:10:34.609470 containerd[1462]: 2026-03-04 01:10:34.599 [INFO][5618] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:34.609470 containerd[1462]: 2026-03-04 01:10:34.604 [INFO][5608] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc" Mar 4 01:10:34.613266 containerd[1462]: time="2026-03-04T01:10:34.611158695Z" level=info msg="TearDown network for sandbox \"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\" successfully" Mar 4 01:10:34.644277 containerd[1462]: time="2026-03-04T01:10:34.644182510Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 4 01:10:34.644553 containerd[1462]: time="2026-03-04T01:10:34.644485428Z" level=info msg="RemovePodSandbox \"4b32a037cee7ca2c27a60ddc2f5694c5633f28a858b18fd585ff9002d10e71fc\" returns successfully" Mar 4 01:10:34.652186 containerd[1462]: time="2026-03-04T01:10:34.652130737Z" level=info msg="StopPodSandbox for \"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\"" Mar 4 01:10:34.728222 sshd[5595]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:34.743487 systemd[1]: sshd@11-10.0.0.73:22-10.0.0.1:40556.service: Deactivated successfully. Mar 4 01:10:34.752978 systemd[1]: session-12.scope: Deactivated successfully. Mar 4 01:10:34.758253 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Mar 4 01:10:34.771862 systemd[1]: Started sshd@12-10.0.0.73:22-10.0.0.1:40558.service - OpenSSH per-connection server daemon (10.0.0.1:40558). Mar 4 01:10:34.778317 systemd-logind[1444]: Removed session 12. Mar 4 01:10:34.806174 containerd[1462]: 2026-03-04 01:10:34.735 [WARNING][5641] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--kkp6j-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"cca7a0a8-cc4f-4da2-beb7-9f56a3aae463", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031", Pod:"coredns-66bc5c9577-kkp6j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7a547195c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:34.806174 containerd[1462]: 2026-03-04 01:10:34.735 [INFO][5641] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Mar 4 01:10:34.806174 containerd[1462]: 2026-03-04 01:10:34.735 [INFO][5641] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" iface="eth0" netns="" Mar 4 01:10:34.806174 containerd[1462]: 2026-03-04 01:10:34.735 [INFO][5641] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Mar 4 01:10:34.806174 containerd[1462]: 2026-03-04 01:10:34.736 [INFO][5641] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Mar 4 01:10:34.806174 containerd[1462]: 2026-03-04 01:10:34.781 [INFO][5650] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" HandleID="k8s-pod-network.0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Workload="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" Mar 4 01:10:34.806174 containerd[1462]: 2026-03-04 01:10:34.783 [INFO][5650] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:34.806174 containerd[1462]: 2026-03-04 01:10:34.783 [INFO][5650] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:34.806174 containerd[1462]: 2026-03-04 01:10:34.797 [WARNING][5650] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" HandleID="k8s-pod-network.0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Workload="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" Mar 4 01:10:34.806174 containerd[1462]: 2026-03-04 01:10:34.797 [INFO][5650] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" HandleID="k8s-pod-network.0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Workload="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" Mar 4 01:10:34.806174 containerd[1462]: 2026-03-04 01:10:34.800 [INFO][5650] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:34.806174 containerd[1462]: 2026-03-04 01:10:34.803 [INFO][5641] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Mar 4 01:10:34.806880 containerd[1462]: time="2026-03-04T01:10:34.806158077Z" level=info msg="TearDown network for sandbox \"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\" successfully" Mar 4 01:10:34.806880 containerd[1462]: time="2026-03-04T01:10:34.806231494Z" level=info msg="StopPodSandbox for \"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\" returns successfully" Mar 4 01:10:34.807507 containerd[1462]: time="2026-03-04T01:10:34.807478559Z" level=info msg="RemovePodSandbox for \"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\"" Mar 4 01:10:34.807869 containerd[1462]: time="2026-03-04T01:10:34.807848271Z" level=info msg="Forcibly stopping sandbox \"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\"" Mar 4 01:10:34.817984 sshd[5658]: Accepted publickey for core from 10.0.0.1 port 40558 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:10:34.820227 sshd[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:34.827805 systemd-logind[1444]: New session 13 of user core. Mar 4 01:10:34.837639 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 4 01:10:34.946711 containerd[1462]: 2026-03-04 01:10:34.871 [WARNING][5673] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--kkp6j-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"cca7a0a8-cc4f-4da2-beb7-9f56a3aae463", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"028b8f9ba968b6fbcc69f40aa8d5aafa7ae1eb046720aa3938589956162eb031", Pod:"coredns-66bc5c9577-kkp6j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7a547195c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:34.946711 containerd[1462]: 2026-03-04 01:10:34.871 [INFO][5673] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Mar 4 01:10:34.946711 containerd[1462]: 2026-03-04 01:10:34.871 [INFO][5673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" iface="eth0" netns="" Mar 4 01:10:34.946711 containerd[1462]: 2026-03-04 01:10:34.872 [INFO][5673] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Mar 4 01:10:34.946711 containerd[1462]: 2026-03-04 01:10:34.872 [INFO][5673] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Mar 4 01:10:34.946711 containerd[1462]: 2026-03-04 01:10:34.917 [INFO][5682] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" HandleID="k8s-pod-network.0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Workload="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" Mar 4 01:10:34.946711 containerd[1462]: 2026-03-04 01:10:34.918 [INFO][5682] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:34.946711 containerd[1462]: 2026-03-04 01:10:34.918 [INFO][5682] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:34.946711 containerd[1462]: 2026-03-04 01:10:34.935 [WARNING][5682] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" HandleID="k8s-pod-network.0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Workload="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" Mar 4 01:10:34.946711 containerd[1462]: 2026-03-04 01:10:34.935 [INFO][5682] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" HandleID="k8s-pod-network.0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Workload="localhost-k8s-coredns--66bc5c9577--kkp6j-eth0" Mar 4 01:10:34.946711 containerd[1462]: 2026-03-04 01:10:34.938 [INFO][5682] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:34.946711 containerd[1462]: 2026-03-04 01:10:34.942 [INFO][5673] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be" Mar 4 01:10:34.949430 containerd[1462]: time="2026-03-04T01:10:34.948471212Z" level=info msg="TearDown network for sandbox \"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\" successfully" Mar 4 01:10:34.959093 containerd[1462]: time="2026-03-04T01:10:34.958679883Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 4 01:10:34.959093 containerd[1462]: time="2026-03-04T01:10:34.958866613Z" level=info msg="RemovePodSandbox \"0e17544d19fbc9a7fa45b149643a14929e6fd18343399b7c47a20b6109a122be\" returns successfully" Mar 4 01:10:34.959691 containerd[1462]: time="2026-03-04T01:10:34.959654196Z" level=info msg="StopPodSandbox for \"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\"" Mar 4 01:10:35.005180 sshd[5658]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:35.010730 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Mar 4 01:10:35.011550 systemd[1]: sshd@12-10.0.0.73:22-10.0.0.1:40558.service: Deactivated successfully. Mar 4 01:10:35.015298 systemd[1]: session-13.scope: Deactivated successfully. 
Mar 4 01:10:35.018995 systemd-logind[1444]: Removed session 13. Mar 4 01:10:35.083515 containerd[1462]: 2026-03-04 01:10:35.034 [WARNING][5708] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--98m57-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"92b3426b-65a8-45ba-9289-43631575f549", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6", Pod:"coredns-66bc5c9577-98m57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24b7845d1da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:35.083515 containerd[1462]: 2026-03-04 01:10:35.035 [INFO][5708] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Mar 4 01:10:35.083515 containerd[1462]: 2026-03-04 01:10:35.035 [INFO][5708] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" iface="eth0" netns="" Mar 4 01:10:35.083515 containerd[1462]: 2026-03-04 01:10:35.035 [INFO][5708] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Mar 4 01:10:35.083515 containerd[1462]: 2026-03-04 01:10:35.035 [INFO][5708] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Mar 4 01:10:35.083515 containerd[1462]: 2026-03-04 01:10:35.066 [INFO][5719] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" HandleID="k8s-pod-network.4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Workload="localhost-k8s-coredns--66bc5c9577--98m57-eth0" Mar 4 01:10:35.083515 containerd[1462]: 2026-03-04 01:10:35.066 [INFO][5719] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:35.083515 containerd[1462]: 2026-03-04 01:10:35.066 [INFO][5719] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:35.083515 containerd[1462]: 2026-03-04 01:10:35.074 [WARNING][5719] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" HandleID="k8s-pod-network.4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Workload="localhost-k8s-coredns--66bc5c9577--98m57-eth0" Mar 4 01:10:35.083515 containerd[1462]: 2026-03-04 01:10:35.074 [INFO][5719] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" HandleID="k8s-pod-network.4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Workload="localhost-k8s-coredns--66bc5c9577--98m57-eth0" Mar 4 01:10:35.083515 containerd[1462]: 2026-03-04 01:10:35.076 [INFO][5719] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:35.083515 containerd[1462]: 2026-03-04 01:10:35.080 [INFO][5708] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Mar 4 01:10:35.084102 containerd[1462]: time="2026-03-04T01:10:35.083655298Z" level=info msg="TearDown network for sandbox \"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\" successfully" Mar 4 01:10:35.084102 containerd[1462]: time="2026-03-04T01:10:35.083688651Z" level=info msg="StopPodSandbox for \"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\" returns successfully" Mar 4 01:10:35.084900 containerd[1462]: time="2026-03-04T01:10:35.084835195Z" level=info msg="RemovePodSandbox for \"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\"" Mar 4 01:10:35.085611 containerd[1462]: time="2026-03-04T01:10:35.085063302Z" level=info msg="Forcibly stopping sandbox \"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\"" Mar 4 01:10:35.259905 containerd[1462]: 2026-03-04 01:10:35.158 [WARNING][5736] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--98m57-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"92b3426b-65a8-45ba-9289-43631575f549", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c751b72c54459e09f650251a07ab8e7de14819d7dcef638a717be0589a62e1b6", Pod:"coredns-66bc5c9577-98m57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24b7845d1da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:35.259905 containerd[1462]: 2026-03-04 01:10:35.159 [INFO][5736] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Mar 4 01:10:35.259905 containerd[1462]: 2026-03-04 01:10:35.159 [INFO][5736] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" iface="eth0" netns="" Mar 4 01:10:35.259905 containerd[1462]: 2026-03-04 01:10:35.159 [INFO][5736] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Mar 4 01:10:35.259905 containerd[1462]: 2026-03-04 01:10:35.159 [INFO][5736] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Mar 4 01:10:35.259905 containerd[1462]: 2026-03-04 01:10:35.195 [INFO][5745] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" HandleID="k8s-pod-network.4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Workload="localhost-k8s-coredns--66bc5c9577--98m57-eth0" Mar 4 01:10:35.259905 containerd[1462]: 2026-03-04 01:10:35.196 [INFO][5745] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:35.259905 containerd[1462]: 2026-03-04 01:10:35.196 [INFO][5745] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:35.259905 containerd[1462]: 2026-03-04 01:10:35.220 [WARNING][5745] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" HandleID="k8s-pod-network.4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Workload="localhost-k8s-coredns--66bc5c9577--98m57-eth0" Mar 4 01:10:35.259905 containerd[1462]: 2026-03-04 01:10:35.220 [INFO][5745] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" HandleID="k8s-pod-network.4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Workload="localhost-k8s-coredns--66bc5c9577--98m57-eth0" Mar 4 01:10:35.259905 containerd[1462]: 2026-03-04 01:10:35.239 [INFO][5745] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:35.259905 containerd[1462]: 2026-03-04 01:10:35.257 [INFO][5736] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82" Mar 4 01:10:35.259905 containerd[1462]: time="2026-03-04T01:10:35.259793255Z" level=info msg="TearDown network for sandbox \"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\" successfully" Mar 4 01:10:35.269301 containerd[1462]: time="2026-03-04T01:10:35.269005973Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 4 01:10:35.269301 containerd[1462]: time="2026-03-04T01:10:35.269254548Z" level=info msg="RemovePodSandbox \"4db64728f1145f7f4181260749cfa6567c63dc42edad9a0af7c9117668f2ad82\" returns successfully" Mar 4 01:10:35.270721 containerd[1462]: time="2026-03-04T01:10:35.270274578Z" level=info msg="StopPodSandbox for \"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\"" Mar 4 01:10:35.685130 containerd[1462]: 2026-03-04 01:10:35.476 [WARNING][5762] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0", GenerateName:"calico-apiserver-55f64764bb-", Namespace:"calico-system", SelfLink:"", UID:"fbc800a6-d75a-4cc1-8f1f-76421c8e840a", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f64764bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076", Pod:"calico-apiserver-55f64764bb-8ltws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibd2b6d3abcb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:35.685130 containerd[1462]: 2026-03-04 01:10:35.479 [INFO][5762] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Mar 4 01:10:35.685130 containerd[1462]: 2026-03-04 01:10:35.480 [INFO][5762] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" iface="eth0" netns="" Mar 4 01:10:35.685130 containerd[1462]: 2026-03-04 01:10:35.480 [INFO][5762] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Mar 4 01:10:35.685130 containerd[1462]: 2026-03-04 01:10:35.480 [INFO][5762] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Mar 4 01:10:35.685130 containerd[1462]: 2026-03-04 01:10:35.567 [INFO][5771] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" HandleID="k8s-pod-network.82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Workload="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" Mar 4 01:10:35.685130 containerd[1462]: 2026-03-04 01:10:35.567 [INFO][5771] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:35.685130 containerd[1462]: 2026-03-04 01:10:35.567 [INFO][5771] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:35.685130 containerd[1462]: 2026-03-04 01:10:35.621 [WARNING][5771] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" HandleID="k8s-pod-network.82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Workload="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" Mar 4 01:10:35.685130 containerd[1462]: 2026-03-04 01:10:35.621 [INFO][5771] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" HandleID="k8s-pod-network.82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Workload="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" Mar 4 01:10:35.685130 containerd[1462]: 2026-03-04 01:10:35.673 [INFO][5771] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:35.685130 containerd[1462]: 2026-03-04 01:10:35.681 [INFO][5762] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Mar 4 01:10:35.686522 containerd[1462]: time="2026-03-04T01:10:35.684994909Z" level=info msg="TearDown network for sandbox \"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\" successfully" Mar 4 01:10:35.686522 containerd[1462]: time="2026-03-04T01:10:35.685969132Z" level=info msg="StopPodSandbox for \"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\" returns successfully" Mar 4 01:10:35.686841 containerd[1462]: time="2026-03-04T01:10:35.686771820Z" level=info msg="RemovePodSandbox for \"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\"" Mar 4 01:10:35.686841 containerd[1462]: time="2026-03-04T01:10:35.686809260Z" level=info msg="Forcibly stopping sandbox \"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\"" Mar 4 01:10:36.003561 containerd[1462]: 2026-03-04 01:10:35.848 [WARNING][5787] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0", GenerateName:"calico-apiserver-55f64764bb-", Namespace:"calico-system", SelfLink:"", UID:"fbc800a6-d75a-4cc1-8f1f-76421c8e840a", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f64764bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f3b3d8c9fb5ed7ac7e22f5b47580488d124f7a9b4f47e10a8e7d2317ecca1076", Pod:"calico-apiserver-55f64764bb-8ltws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibd2b6d3abcb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:36.003561 containerd[1462]: 2026-03-04 01:10:35.850 [INFO][5787] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Mar 4 01:10:36.003561 containerd[1462]: 2026-03-04 01:10:35.851 [INFO][5787] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" iface="eth0" netns="" Mar 4 01:10:36.003561 containerd[1462]: 2026-03-04 01:10:35.851 [INFO][5787] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Mar 4 01:10:36.003561 containerd[1462]: 2026-03-04 01:10:35.852 [INFO][5787] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Mar 4 01:10:36.003561 containerd[1462]: 2026-03-04 01:10:35.895 [INFO][5796] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" HandleID="k8s-pod-network.82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Workload="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" Mar 4 01:10:36.003561 containerd[1462]: 2026-03-04 01:10:35.895 [INFO][5796] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:36.003561 containerd[1462]: 2026-03-04 01:10:35.895 [INFO][5796] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:36.003561 containerd[1462]: 2026-03-04 01:10:35.946 [WARNING][5796] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" HandleID="k8s-pod-network.82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Workload="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" Mar 4 01:10:36.003561 containerd[1462]: 2026-03-04 01:10:35.946 [INFO][5796] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" HandleID="k8s-pod-network.82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Workload="localhost-k8s-calico--apiserver--55f64764bb--8ltws-eth0" Mar 4 01:10:36.003561 containerd[1462]: 2026-03-04 01:10:35.997 [INFO][5796] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:36.003561 containerd[1462]: 2026-03-04 01:10:36.000 [INFO][5787] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a" Mar 4 01:10:36.003561 containerd[1462]: time="2026-03-04T01:10:36.003429950Z" level=info msg="TearDown network for sandbox \"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\" successfully" Mar 4 01:10:36.010459 containerd[1462]: time="2026-03-04T01:10:36.010322040Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 4 01:10:36.010762 containerd[1462]: time="2026-03-04T01:10:36.010555627Z" level=info msg="RemovePodSandbox \"82b9f48816076fa8ab34cc8d40de1aeb39989e63c57b9050c47631d7fe8d7f9a\" returns successfully" Mar 4 01:10:36.012030 containerd[1462]: time="2026-03-04T01:10:36.011667696Z" level=info msg="StopPodSandbox for \"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\"" Mar 4 01:10:36.178420 containerd[1462]: 2026-03-04 01:10:36.113 [WARNING][5813] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"210ae5c1-a8ed-43d0-af95-d0b548ed6ccf", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae", Pod:"goldmane-cccfbd5cf-pbgz5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali98671d973b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:36.178420 containerd[1462]: 2026-03-04 01:10:36.114 [INFO][5813] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Mar 4 01:10:36.178420 containerd[1462]: 2026-03-04 01:10:36.114 [INFO][5813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" iface="eth0" netns="" Mar 4 01:10:36.178420 containerd[1462]: 2026-03-04 01:10:36.114 [INFO][5813] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Mar 4 01:10:36.178420 containerd[1462]: 2026-03-04 01:10:36.114 [INFO][5813] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Mar 4 01:10:36.178420 containerd[1462]: 2026-03-04 01:10:36.157 [INFO][5821] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" HandleID="k8s-pod-network.c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Workload="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" Mar 4 01:10:36.178420 containerd[1462]: 2026-03-04 01:10:36.157 [INFO][5821] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:36.178420 containerd[1462]: 2026-03-04 01:10:36.157 [INFO][5821] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:36.178420 containerd[1462]: 2026-03-04 01:10:36.166 [WARNING][5821] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" HandleID="k8s-pod-network.c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Workload="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" Mar 4 01:10:36.178420 containerd[1462]: 2026-03-04 01:10:36.166 [INFO][5821] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" HandleID="k8s-pod-network.c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Workload="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" Mar 4 01:10:36.178420 containerd[1462]: 2026-03-04 01:10:36.171 [INFO][5821] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:36.178420 containerd[1462]: 2026-03-04 01:10:36.175 [INFO][5813] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Mar 4 01:10:36.179088 containerd[1462]: time="2026-03-04T01:10:36.178492554Z" level=info msg="TearDown network for sandbox \"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\" successfully" Mar 4 01:10:36.179088 containerd[1462]: time="2026-03-04T01:10:36.178530775Z" level=info msg="StopPodSandbox for \"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\" returns successfully" Mar 4 01:10:36.180116 containerd[1462]: time="2026-03-04T01:10:36.179729780Z" level=info msg="RemovePodSandbox for \"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\"" Mar 4 01:10:36.180116 containerd[1462]: time="2026-03-04T01:10:36.179774813Z" level=info msg="Forcibly stopping sandbox \"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\"" Mar 4 01:10:36.378402 containerd[1462]: 2026-03-04 01:10:36.260 [WARNING][5840] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"210ae5c1-a8ed-43d0-af95-d0b548ed6ccf", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"307d13fabcb8913b1b3afd3bae1c78bf9083511a4aa1984e0a676d89871e4dae", Pod:"goldmane-cccfbd5cf-pbgz5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali98671d973b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:36.378402 containerd[1462]: 2026-03-04 01:10:36.260 [INFO][5840] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Mar 4 01:10:36.378402 containerd[1462]: 2026-03-04 01:10:36.260 [INFO][5840] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" iface="eth0" netns="" Mar 4 01:10:36.378402 containerd[1462]: 2026-03-04 01:10:36.260 [INFO][5840] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Mar 4 01:10:36.378402 containerd[1462]: 2026-03-04 01:10:36.260 [INFO][5840] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Mar 4 01:10:36.378402 containerd[1462]: 2026-03-04 01:10:36.354 [INFO][5849] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" HandleID="k8s-pod-network.c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Workload="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" Mar 4 01:10:36.378402 containerd[1462]: 2026-03-04 01:10:36.354 [INFO][5849] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:36.378402 containerd[1462]: 2026-03-04 01:10:36.354 [INFO][5849] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:36.378402 containerd[1462]: 2026-03-04 01:10:36.368 [WARNING][5849] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" HandleID="k8s-pod-network.c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Workload="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" Mar 4 01:10:36.378402 containerd[1462]: 2026-03-04 01:10:36.368 [INFO][5849] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" HandleID="k8s-pod-network.c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Workload="localhost-k8s-goldmane--cccfbd5cf--pbgz5-eth0" Mar 4 01:10:36.378402 containerd[1462]: 2026-03-04 01:10:36.371 [INFO][5849] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:36.378402 containerd[1462]: 2026-03-04 01:10:36.375 [INFO][5840] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6" Mar 4 01:10:36.378945 containerd[1462]: time="2026-03-04T01:10:36.378483880Z" level=info msg="TearDown network for sandbox \"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\" successfully" Mar 4 01:10:36.384711 containerd[1462]: time="2026-03-04T01:10:36.384540428Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 4 01:10:36.384882 containerd[1462]: time="2026-03-04T01:10:36.384770779Z" level=info msg="RemovePodSandbox \"c628578458dd7854cb2df5d05a21ca416d79e49a8d27553b5f6eb29026f13ef6\" returns successfully" Mar 4 01:10:36.385805 containerd[1462]: time="2026-03-04T01:10:36.385572219Z" level=info msg="StopPodSandbox for \"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\"" Mar 4 01:10:36.487407 containerd[1462]: 2026-03-04 01:10:36.436 [WARNING][5866] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0", GenerateName:"calico-apiserver-55f64764bb-", Namespace:"calico-system", SelfLink:"", UID:"93ede0dd-a20d-4275-9bed-8f0735634773", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f64764bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7", Pod:"calico-apiserver-55f64764bb-9wz8h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7e1ab948c12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:36.487407 containerd[1462]: 2026-03-04 01:10:36.436 [INFO][5866] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Mar 4 01:10:36.487407 containerd[1462]: 2026-03-04 01:10:36.436 [INFO][5866] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" iface="eth0" netns="" Mar 4 01:10:36.487407 containerd[1462]: 2026-03-04 01:10:36.436 [INFO][5866] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Mar 4 01:10:36.487407 containerd[1462]: 2026-03-04 01:10:36.436 [INFO][5866] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Mar 4 01:10:36.487407 containerd[1462]: 2026-03-04 01:10:36.472 [INFO][5875] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" HandleID="k8s-pod-network.e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Workload="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" Mar 4 01:10:36.487407 containerd[1462]: 2026-03-04 01:10:36.472 [INFO][5875] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:36.487407 containerd[1462]: 2026-03-04 01:10:36.472 [INFO][5875] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:36.487407 containerd[1462]: 2026-03-04 01:10:36.480 [WARNING][5875] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" HandleID="k8s-pod-network.e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Workload="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" Mar 4 01:10:36.487407 containerd[1462]: 2026-03-04 01:10:36.480 [INFO][5875] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" HandleID="k8s-pod-network.e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Workload="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" Mar 4 01:10:36.487407 containerd[1462]: 2026-03-04 01:10:36.482 [INFO][5875] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:36.487407 containerd[1462]: 2026-03-04 01:10:36.484 [INFO][5866] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Mar 4 01:10:36.488187 containerd[1462]: time="2026-03-04T01:10:36.487469192Z" level=info msg="TearDown network for sandbox \"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\" successfully" Mar 4 01:10:36.488187 containerd[1462]: time="2026-03-04T01:10:36.487494749Z" level=info msg="StopPodSandbox for \"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\" returns successfully" Mar 4 01:10:36.488966 containerd[1462]: time="2026-03-04T01:10:36.488502140Z" level=info msg="RemovePodSandbox for \"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\"" Mar 4 01:10:36.488966 containerd[1462]: time="2026-03-04T01:10:36.488531625Z" level=info msg="Forcibly stopping sandbox \"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\"" Mar 4 01:10:36.582484 containerd[1462]: 2026-03-04 01:10:36.537 [WARNING][5893] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0", GenerateName:"calico-apiserver-55f64764bb-", Namespace:"calico-system", SelfLink:"", UID:"93ede0dd-a20d-4275-9bed-8f0735634773", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 9, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f64764bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5300c0c80fe5f9afada314e1828d1b98957001c7772c1ef879975b041d70c2f7", Pod:"calico-apiserver-55f64764bb-9wz8h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7e1ab948c12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:10:36.582484 containerd[1462]: 2026-03-04 01:10:36.537 [INFO][5893] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Mar 4 01:10:36.582484 containerd[1462]: 2026-03-04 01:10:36.537 [INFO][5893] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" iface="eth0" netns="" Mar 4 01:10:36.582484 containerd[1462]: 2026-03-04 01:10:36.537 [INFO][5893] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Mar 4 01:10:36.582484 containerd[1462]: 2026-03-04 01:10:36.537 [INFO][5893] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Mar 4 01:10:36.582484 containerd[1462]: 2026-03-04 01:10:36.565 [INFO][5901] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" HandleID="k8s-pod-network.e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Workload="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" Mar 4 01:10:36.582484 containerd[1462]: 2026-03-04 01:10:36.566 [INFO][5901] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:36.582484 containerd[1462]: 2026-03-04 01:10:36.566 [INFO][5901] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:36.582484 containerd[1462]: 2026-03-04 01:10:36.574 [WARNING][5901] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" HandleID="k8s-pod-network.e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Workload="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" Mar 4 01:10:36.582484 containerd[1462]: 2026-03-04 01:10:36.574 [INFO][5901] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" HandleID="k8s-pod-network.e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Workload="localhost-k8s-calico--apiserver--55f64764bb--9wz8h-eth0" Mar 4 01:10:36.582484 containerd[1462]: 2026-03-04 01:10:36.576 [INFO][5901] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:36.582484 containerd[1462]: 2026-03-04 01:10:36.579 [INFO][5893] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57" Mar 4 01:10:36.583752 containerd[1462]: time="2026-03-04T01:10:36.583216322Z" level=info msg="TearDown network for sandbox \"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\" successfully" Mar 4 01:10:36.588735 containerd[1462]: time="2026-03-04T01:10:36.588539658Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 4 01:10:36.588735 containerd[1462]: time="2026-03-04T01:10:36.588692485Z" level=info msg="RemovePodSandbox \"e0a5788797c9c09e55b3f001b0c203daf592528daeb42754f0cacdff08e09b57\" returns successfully" Mar 4 01:10:36.589704 containerd[1462]: time="2026-03-04T01:10:36.589509574Z" level=info msg="StopPodSandbox for \"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\"" Mar 4 01:10:36.707965 containerd[1462]: 2026-03-04 01:10:36.645 [WARNING][5919] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" WorkloadEndpoint="localhost-k8s-whisker--67c44bcbf7--rr89v-eth0" Mar 4 01:10:36.707965 containerd[1462]: 2026-03-04 01:10:36.645 [INFO][5919] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Mar 4 01:10:36.707965 containerd[1462]: 2026-03-04 01:10:36.645 [INFO][5919] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" iface="eth0" netns="" Mar 4 01:10:36.707965 containerd[1462]: 2026-03-04 01:10:36.645 [INFO][5919] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Mar 4 01:10:36.707965 containerd[1462]: 2026-03-04 01:10:36.645 [INFO][5919] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Mar 4 01:10:36.707965 containerd[1462]: 2026-03-04 01:10:36.689 [INFO][5928] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" HandleID="k8s-pod-network.fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Workload="localhost-k8s-whisker--67c44bcbf7--rr89v-eth0" Mar 4 01:10:36.707965 containerd[1462]: 2026-03-04 01:10:36.689 [INFO][5928] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:36.707965 containerd[1462]: 2026-03-04 01:10:36.689 [INFO][5928] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:36.707965 containerd[1462]: 2026-03-04 01:10:36.699 [WARNING][5928] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" HandleID="k8s-pod-network.fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Workload="localhost-k8s-whisker--67c44bcbf7--rr89v-eth0" Mar 4 01:10:36.707965 containerd[1462]: 2026-03-04 01:10:36.699 [INFO][5928] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" HandleID="k8s-pod-network.fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Workload="localhost-k8s-whisker--67c44bcbf7--rr89v-eth0" Mar 4 01:10:36.707965 containerd[1462]: 2026-03-04 01:10:36.701 [INFO][5928] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:36.707965 containerd[1462]: 2026-03-04 01:10:36.704 [INFO][5919] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Mar 4 01:10:36.707965 containerd[1462]: time="2026-03-04T01:10:36.707834920Z" level=info msg="TearDown network for sandbox \"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\" successfully" Mar 4 01:10:36.707965 containerd[1462]: time="2026-03-04T01:10:36.707861650Z" level=info msg="StopPodSandbox for \"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\" returns successfully" Mar 4 01:10:36.709268 containerd[1462]: time="2026-03-04T01:10:36.709172137Z" level=info msg="RemovePodSandbox for \"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\"" Mar 4 01:10:36.709268 containerd[1462]: time="2026-03-04T01:10:36.709261374Z" level=info msg="Forcibly stopping sandbox \"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\"" Mar 4 01:10:36.828298 containerd[1462]: 2026-03-04 01:10:36.766 [WARNING][5945] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" WorkloadEndpoint="localhost-k8s-whisker--67c44bcbf7--rr89v-eth0" Mar 4 01:10:36.828298 containerd[1462]: 2026-03-04 01:10:36.767 [INFO][5945] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Mar 4 01:10:36.828298 containerd[1462]: 2026-03-04 01:10:36.767 [INFO][5945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" iface="eth0" netns="" Mar 4 01:10:36.828298 containerd[1462]: 2026-03-04 01:10:36.767 [INFO][5945] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Mar 4 01:10:36.828298 containerd[1462]: 2026-03-04 01:10:36.767 [INFO][5945] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Mar 4 01:10:36.828298 containerd[1462]: 2026-03-04 01:10:36.810 [INFO][5953] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" HandleID="k8s-pod-network.fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Workload="localhost-k8s-whisker--67c44bcbf7--rr89v-eth0" Mar 4 01:10:36.828298 containerd[1462]: 2026-03-04 01:10:36.810 [INFO][5953] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:10:36.828298 containerd[1462]: 2026-03-04 01:10:36.810 [INFO][5953] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:10:36.828298 containerd[1462]: 2026-03-04 01:10:36.819 [WARNING][5953] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" HandleID="k8s-pod-network.fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Workload="localhost-k8s-whisker--67c44bcbf7--rr89v-eth0" Mar 4 01:10:36.828298 containerd[1462]: 2026-03-04 01:10:36.819 [INFO][5953] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" HandleID="k8s-pod-network.fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Workload="localhost-k8s-whisker--67c44bcbf7--rr89v-eth0" Mar 4 01:10:36.828298 containerd[1462]: 2026-03-04 01:10:36.821 [INFO][5953] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:10:36.828298 containerd[1462]: 2026-03-04 01:10:36.824 [INFO][5945] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d" Mar 4 01:10:36.828857 containerd[1462]: time="2026-03-04T01:10:36.828337992Z" level=info msg="TearDown network for sandbox \"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\" successfully" Mar 4 01:10:36.835418 containerd[1462]: time="2026-03-04T01:10:36.835178520Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 4 01:10:36.835644 containerd[1462]: time="2026-03-04T01:10:36.835436553Z" level=info msg="RemovePodSandbox \"fc054c7f6fa290cdb455bcde9786af7b47ad466d58b8fab7588d2582e6e3b37d\" returns successfully" Mar 4 01:10:38.580634 kubelet[2542]: I0304 01:10:38.580550 2542 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:10:38.620059 kubelet[2542]: I0304 01:10:38.617661 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-86df568646-6qfmn" podStartSLOduration=17.883508935000002 podStartE2EDuration="28.617545737s" podCreationTimestamp="2026-03-04 01:10:10 +0000 UTC" firstStartedPulling="2026-03-04 01:10:11.011777209 +0000 UTC m=+39.705829586" lastFinishedPulling="2026-03-04 01:10:21.745814011 +0000 UTC m=+50.439866388" observedRunningTime="2026-03-04 01:10:22.320915767 +0000 UTC m=+51.014968154" watchObservedRunningTime="2026-03-04 01:10:38.617545737 +0000 UTC m=+67.311598134" Mar 4 01:10:39.629510 kubelet[2542]: E0304 01:10:39.629454 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:40.017520 systemd[1]: Started sshd@13-10.0.0.73:22-10.0.0.1:40560.service - OpenSSH per-connection server daemon (10.0.0.1:40560). Mar 4 01:10:40.060901 sshd[5988]: Accepted publickey for core from 10.0.0.1 port 40560 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:10:40.063251 sshd[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:40.070041 systemd-logind[1444]: New session 14 of user core. Mar 4 01:10:40.076579 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 4 01:10:40.198049 sshd[5988]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:40.203492 systemd[1]: sshd@13-10.0.0.73:22-10.0.0.1:40560.service: Deactivated successfully. Mar 4 01:10:40.205791 systemd[1]: session-14.scope: Deactivated successfully. 
Mar 4 01:10:40.206978 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Mar 4 01:10:40.208287 systemd-logind[1444]: Removed session 14. Mar 4 01:10:45.224002 systemd[1]: Started sshd@14-10.0.0.73:22-10.0.0.1:49918.service - OpenSSH per-connection server daemon (10.0.0.1:49918). Mar 4 01:10:45.270957 sshd[6080]: Accepted publickey for core from 10.0.0.1 port 49918 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:10:45.273566 sshd[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:45.281988 systemd-logind[1444]: New session 15 of user core. Mar 4 01:10:45.287688 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 4 01:10:45.435071 sshd[6080]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:45.449113 systemd[1]: sshd@14-10.0.0.73:22-10.0.0.1:49918.service: Deactivated successfully. Mar 4 01:10:45.451828 systemd[1]: session-15.scope: Deactivated successfully. Mar 4 01:10:45.453834 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Mar 4 01:10:45.460429 systemd[1]: Started sshd@15-10.0.0.73:22-10.0.0.1:49930.service - OpenSSH per-connection server daemon (10.0.0.1:49930). Mar 4 01:10:45.461777 systemd-logind[1444]: Removed session 15. Mar 4 01:10:45.517115 sshd[6094]: Accepted publickey for core from 10.0.0.1 port 49930 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:10:45.519126 sshd[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:45.525943 systemd-logind[1444]: New session 16 of user core. Mar 4 01:10:45.549593 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 4 01:10:45.820304 sshd[6094]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:45.837007 systemd[1]: sshd@15-10.0.0.73:22-10.0.0.1:49930.service: Deactivated successfully. Mar 4 01:10:45.839962 systemd[1]: session-16.scope: Deactivated successfully. Mar 4 01:10:45.842416 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Mar 4 01:10:45.852788 systemd[1]: Started sshd@16-10.0.0.73:22-10.0.0.1:49938.service - OpenSSH per-connection server daemon (10.0.0.1:49938). Mar 4 01:10:45.854185 systemd-logind[1444]: Removed session 16. Mar 4 01:10:45.887342 sshd[6106]: Accepted publickey for core from 10.0.0.1 port 49938 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:10:45.889115 sshd[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:45.894851 systemd-logind[1444]: New session 17 of user core. Mar 4 01:10:45.903671 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 4 01:10:46.524238 sshd[6106]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:46.530452 systemd[1]: Started sshd@17-10.0.0.73:22-10.0.0.1:49952.service - OpenSSH per-connection server daemon (10.0.0.1:49952). Mar 4 01:10:46.538698 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Mar 4 01:10:46.544622 systemd[1]: sshd@16-10.0.0.73:22-10.0.0.1:49938.service: Deactivated successfully. Mar 4 01:10:46.559906 systemd[1]: session-17.scope: Deactivated successfully. Mar 4 01:10:46.562268 systemd-logind[1444]: Removed session 17. 
Mar 4 01:10:46.599741 sshd[6153]: Accepted publickey for core from 10.0.0.1 port 49952 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:10:46.601984 sshd[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:46.608430 systemd-logind[1444]: New session 18 of user core. Mar 4 01:10:46.681737 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 4 01:10:47.264845 sshd[6153]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:47.274231 systemd[1]: sshd@17-10.0.0.73:22-10.0.0.1:49952.service: Deactivated successfully. Mar 4 01:10:47.278157 systemd[1]: session-18.scope: Deactivated successfully. Mar 4 01:10:47.280884 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Mar 4 01:10:47.294429 systemd[1]: Started sshd@18-10.0.0.73:22-10.0.0.1:49966.service - OpenSSH per-connection server daemon (10.0.0.1:49966). Mar 4 01:10:47.298285 systemd-logind[1444]: Removed session 18. Mar 4 01:10:47.332522 sshd[6170]: Accepted publickey for core from 10.0.0.1 port 49966 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:10:47.335006 sshd[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:47.341809 systemd-logind[1444]: New session 19 of user core. Mar 4 01:10:47.353760 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 4 01:10:47.484165 sshd[6170]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:47.488073 systemd[1]: sshd@18-10.0.0.73:22-10.0.0.1:49966.service: Deactivated successfully. Mar 4 01:10:47.490982 systemd[1]: session-19.scope: Deactivated successfully. Mar 4 01:10:47.493600 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit. Mar 4 01:10:47.495673 systemd-logind[1444]: Removed session 19. Mar 4 01:10:50.628011 kubelet[2542]: E0304 01:10:50.627894 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:52.502759 systemd[1]: Started sshd@19-10.0.0.73:22-10.0.0.1:46490.service - OpenSSH per-connection server daemon (10.0.0.1:46490). Mar 4 01:10:52.553180 sshd[6193]: Accepted publickey for core from 10.0.0.1 port 46490 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:10:52.554963 sshd[6193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:52.560080 systemd-logind[1444]: New session 20 of user core. Mar 4 01:10:52.567590 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 4 01:10:52.695513 sshd[6193]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:52.700485 systemd[1]: sshd@19-10.0.0.73:22-10.0.0.1:46490.service: Deactivated successfully. Mar 4 01:10:52.703546 systemd[1]: session-20.scope: Deactivated successfully. Mar 4 01:10:52.705055 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit. Mar 4 01:10:52.706678 systemd-logind[1444]: Removed session 20. Mar 4 01:10:53.633460 kubelet[2542]: E0304 01:10:53.633291 2542 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:57.730919 systemd[1]: Started sshd@20-10.0.0.73:22-10.0.0.1:46498.service - OpenSSH per-connection server daemon (10.0.0.1:46498). 
Mar 4 01:10:57.772487 sshd[6207]: Accepted publickey for core from 10.0.0.1 port 46498 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:10:57.774644 sshd[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:57.781428 systemd-logind[1444]: New session 21 of user core. Mar 4 01:10:57.787736 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 4 01:10:57.922355 sshd[6207]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:57.926512 systemd[1]: sshd@20-10.0.0.73:22-10.0.0.1:46498.service: Deactivated successfully. Mar 4 01:10:57.928739 systemd[1]: session-21.scope: Deactivated successfully. Mar 4 01:10:57.929653 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit. Mar 4 01:10:57.931484 systemd-logind[1444]: Removed session 21. Mar 4 01:11:02.935190 systemd[1]: Started sshd@21-10.0.0.73:22-10.0.0.1:37044.service - OpenSSH per-connection server daemon (10.0.0.1:37044). Mar 4 01:11:03.020676 sshd[6221]: Accepted publickey for core from 10.0.0.1 port 37044 ssh2: RSA SHA256:dRPFF0Oglv0K4DyM5i58+GZSmm0aDmrIHoSJ6KMVR7w Mar 4 01:11:03.023054 sshd[6221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:11:03.031507 systemd-logind[1444]: New session 22 of user core. Mar 4 01:11:03.036735 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 4 01:11:03.193603 sshd[6221]: pam_unix(sshd:session): session closed for user core Mar 4 01:11:03.198643 systemd[1]: sshd@21-10.0.0.73:22-10.0.0.1:37044.service: Deactivated successfully. Mar 4 01:11:03.201640 systemd[1]: session-22.scope: Deactivated successfully. Mar 4 01:11:03.202763 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit. Mar 4 01:11:03.204558 systemd-logind[1444]: Removed session 22. Mar 4 01:11:03.545550 kubelet[2542]: I0304 01:11:03.545252 2542 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
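The remainder of the section is the steady-state SSH pattern already established above: systemd starts a per-connection sshd@N.service, pam_unix opens a session for core, and systemd-logind removes it again moments later (sessions 13 through 22). As a purely illustrative aside, and not part of the logged system, journal lines like these can be paired up to measure how long each session stayed open; the sketch below does that for two sessions copied from this section.

package main

import (
	"fmt"
	"regexp"
	"time"
)

// These journal lines are copied from the section above; the parsing is
// only an illustration of how such records could be correlated.
var journal = []string{
	"Mar 4 01:10:40.070041 systemd-logind[1444]: New session 14 of user core.",
	"Mar 4 01:10:40.208287 systemd-logind[1444]: Removed session 14.",
	"Mar 4 01:10:52.560080 systemd-logind[1444]: New session 20 of user core.",
	"Mar 4 01:10:52.706678 systemd-logind[1444]: Removed session 20.",
}

var (
	newRe     = regexp.MustCompile(`^(\S+ +\d+ [\d:.]+) .*New session (\d+)`)
	removedRe = regexp.MustCompile(`^(\S+ +\d+ [\d:.]+) .*Removed session (\d+)`)
)

// parseStamp parses the journal's "Mar 4 01:10:40.070041" prefix; the log
// carries no year, so only durations within a day are meaningful.
func parseStamp(s string) (time.Time, error) {
	return time.Parse("Jan 2 15:04:05.000000", s)
}

func main() {
	opened := map[string]time.Time{}
	for _, line := range journal {
		if m := newRe.FindStringSubmatch(line); m != nil {
			if t, err := parseStamp(m[1]); err == nil {
				opened[m[2]] = t // remember when the session appeared
			}
			continue
		}
		if m := removedRe.FindStringSubmatch(line); m != nil {
			if start, ok := opened[m[2]]; ok {
				if end, err := parseStamp(m[1]); err == nil {
					fmt.Printf("session %s lasted %v\n", m[2], end.Sub(start))
				}
			}
		}
	}
}

Run against the two sessions above, this reports lifetimes of roughly 140 ms each, consistent with the rapid connect/disconnect cadence visible throughout the log.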