Mar 2 13:00:18.463687 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 11:01:37 -00 2026
Mar 2 13:00:18.463718 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 13:00:18.463730 kernel: BIOS-provided physical RAM map:
Mar 2 13:00:18.463736 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 2 13:00:18.463742 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 2 13:00:18.463748 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 2 13:00:18.463755 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 2 13:00:18.463761 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 2 13:00:18.463767 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Mar 2 13:00:18.463773 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Mar 2 13:00:18.463782 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Mar 2 13:00:18.463788 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Mar 2 13:00:18.463816 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Mar 2 13:00:18.463824 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Mar 2 13:00:18.463865 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Mar 2 13:00:18.463874 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 2 13:00:18.463885 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Mar 2 13:00:18.463891 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Mar 2 13:00:18.463898 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 2 13:00:18.463905 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 2 13:00:18.463916 kernel: NX (Execute Disable) protection: active
Mar 2 13:00:18.463969 kernel: APIC: Static calls initialized
Mar 2 13:00:18.463976 kernel: efi: EFI v2.7 by EDK II
Mar 2 13:00:18.463983 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Mar 2 13:00:18.463989 kernel: SMBIOS 2.8 present.
Mar 2 13:00:18.463996 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Mar 2 13:00:18.464002 kernel: Hypervisor detected: KVM
Mar 2 13:00:18.464013 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 2 13:00:18.464019 kernel: kvm-clock: using sched offset of 11062520537 cycles
Mar 2 13:00:18.464027 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 2 13:00:18.464033 kernel: tsc: Detected 2445.424 MHz processor
Mar 2 13:00:18.464040 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 2 13:00:18.464047 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 2 13:00:18.464058 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Mar 2 13:00:18.464065 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 2 13:00:18.464072 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 2 13:00:18.464082 kernel: Using GB pages for direct mapping
Mar 2 13:00:18.464089 kernel: Secure boot disabled
Mar 2 13:00:18.464096 kernel: ACPI: Early table checksum verification disabled
Mar 2 13:00:18.464102 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 2 13:00:18.464114 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 2 13:00:18.464121 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:00:18.464128 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:00:18.464138 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 2 13:00:18.464169 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:00:18.464177 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:00:18.464184 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:00:18.464191 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:00:18.464202 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 2 13:00:18.464209 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 2 13:00:18.464226 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 2 13:00:18.464238 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 2 13:00:18.464250 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 2 13:00:18.464259 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 2 13:00:18.464272 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 2 13:00:18.464282 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 2 13:00:18.464293 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 2 13:00:18.464305 kernel: No NUMA configuration found
Mar 2 13:00:18.464339 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Mar 2 13:00:18.464356 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Mar 2 13:00:18.464371 kernel: Zone ranges:
Mar 2 13:00:18.464383 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 2 13:00:18.464396 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Mar 2 13:00:18.464405 kernel: Normal empty
Mar 2 13:00:18.464416 kernel: Movable zone start for each node
Mar 2 13:00:18.464427 kernel: Early memory node ranges
Mar 2 13:00:18.464438 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 2 13:00:18.464524 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 2 13:00:18.464534 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 2 13:00:18.464547 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Mar 2 13:00:18.464554 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Mar 2 13:00:18.464561 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Mar 2 13:00:18.464568 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Mar 2 13:00:18.464575 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 13:00:18.464582 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 2 13:00:18.464589 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 2 13:00:18.464600 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 13:00:18.464607 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Mar 2 13:00:18.464618 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 2 13:00:18.464625 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Mar 2 13:00:18.464632 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 2 13:00:18.464639 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 2 13:00:18.464646 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 2 13:00:18.464653 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 2 13:00:18.464660 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 2 13:00:18.464667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 2 13:00:18.464674 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 2 13:00:18.464685 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 2 13:00:18.464691 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 2 13:00:18.464703 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 2 13:00:18.464710 kernel: TSC deadline timer available
Mar 2 13:00:18.464717 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 2 13:00:18.464724 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 2 13:00:18.464731 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 2 13:00:18.464738 kernel: kvm-guest: setup PV sched yield
Mar 2 13:00:18.464745 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 2 13:00:18.464752 kernel: Booting paravirtualized kernel on KVM
Mar 2 13:00:18.464763 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 2 13:00:18.464770 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 2 13:00:18.464777 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 2 13:00:18.464784 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 2 13:00:18.464791 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 2 13:00:18.464798 kernel: kvm-guest: PV spinlocks enabled
Mar 2 13:00:18.464805 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 2 13:00:18.464813 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 13:00:18.464843 kernel: random: crng init done
Mar 2 13:00:18.464854 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 2 13:00:18.464860 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 13:00:18.464867 kernel: Fallback order for Node 0: 0
Mar 2 13:00:18.464874 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Mar 2 13:00:18.464881 kernel: Policy zone: DMA32
Mar 2 13:00:18.464887 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 13:00:18.464895 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved)
Mar 2 13:00:18.464901 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 2 13:00:18.464912 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 2 13:00:18.464919 kernel: ftrace: allocated 149 pages with 4 groups
Mar 2 13:00:18.464925 kernel: Dynamic Preempt: voluntary
Mar 2 13:00:18.464932 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 13:00:18.464955 kernel: rcu: RCU event tracing is enabled.
Mar 2 13:00:18.464965 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 2 13:00:18.464972 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 13:00:18.464980 kernel: Rude variant of Tasks RCU enabled.
Mar 2 13:00:18.464987 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 13:00:18.464994 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 13:00:18.465001 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 2 13:00:18.465016 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 2 13:00:18.465024 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 2 13:00:18.465030 kernel: Console: colour dummy device 80x25
Mar 2 13:00:18.465037 kernel: printk: console [ttyS0] enabled
Mar 2 13:00:18.465044 kernel: ACPI: Core revision 20230628
Mar 2 13:00:18.465051 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 2 13:00:18.465063 kernel: APIC: Switch to symmetric I/O mode setup
Mar 2 13:00:18.465070 kernel: x2apic enabled
Mar 2 13:00:18.465077 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 2 13:00:18.465084 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 2 13:00:18.465091 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 2 13:00:18.465098 kernel: kvm-guest: setup PV IPIs
Mar 2 13:00:18.465105 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 2 13:00:18.465112 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 2 13:00:18.465119 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Mar 2 13:00:18.465134 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 2 13:00:18.465215 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 2 13:00:18.465226 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 2 13:00:18.465234 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 2 13:00:18.465241 kernel: Spectre V2 : Mitigation: Retpolines
Mar 2 13:00:18.465249 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 2 13:00:18.465256 kernel: Speculative Store Bypass: Vulnerable
Mar 2 13:00:18.465263 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 2 13:00:18.465276 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 2 13:00:18.465283 kernel: active return thunk: srso_alias_return_thunk
Mar 2 13:00:18.465290 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 2 13:00:18.465320 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 2 13:00:18.465328 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 2 13:00:18.465335 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 2 13:00:18.465343 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 2 13:00:18.465350 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 2 13:00:18.465371 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 2 13:00:18.465382 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 2 13:00:18.465390 kernel: Freeing SMP alternatives memory: 32K
Mar 2 13:00:18.465396 kernel: pid_max: default: 32768 minimum: 301
Mar 2 13:00:18.465403 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 2 13:00:18.465410 kernel: landlock: Up and running.
Mar 2 13:00:18.465417 kernel: SELinux: Initializing.
Mar 2 13:00:18.465429 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:00:18.465442 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:00:18.465558 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 2 13:00:18.465579 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:00:18.465593 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:00:18.465604 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:00:18.465616 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 2 13:00:18.465626 kernel: signal: max sigframe size: 1776
Mar 2 13:00:18.465643 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 13:00:18.465657 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 13:00:18.465717 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 2 13:00:18.465731 kernel: smp: Bringing up secondary CPUs ...
Mar 2 13:00:18.465750 kernel: smpboot: x86: Booting SMP configuration:
Mar 2 13:00:18.465810 kernel: .... node #0, CPUs: #1 #2 #3
Mar 2 13:00:18.465823 kernel: smp: Brought up 1 node, 4 CPUs
Mar 2 13:00:18.465835 kernel: smpboot: Max logical packages: 1
Mar 2 13:00:18.465847 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Mar 2 13:00:18.465905 kernel: devtmpfs: initialized
Mar 2 13:00:18.465919 kernel: x86/mm: Memory block size: 128MB
Mar 2 13:00:18.465933 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 2 13:00:18.465944 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 2 13:00:18.466013 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Mar 2 13:00:18.466032 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 2 13:00:18.466043 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 2 13:00:18.466097 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 13:00:18.466111 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 2 13:00:18.466124 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 13:00:18.466132 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 13:00:18.466171 kernel: audit: initializing netlink subsys (disabled)
Mar 2 13:00:18.466187 kernel: audit: type=2000 audit(1772456415.374:1): state=initialized audit_enabled=0 res=1
Mar 2 13:00:18.466194 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 13:00:18.466201 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 2 13:00:18.466208 kernel: cpuidle: using governor menu
Mar 2 13:00:18.466215 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 13:00:18.466221 kernel: dca service started, version 1.12.1
Mar 2 13:00:18.466229 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 2 13:00:18.466236 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 2 13:00:18.466243 kernel: PCI: Using configuration type 1 for base access
Mar 2 13:00:18.466253 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 2 13:00:18.466260 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 13:00:18.466267 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 13:00:18.466280 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 13:00:18.466287 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 13:00:18.466294 kernel: ACPI: Added _OSI(Module Device)
Mar 2 13:00:18.466301 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 13:00:18.466308 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 13:00:18.466315 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 13:00:18.466325 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 2 13:00:18.466332 kernel: ACPI: Interpreter enabled
Mar 2 13:00:18.466339 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 2 13:00:18.466346 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 2 13:00:18.466353 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 2 13:00:18.466360 kernel: PCI: Using E820 reservations for host bridge windows
Mar 2 13:00:18.466367 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 2 13:00:18.466374 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 2 13:00:18.466826 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 2 13:00:18.467111 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 2 13:00:18.467280 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 2 13:00:18.467292 kernel: PCI host bridge to bus 0000:00
Mar 2 13:00:18.467663 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 2 13:00:18.467823 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 2 13:00:18.467998 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 2 13:00:18.468156 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 2 13:00:18.468345 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 2 13:00:18.468605 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Mar 2 13:00:18.468757 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 2 13:00:18.469182 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 2 13:00:18.469558 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 2 13:00:18.469748 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 2 13:00:18.469915 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 2 13:00:18.470070 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 2 13:00:18.470339 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 2 13:00:18.470613 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 2 13:00:18.470939 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 2 13:00:18.471213 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 2 13:00:18.471378 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 2 13:00:18.471649 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Mar 2 13:00:18.471864 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 2 13:00:18.472024 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 2 13:00:18.472179 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 2 13:00:18.472556 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Mar 2 13:00:18.472746 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 2 13:00:18.472949 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 2 13:00:18.473142 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 2 13:00:18.473626 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Mar 2 13:00:18.473847 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 2 13:00:18.474165 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 2 13:00:18.474409 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 2 13:00:18.474779 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 2 13:00:18.474959 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 2 13:00:18.475114 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 2 13:00:18.475359 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 2 13:00:18.475677 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 2 13:00:18.475697 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 2 13:00:18.475711 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 2 13:00:18.475724 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 2 13:00:18.475735 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 2 13:00:18.475756 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 2 13:00:18.475768 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 2 13:00:18.475787 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 2 13:00:18.475800 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 2 13:00:18.475810 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 2 13:00:18.475821 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 2 13:00:18.475834 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 2 13:00:18.475847 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 2 13:00:18.475859 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 2 13:00:18.475877 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 2 13:00:18.475888 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 2 13:00:18.475899 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 2 13:00:18.475911 kernel: iommu: Default domain type: Translated
Mar 2 13:00:18.475927 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 2 13:00:18.475934 kernel: efivars: Registered efivars operations
Mar 2 13:00:18.475943 kernel: PCI: Using ACPI for IRQ routing
Mar 2 13:00:18.475956 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 2 13:00:18.475969 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 2 13:00:18.475985 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Mar 2 13:00:18.475997 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Mar 2 13:00:18.476010 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Mar 2 13:00:18.476313 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 2 13:00:18.476543 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 2 13:00:18.476704 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 2 13:00:18.476715 kernel: vgaarb: loaded
Mar 2 13:00:18.476722 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 2 13:00:18.476741 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 2 13:00:18.476748 kernel: clocksource: Switched to clocksource kvm-clock
Mar 2 13:00:18.476755 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 13:00:18.476763 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 13:00:18.476771 kernel: pnp: PnP ACPI init
Mar 2 13:00:18.477013 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 2 13:00:18.477033 kernel: pnp: PnP ACPI: found 6 devices
Mar 2 13:00:18.477041 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 2 13:00:18.477048 kernel: NET: Registered PF_INET protocol family
Mar 2 13:00:18.477060 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 2 13:00:18.477067 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 2 13:00:18.477074 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 13:00:18.477081 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 13:00:18.477088 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 2 13:00:18.477095 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 2 13:00:18.477102 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:00:18.477109 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:00:18.477120 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 13:00:18.477127 kernel: NET: Registered PF_XDP protocol family
Mar 2 13:00:18.477285 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 2 13:00:18.477442 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 2 13:00:18.477736 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 2 13:00:18.477882 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 2 13:00:18.478036 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 2 13:00:18.478200 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 2 13:00:18.478362 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 2 13:00:18.478629 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Mar 2 13:00:18.478684 kernel: PCI: CLS 0 bytes, default 64
Mar 2 13:00:18.478691 kernel: Initialise system trusted keyrings
Mar 2 13:00:18.478699 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 2 13:00:18.478710 kernel: Key type asymmetric registered
Mar 2 13:00:18.478717 kernel: Asymmetric key parser 'x509' registered
Mar 2 13:00:18.478725 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 2 13:00:18.478763 kernel: io scheduler mq-deadline registered
Mar 2 13:00:18.478777 kernel: io scheduler kyber registered
Mar 2 13:00:18.478789 kernel: io scheduler bfq registered
Mar 2 13:00:18.478802 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 2 13:00:18.478850 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 2 13:00:18.478860 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 2 13:00:18.478868 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 2 13:00:18.478875 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 2 13:00:18.478882 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 2 13:00:18.478889 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 2 13:00:18.478902 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 2 13:00:18.478909 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 2 13:00:18.479108 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 2 13:00:18.479123 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 2 13:00:18.479269 kernel: rtc_cmos 00:04: registered as rtc0
Mar 2 13:00:18.479416 kernel: rtc_cmos 00:04: setting system clock to 2026-03-02T13:00:17 UTC (1772456417)
Mar 2 13:00:18.479805 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 2 13:00:18.479819 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 2 13:00:18.479833 kernel: efifb: probing for efifb
Mar 2 13:00:18.479840 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Mar 2 13:00:18.479848 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Mar 2 13:00:18.479855 kernel: efifb: scrolling: redraw
Mar 2 13:00:18.479862 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Mar 2 13:00:18.479869 kernel: Console: switching to colour frame buffer device 100x37
Mar 2 13:00:18.479876 kernel: fb0: EFI VGA frame buffer device
Mar 2 13:00:18.479884 kernel: pstore: Using crash dump compression: deflate
Mar 2 13:00:18.479891 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 2 13:00:18.479907 kernel: NET: Registered PF_INET6 protocol family
Mar 2 13:00:18.479915 kernel: Segment Routing with IPv6
Mar 2 13:00:18.479922 kernel: In-situ OAM (IOAM) with IPv6
Mar 2 13:00:18.479929 kernel: NET: Registered PF_PACKET protocol family
Mar 2 13:00:18.479937 kernel: Key type dns_resolver registered
Mar 2 13:00:18.479944 kernel: IPI shorthand broadcast: enabled
Mar 2 13:00:18.479975 kernel: sched_clock: Marking stable (1205063587, 375968177)->(1973712428, -392680664)
Mar 2 13:00:18.479986 kernel: registered taskstats version 1
Mar 2 13:00:18.479993 kernel: Loading compiled-in X.509 certificates
Mar 2 13:00:18.480004 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: adc4961784537911a77ff0c4d6bd9b9639a51d45'
Mar 2 13:00:18.480011 kernel: Key type .fscrypt registered
Mar 2 13:00:18.480019 kernel: Key type fscrypt-provisioning registered
Mar 2 13:00:18.480026 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 2 13:00:18.480034 kernel: ima: Allocated hash algorithm: sha1
Mar 2 13:00:18.480041 kernel: ima: No architecture policies found
Mar 2 13:00:18.480048 kernel: clk: Disabling unused clocks
Mar 2 13:00:18.480061 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 2 13:00:18.480073 kernel: Write protecting the kernel read-only data: 36864k
Mar 2 13:00:18.480092 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 2 13:00:18.480152 kernel: Run /init as init process
Mar 2 13:00:18.480169 kernel: with arguments:
Mar 2 13:00:18.480177 kernel: /init
Mar 2 13:00:18.480185 kernel: with environment:
Mar 2 13:00:18.480192 kernel: HOME=/
Mar 2 13:00:18.480238 kernel: TERM=linux
Mar 2 13:00:18.480249 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 13:00:18.480268 systemd[1]: Detected virtualization kvm.
Mar 2 13:00:18.480276 systemd[1]: Detected architecture x86-64.
Mar 2 13:00:18.480284 systemd[1]: Running in initrd.
Mar 2 13:00:18.480292 systemd[1]: No hostname configured, using default hostname.
Mar 2 13:00:18.480299 systemd[1]: Hostname set to .
Mar 2 13:00:18.480308 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 13:00:18.480316 systemd[1]: Queued start job for default target initrd.target.
Mar 2 13:00:18.480327 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:00:18.480335 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:00:18.480343 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 2 13:00:18.480351 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:00:18.480359 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 2 13:00:18.480373 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 2 13:00:18.480383 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 2 13:00:18.480391 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 2 13:00:18.480403 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:00:18.480411 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:00:18.480419 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:00:18.480427 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:00:18.480438 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:00:18.480500 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:00:18.480520 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:00:18.480533 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:00:18.480541 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 2 13:00:18.480549 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 2 13:00:18.480557 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:00:18.480565 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:00:18.480573 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:00:18.480586 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:00:18.480594 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 2 13:00:18.480602 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:00:18.480610 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 2 13:00:18.480618 systemd[1]: Starting systemd-fsck-usr.service...
Mar 2 13:00:18.480631 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:00:18.480639 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:00:18.480647 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:00:18.480658 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 2 13:00:18.480695 systemd-journald[194]: Collecting audit messages is disabled.
Mar 2 13:00:18.480715 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:00:18.480724 systemd[1]: Finished systemd-fsck-usr.service.
Mar 2 13:00:18.480736 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:00:18.480745 systemd-journald[194]: Journal started
Mar 2 13:00:18.480762 systemd-journald[194]: Runtime Journal (/run/log/journal/ba0497fea0134c0086a596378bd26170) is 6.0M, max 48.3M, 42.2M free.
Mar 2 13:00:18.527573 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:00:18.523710 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:00:18.526648 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:00:18.530790 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:00:18.549398 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:00:18.553202 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:00:18.566016 systemd-modules-load[195]: Inserted module 'overlay'
Mar 2 13:00:18.579627 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:00:18.584387 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:00:18.621196 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:00:18.632134 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 2 13:00:18.655622 dracut-cmdline[223]: dracut-dracut-053
Mar 2 13:00:18.659755 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 13:00:20.788773 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 2 13:00:20.815872 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 2 13:00:20.819360 kernel: Bridge firewalling registered
Mar 2 13:00:20.821369 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:00:20.830522 kernel: SCSI subsystem initialized
Mar 2 13:00:20.835820 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:00:20.846763 kernel: Loading iSCSI transport class v2.0-870.
Mar 2 13:00:20.855285 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:00:20.868566 kernel: iscsi: registered transport (tcp)
Mar 2 13:00:20.878863 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:00:20.916313 kernel: iscsi: registered transport (qla4xxx)
Mar 2 13:00:20.916399 kernel: QLogic iSCSI HBA Driver
Mar 2 13:00:21.059331 kernel: hrtimer: interrupt took 11226499 ns
Mar 2 13:00:21.112344 systemd-resolved[304]: Positive Trust Anchors:
Mar 2 13:00:21.112406 systemd-resolved[304]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:00:21.112517 systemd-resolved[304]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:00:21.118667 systemd-resolved[304]: Defaulting to hostname 'linux'.
Mar 2 13:00:21.127028 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:00:21.165898 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:00:21.412170 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:00:21.448301 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 2 13:00:21.515329 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 2 13:00:21.515594 kernel: device-mapper: uevent: version 1.0.3
Mar 2 13:00:21.519538 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 2 13:00:21.578539 kernel: raid6: avx2x4 gen() 21152 MB/s
Mar 2 13:00:21.610585 kernel: raid6: avx2x2 gen() 16929 MB/s
Mar 2 13:00:21.631030 kernel: raid6: avx2x1 gen() 11633 MB/s
Mar 2 13:00:21.631107 kernel: raid6: using algorithm avx2x4 gen() 21152 MB/s
Mar 2 13:00:21.652301 kernel: raid6: .... xor() 3949 MB/s, rmw enabled
Mar 2 13:00:21.652619 kernel: raid6: using avx2x2 recovery algorithm
Mar 2 13:00:21.715604 kernel: xor: automatically using best checksumming function avx
Mar 2 13:00:22.017432 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 2 13:00:22.047591 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:00:22.065723 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:00:22.090192 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Mar 2 13:00:22.114017 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:00:22.125096 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 2 13:00:22.147930 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Mar 2 13:00:22.220561 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:00:22.238733 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 13:00:22.368751 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:00:22.381933 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 2 13:00:22.422299 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:00:22.428566 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:00:22.431897 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:00:22.437963 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 13:00:22.452809 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 2 13:00:22.468991 kernel: cryptd: max_cpu_qlen set to 1000
Mar 2 13:00:22.512112 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 2 13:00:22.524154 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 2 13:00:22.538824 kernel: libata version 3.00 loaded.
Mar 2 13:00:22.540596 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:00:22.583875 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 2 13:00:22.584198 kernel: ahci 0000:00:1f.2: version 3.0
Mar 2 13:00:22.595069 kernel: AES CTR mode by8 optimization enabled
Mar 2 13:00:22.595091 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 2 13:00:22.601888 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 2 13:00:22.601927 kernel: GPT:9289727 != 19775487
Mar 2 13:00:22.601938 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 2 13:00:22.607992 kernel: GPT:9289727 != 19775487
Mar 2 13:00:22.612617 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 2 13:00:22.612707 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:00:22.620322 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 2 13:00:22.620858 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 2 13:00:22.630903 kernel: scsi host0: ahci
Mar 2 13:00:22.631554 kernel: scsi host1: ahci
Mar 2 13:00:22.638911 kernel: scsi host2: ahci
Mar 2 13:00:22.643348 kernel: scsi host3: ahci
Mar 2 13:00:22.652058 kernel: scsi host4: ahci
Mar 2 13:00:22.653113 kernel: scsi host5: ahci
Mar 2 13:00:22.653385 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31
Mar 2 13:00:22.660753 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31
Mar 2 13:00:22.660847 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31
Mar 2 13:00:22.664520 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31
Mar 2 13:00:22.669786 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:00:22.713780 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31
Mar 2 13:00:22.717550 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31
Mar 2 13:00:22.670010 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:00:22.676841 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:00:22.734579 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (478)
Mar 2 13:00:22.700226 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:00:22.741429 kernel: BTRFS: device fsid a0930b2b-aeed-42a5-bf2f-ec141dfc71d3 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (460)
Mar 2 13:00:22.700694 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:00:22.700797 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:00:22.747720 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:00:22.794895 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 2 13:00:22.808289 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 2 13:00:22.811856 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:00:22.827394 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 13:00:22.836955 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 2 13:00:22.841908 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 2 13:00:22.902299 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 2 13:00:22.922717 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:00:22.964938 disk-uuid[553]: Primary Header is updated.
Mar 2 13:00:22.964938 disk-uuid[553]: Secondary Entries is updated.
Mar 2 13:00:22.964938 disk-uuid[553]: Secondary Header is updated.
Mar 2 13:00:22.983926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:00:23.001653 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 2 13:00:23.005570 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 2 13:00:23.005642 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 2 13:00:23.012547 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 2 13:00:23.012598 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 2 13:00:23.029807 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 2 13:00:23.030442 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 2 13:00:23.027047 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:00:23.060654 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:00:23.060688 kernel: ata3.00: applying bridge limits
Mar 2 13:00:23.060737 kernel: ata3.00: configured for UDMA/100
Mar 2 13:00:23.060754 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 2 13:00:23.154561 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 2 13:00:23.154975 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 2 13:00:23.176766 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 2 13:00:24.057566 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:00:24.058573 disk-uuid[557]: The operation has completed successfully.
Mar 2 13:00:24.114894 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 2 13:00:24.115117 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 2 13:00:24.136777 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 2 13:00:24.145648 sh[592]: Success
Mar 2 13:00:24.165563 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 2 13:00:24.228088 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 2 13:00:24.245821 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 2 13:00:24.250117 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 2 13:00:24.278727 kernel: BTRFS info (device dm-0): first mount of filesystem a0930b2b-aeed-42a5-bf2f-ec141dfc71d3
Mar 2 13:00:24.278784 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:00:24.278817 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 2 13:00:24.285585 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 2 13:00:24.285625 kernel: BTRFS info (device dm-0): using free space tree
Mar 2 13:00:24.297561 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 2 13:00:24.303039 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 2 13:00:24.321768 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 2 13:00:24.327912 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 2 13:00:24.345975 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:00:24.346034 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:00:24.346055 kernel: BTRFS info (device vda6): using free space tree
Mar 2 13:00:24.353559 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 13:00:24.367567 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 2 13:00:24.372999 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:00:24.382139 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 2 13:00:24.407773 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 2 13:00:24.541763 ignition[679]: Ignition 2.19.0
Mar 2 13:00:24.542365 ignition[679]: Stage: fetch-offline
Mar 2 13:00:24.542490 ignition[679]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:00:24.542525 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:00:24.542658 ignition[679]: parsed url from cmdline: ""
Mar 2 13:00:24.542663 ignition[679]: no config URL provided
Mar 2 13:00:24.542670 ignition[679]: reading system config file "/usr/lib/ignition/user.ign"
Mar 2 13:00:24.542681 ignition[679]: no config at "/usr/lib/ignition/user.ign"
Mar 2 13:00:24.542716 ignition[679]: op(1): [started] loading QEMU firmware config module
Mar 2 13:00:24.562690 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 13:00:24.542722 ignition[679]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 2 13:00:24.575958 ignition[679]: op(1): [finished] loading QEMU firmware config module
Mar 2 13:00:24.583818 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:00:24.620245 systemd-networkd[780]: lo: Link UP
Mar 2 13:00:24.620268 systemd-networkd[780]: lo: Gained carrier
Mar 2 13:00:24.622782 systemd-networkd[780]: Enumeration completed
Mar 2 13:00:24.623597 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:00:24.624800 systemd[1]: Reached target network.target - Network.
Mar 2 13:00:24.625638 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:00:24.625642 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 13:00:24.627422 systemd-networkd[780]: eth0: Link UP
Mar 2 13:00:24.627427 systemd-networkd[780]: eth0: Gained carrier
Mar 2 13:00:24.627435 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:00:24.685613 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 2 13:00:24.793540 ignition[679]: parsing config with SHA512: 46f571eefdb2e921ca651511a4b1528adf475a817f52a147d41a181b31ea7c13822f2c9b03910c2f117a10874ca7d2f08aecd51e855fb388dc39db0e05b69fcf
Mar 2 13:00:24.804694 unknown[679]: fetched base config from "system"
Mar 2 13:00:24.804753 unknown[679]: fetched user config from "qemu"
Mar 2 13:00:24.805857 ignition[679]: fetch-offline: fetch-offline passed
Mar 2 13:00:24.808684 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:00:24.806094 ignition[679]: Ignition finished successfully
Mar 2 13:00:24.812620 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 2 13:00:24.827834 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 2 13:00:24.856995 ignition[784]: Ignition 2.19.0
Mar 2 13:00:24.857020 ignition[784]: Stage: kargs
Mar 2 13:00:24.857257 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:00:24.857272 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:00:24.863799 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 2 13:00:24.858098 ignition[784]: kargs: kargs passed
Mar 2 13:00:24.858149 ignition[784]: Ignition finished successfully
Mar 2 13:00:24.875746 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 2 13:00:25.211108 ignition[792]: Ignition 2.19.0
Mar 2 13:00:25.211144 ignition[792]: Stage: disks
Mar 2 13:00:25.211418 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:00:25.215223 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 2 13:00:25.211440 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:00:25.220685 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 2 13:00:25.212744 ignition[792]: disks: disks passed
Mar 2 13:00:25.226091 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 2 13:00:25.212806 ignition[792]: Ignition finished successfully
Mar 2 13:00:25.229321 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:00:25.232171 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:00:25.237873 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:00:25.254812 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 2 13:00:25.275406 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 2 13:00:25.281864 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 2 13:00:25.304615 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 2 13:00:25.446566 kernel: EXT4-fs (vda9): mounted filesystem 84e86976-7918-44d3-a6f5-d0f90ce6c152 r/w with ordered data mode. Quota mode: none.
Mar 2 13:00:25.446736 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 2 13:00:25.451378 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 2 13:00:25.472939 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:00:25.489219 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 2 13:00:25.510532 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Mar 2 13:00:25.511810 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 2 13:00:25.532409 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:00:25.532441 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:00:25.532494 kernel: BTRFS info (device vda6): using free space tree
Mar 2 13:00:25.532543 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 13:00:25.511949 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 2 13:00:25.511987 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:00:25.535083 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:00:25.540413 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 2 13:00:25.570859 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 2 13:00:25.627543 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Mar 2 13:00:25.636135 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Mar 2 13:00:25.642816 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Mar 2 13:00:25.650353 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 2 13:00:25.861722 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 2 13:00:25.896584 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 2 13:00:25.899636 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 2 13:00:25.916717 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 2 13:00:25.922303 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:00:25.950539 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 2 13:00:25.998076 ignition[925]: INFO : Ignition 2.19.0
Mar 2 13:00:25.998076 ignition[925]: INFO : Stage: mount
Mar 2 13:00:26.003440 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:00:26.003440 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:00:26.003440 ignition[925]: INFO : mount: mount passed
Mar 2 13:00:26.003440 ignition[925]: INFO : Ignition finished successfully
Mar 2 13:00:26.019618 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 2 13:00:26.040846 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 2 13:00:26.060314 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:00:26.113688 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938)
Mar 2 13:00:26.122091 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:00:26.122250 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:00:26.122293 kernel: BTRFS info (device vda6): using free space tree
Mar 2 13:00:26.132701 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 13:00:26.139700 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:00:26.225107 ignition[955]: INFO : Ignition 2.19.0
Mar 2 13:00:26.225107 ignition[955]: INFO : Stage: files
Mar 2 13:00:26.229550 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:00:26.229550 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:00:26.247352 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Mar 2 13:00:26.252630 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 2 13:00:26.252630 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 2 13:00:26.269240 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 2 13:00:26.274925 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 2 13:00:26.280834 unknown[955]: wrote ssh authorized keys file for user: core
Mar 2 13:00:26.287779 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 2 13:00:26.287779 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 13:00:26.287779 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 2 13:00:26.387161 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 2 13:00:26.560676 systemd-networkd[780]: eth0: Gained IPv6LL
Mar 2 13:00:26.731195 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 13:00:26.731195 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 2 13:00:26.740697 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 2 13:00:26.740697 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:00:26.740697 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:00:26.740697 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:00:26.740697 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:00:26.740697 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:00:26.740697 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:00:26.740697 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:00:26.740697 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:00:26.740697 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 13:00:26.740697 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
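Note: ops (1)-(3) above are driven by the "passwd" and "storage" sections of the Ignition config fetched earlier from the qemu platform. The actual config is not reproduced in the log; a minimal Ignition (spec 3.x) sketch that would produce these operations looks roughly like the following, with the ssh key as a placeholder:

    {
      "ignition": { "version": "3.3.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"] }
        ]
      },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz" }
          }
        ]
      }
    }

Paths in the config name locations on the final root; Ignition runs from the initramfs, which is why every write in the log is prefixed with /sysroot.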
Mar 2 13:00:26.740697 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 13:00:26.740697 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 2 13:00:27.076612 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 2 13:00:28.906376 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 13:00:28.906376 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 2 13:00:28.915355 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:00:28.915355 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:00:28.915355 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 2 13:00:28.915355 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 2 13:00:28.915355 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 13:00:28.915355 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 13:00:28.915355 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 2 13:00:28.915355 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 2 13:00:28.994955 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 13:00:29.006371 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 13:00:29.010646 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 2 13:00:29.010646 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 2 13:00:29.010646 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 2 13:00:29.010646 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:00:29.010646 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:00:29.010646 ignition[955]: INFO : files: files passed
Mar 2 13:00:29.010646 ignition[955]: INFO : Ignition finished successfully
Mar 2 13:00:29.044264 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 2 13:00:29.060696 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 2 13:00:29.067679 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 2 13:00:29.073683 systemd[1]: ignition-quench.service: Deactivated successfully.
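Note: ops (9)-(11) above show the remaining Ignition config sections at work: "storage.links" creates the /etc/extensions/kubernetes.raw symlink that will activate the downloaded sysext image, and "systemd.units" writes unit files and records enable/disable choices. A hedged sketch of those sections (unit contents elided; not the actual config used here):

    {
      "storage": {
        "links": [
          {
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw",
            "hard": false
          }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "(elided)" },
          { "name": "coreos-metadata.service", "enabled": false, "contents": "(elided)" }
        ]
      }
    }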
Mar 2 13:00:29.073900 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 2 13:00:29.136774 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 2 13:00:29.149618 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:00:29.149618 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:00:29.162202 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:00:29.181021 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:00:29.188095 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 2 13:00:29.206890 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 2 13:00:29.249997 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 2 13:00:29.250213 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 2 13:00:29.256814 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 2 13:00:29.263054 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 2 13:00:29.268954 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 2 13:00:29.284926 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 2 13:00:29.302863 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:00:29.329759 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 2 13:00:29.344115 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:00:29.345671 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:00:29.346282 systemd[1]: Stopped target timers.target - Timer Units.
Mar 2 13:00:29.485813 ignition[1010]: INFO : Ignition 2.19.0
Mar 2 13:00:29.485813 ignition[1010]: INFO : Stage: umount
Mar 2 13:00:29.485813 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:00:29.485813 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:00:29.485813 ignition[1010]: INFO : umount: umount passed
Mar 2 13:00:29.485813 ignition[1010]: INFO : Ignition finished successfully
Mar 2 13:00:29.347626 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 2 13:00:29.347837 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:00:29.349436 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 2 13:00:29.349932 systemd[1]: Stopped target basic.target - Basic System.
Mar 2 13:00:29.350613 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 2 13:00:29.351584 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:00:29.352349 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 2 13:00:29.353218 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 2 13:00:29.354723 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:00:29.355335 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 2 13:00:29.356665 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 2 13:00:29.357145 systemd[1]: Stopped target swap.target - Swaps.
Mar 2 13:00:29.357577 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 2 13:00:29.357803 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:00:29.358404 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:00:29.359345 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:00:29.361094 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 2 13:00:29.361323 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:00:29.361589 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 2 13:00:29.361721 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:00:29.362336 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 2 13:00:29.362492 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:00:29.362885 systemd[1]: Stopped target paths.target - Path Units.
Mar 2 13:00:29.363183 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 2 13:00:29.366637 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:00:29.366932 systemd[1]: Stopped target slices.target - Slice Units.
Mar 2 13:00:29.367319 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 2 13:00:29.368248 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 2 13:00:29.368362 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:00:29.369592 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 2 13:00:29.369702 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:00:29.370098 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 2 13:00:29.370220 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:00:29.370577 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 2 13:00:29.370700 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 2 13:00:29.372004 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 2 13:00:29.373220 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 2 13:00:29.374222 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 2 13:00:29.374328 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:00:29.375141 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 2 13:00:29.375238 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:00:29.379763 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 2 13:00:29.379877 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 2 13:00:29.411367 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 2 13:00:29.423013 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 2 13:00:29.423197 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 2 13:00:29.427152 systemd[1]: Stopped target network.target - Network.
Mar 2 13:00:29.428289 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 2 13:00:29.428392 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 2 13:00:29.429374 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 2 13:00:29.429433 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 2 13:00:29.430732 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 2 13:00:29.430833 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 2 13:00:29.431328 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 2 13:00:29.431378 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 2 13:00:29.433316 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 2 13:00:29.433640 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 2 13:00:29.481781 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 2 13:00:29.483115 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 2 13:00:29.483819 systemd-networkd[780]: eth0: DHCPv6 lease lost
Mar 2 13:00:29.493848 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 2 13:00:29.494113 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 2 13:00:29.498872 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 2 13:00:29.499122 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 2 13:00:29.508623 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 2 13:00:29.508738 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:00:29.512565 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 2 13:00:29.512648 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 2 13:00:29.541390 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 2 13:00:29.636097 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 2 13:00:29.638682 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 13:00:29.646015 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 13:00:29.646191 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:00:29.653393 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 2 13:00:29.653510 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:00:29.660609 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 2 13:00:29.660730 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:00:29.970365 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 2 13:00:29.669590 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:00:29.710621 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 2 13:00:29.710869 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 2 13:00:29.738744 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 2 13:00:29.740656 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:00:29.745899 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 2 13:00:29.745958 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:00:29.751968 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 2 13:00:29.752186 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:00:29.757081 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 2 13:00:29.757167 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:00:29.766416 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 2 13:00:29.766814 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:00:29.775067 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:00:29.775154 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:00:29.838057 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 2 13:00:29.841346 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 2 13:00:29.841491 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:00:29.847025 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 2 13:00:29.847105 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:00:29.854835 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 2 13:00:29.854905 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:00:29.860167 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:00:29.860242 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:00:29.867870 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 2 13:00:29.868009 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 2 13:00:29.873975 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 2 13:00:29.912788 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 2 13:00:29.921636 systemd[1]: Switching root.
Mar 2 13:00:30.131041 systemd-journald[194]: Journal stopped
Mar 2 13:00:32.122693 kernel: SELinux: policy capability network_peer_controls=1
Mar 2 13:00:32.126559 kernel: SELinux: policy capability open_perms=1
Mar 2 13:00:32.126598 kernel: SELinux: policy capability extended_socket_class=1
Mar 2 13:00:32.126619 kernel: SELinux: policy capability always_check_network=0
Mar 2 13:00:32.126638 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 2 13:00:32.126658 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 2 13:00:32.126685 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 2 13:00:32.126750 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 2 13:00:32.126791 kernel: audit: type=1403 audit(1772456430.218:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 2 13:00:32.126813 systemd[1]: Successfully loaded SELinux policy in 68.292ms.
Mar 2 13:00:32.126861 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.654ms.
Mar 2 13:00:32.126884 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 13:00:32.126936 systemd[1]: Detected virtualization kvm.
Mar 2 13:00:32.126957 systemd[1]: Detected architecture x86-64.
Mar 2 13:00:32.126978 systemd[1]: Detected first boot.
Mar 2 13:00:32.127031 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 13:00:32.127054 zram_generator::config[1052]: No configuration found.
Mar 2 13:00:32.127076 systemd[1]: Populated /etc with preset unit settings.
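Note: "Detected first boot" plus "Populated /etc with preset unit settings" is where the enablement choices recorded during the Ignition files stage take effect: on first boot systemd applies preset files to decide which units are enabled. Ignition records its decisions in a preset file (on Flatcar typically /etc/systemd/system-preset/20-ignition.preset; that path is from memory, so treat it as an assumption), which for this boot would read:

    enable prepare-helm.service
    disable coreos-metadata.service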
Mar 2 13:00:32.127097 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 2 13:00:32.127117 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 2 13:00:32.127138 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 2 13:00:32.127159 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 2 13:00:32.127180 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 2 13:00:32.127229 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 2 13:00:32.127250 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 2 13:00:32.127271 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 2 13:00:32.127292 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 2 13:00:32.127321 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 2 13:00:32.127342 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 2 13:00:32.127362 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:00:32.127384 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:00:32.127404 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 2 13:00:32.127499 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 2 13:00:32.127575 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 2 13:00:32.127598 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:00:32.127618 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 2 13:00:32.127638 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:00:32.127658 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 2 13:00:32.127678 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 2 13:00:32.127698 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 2 13:00:32.127718 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 2 13:00:32.127771 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:00:32.127793 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 13:00:32.127813 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:00:32.127833 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:00:32.127853 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 2 13:00:32.127874 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 2 13:00:32.127904 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:00:32.127926 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:00:32.127977 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:00:32.127999 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 2 13:00:32.128018 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 2 13:00:32.128039 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 2 13:00:32.128059 systemd[1]: Mounting media.mount - External Media Directory...
Mar 2 13:00:32.128101 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:00:32.128123 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 2 13:00:32.128143 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 2 13:00:32.128163 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 2 13:00:32.128213 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 2 13:00:32.128234 systemd[1]: Reached target machines.target - Containers.
Mar 2 13:00:32.128255 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 2 13:00:32.128275 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:00:32.128294 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:00:32.128314 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 2 13:00:32.128333 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:00:32.128353 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:00:32.128403 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:00:32.128426 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 2 13:00:32.128492 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:00:32.128516 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 2 13:00:32.128565 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 2 13:00:32.128587 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 2 13:00:32.128605 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 2 13:00:32.128625 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 2 13:00:32.128643 kernel: fuse: init (API version 7.39)
Mar 2 13:00:32.128733 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:00:32.128754 kernel: ACPI: bus type drm_connector registered
Mar 2 13:00:32.128775 kernel: loop: module loaded
Mar 2 13:00:32.128829 systemd-journald[1137]: Collecting audit messages is disabled.
Mar 2 13:00:32.128868 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:00:32.128890 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 2 13:00:32.128944 systemd-journald[1137]: Journal started
Mar 2 13:00:32.128979 systemd-journald[1137]: Runtime Journal (/run/log/journal/ba0497fea0134c0086a596378bd26170) is 6.0M, max 48.3M, 42.2M free.
Mar 2 13:00:31.289090 systemd[1]: Queued start job for default target multi-user.target.
Mar 2 13:00:31.314023 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 2 13:00:31.315119 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 2 13:00:31.315802 systemd[1]: systemd-journald.service: Consumed 1.992s CPU time.
Mar 2 13:00:32.135086 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 2 13:00:32.147023 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 13:00:32.152976 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 2 13:00:32.153022 systemd[1]: Stopped verity-setup.service.
Mar 2 13:00:32.162574 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:00:32.170183 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:00:32.174625 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 2 13:00:32.178654 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 2 13:00:32.202942 systemd[1]: Mounted media.mount - External Media Directory.
Mar 2 13:00:32.205834 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 2 13:00:32.209807 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 2 13:00:32.213025 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 2 13:00:32.216158 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 2 13:00:32.219943 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:00:32.223890 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 2 13:00:32.224215 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 2 13:00:32.228015 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:00:32.228348 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:00:32.232105 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:00:32.232414 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:00:32.235895 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:00:32.236199 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:00:32.240024 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 2 13:00:32.240330 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 2 13:00:32.243924 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:00:32.244225 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:00:32.247738 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:00:32.251113 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 2 13:00:32.254857 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 2 13:00:32.281281 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 2 13:00:32.318136 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 2 13:00:32.323686 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 2 13:00:32.326832 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 2 13:00:32.326897 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:00:32.331562 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 2 13:00:32.346758 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 2 13:00:32.352175 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 2 13:00:32.355089 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:00:32.357698 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 2 13:00:32.362591 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 2 13:00:32.366866 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:00:32.369363 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 2 13:00:32.373282 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:00:32.380631 systemd-journald[1137]: Time spent on flushing to /var/log/journal/ba0497fea0134c0086a596378bd26170 is 142.025ms for 981 entries.
Mar 2 13:00:32.380631 systemd-journald[1137]: System Journal (/var/log/journal/ba0497fea0134c0086a596378bd26170) is 8.0M, max 195.6M, 187.6M free.
Mar 2 13:00:32.649317 systemd-journald[1137]: Received client request to flush runtime journal.
Mar 2 13:00:32.649960 kernel: loop0: detected capacity change from 0 to 219192
Mar 2 13:00:32.375496 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:00:32.385887 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 2 13:00:32.651824 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:00:32.658051 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:00:32.661651 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 2 13:00:32.670146 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 2 13:00:32.675645 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 2 13:00:32.679802 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 2 13:00:32.685747 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 2 13:00:32.689837 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 2 13:00:32.700123 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 2 13:00:32.710168 kernel: loop1: detected capacity change from 0 to 140768
Mar 2 13:00:32.713926 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 2 13:00:32.723673 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 2 13:00:32.728602 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:00:32.825884 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 2 13:00:32.830255 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Mar 2 13:00:32.830278 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Mar 2 13:00:32.831641 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 2 13:00:32.845924 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:00:32.850510 kernel: loop2: detected capacity change from 0 to 142488
Mar 2 13:00:32.866030 systemd[1]: Starting systemd-sysusers.service - Create System Users...
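Note: the two journal-size lines above show journald's split storage model: a volatile runtime journal under /run (here capped at 48.3M) that is flushed into the persistent system journal under /var/log/journal once it becomes writable ("Received client request to flush runtime journal"). The caps in this log are computed from filesystem size; they can be pinned in journald.conf, e.g. (illustrative values, not Flatcar's shipped defaults):

    [Journal]
    Storage=persistent
    RuntimeMaxUse=48M
    SystemMaxUse=196M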
Mar 2 13:00:32.872653 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 2 13:00:32.933519 kernel: loop3: detected capacity change from 0 to 219192
Mar 2 13:00:32.941076 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 2 13:00:32.952487 kernel: loop4: detected capacity change from 0 to 140768
Mar 2 13:00:33.040023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:00:33.042251 kernel: loop5: detected capacity change from 0 to 142488
Mar 2 13:00:33.073057 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 2 13:00:33.074397 (sd-merge)[1190]: Merged extensions into '/usr'.
Mar 2 13:00:33.081109 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 2 13:00:33.081277 systemd[1]: Reloading...
Mar 2 13:00:33.115602 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Mar 2 13:00:33.115638 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Mar 2 13:00:33.506523 zram_generator::config[1216]: No configuration found.
Mar 2 13:00:33.700398 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 2 13:00:33.744530 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:00:33.807030 systemd[1]: Reloading finished in 724 ms.
Mar 2 13:00:33.846693 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 2 13:00:33.851306 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 2 13:00:33.855819 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:00:34.189211 systemd[1]: Starting ensure-sysext.service...
Mar 2 13:00:34.193317 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:00:34.203209 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Mar 2 13:00:34.203253 systemd[1]: Reloading...
Mar 2 13:00:34.274708 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 2 13:00:34.275147 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 2 13:00:34.277253 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 2 13:00:34.277778 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Mar 2 13:00:34.280711 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Mar 2 13:00:34.297252 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:00:34.297421 systemd-tmpfiles[1259]: Skipping /boot
Mar 2 13:00:34.304572 zram_generator::config[1285]: No configuration found.
Mar 2 13:00:34.333879 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:00:34.334048 systemd-tmpfiles[1259]: Skipping /boot
Mar 2 13:00:34.458781 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
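Note: the (sd-merge) lines above are systemd-sysext overlaying the three extension images onto /usr; the loop0-loop5 "detected capacity change" kernel messages correspond to those .raw images being attached as loop devices (each of the three sizes, 219192/140768/142488, shows up twice). A sysext image only merges if it carries an extension-release file matching the host OS; a sketch of what the kubernetes image would contain (fields as I recall them from Flatcar's sysext docs, so verify against the actual image):

    # usr/lib/extension-release.d/extension-release.kubernetes (inside the .raw image)
    ID=flatcar
    SYSEXT_LEVEL=1.0

After the merge, systemd reloads its unit database because the merged /usr can contribute new unit files; that is the "Reloading requested from client PID 1167 ('systemd-sysext')" line.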
Mar 2 13:00:34.543759 systemd[1]: Reloading finished in 339 ms.
Mar 2 13:00:34.577264 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 2 13:00:34.595289 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:00:34.620998 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 2 13:00:34.626874 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 2 13:00:34.632078 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 2 13:00:34.640297 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:00:34.649896 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:00:34.655950 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 2 13:00:34.662936 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:00:34.663217 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:00:34.665436 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:00:34.680248 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:00:34.691989 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:00:34.695800 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:00:34.708899 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 2 13:00:34.713610 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:00:34.716106 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 2 13:00:34.721954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:00:34.722246 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:00:34.727688 augenrules[1348]: No rules
Mar 2 13:00:34.729371 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 2 13:00:34.735195 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:00:34.735657 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:00:34.740155 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
Mar 2 13:00:34.741606 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:00:34.741956 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:00:34.747855 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 2 13:00:34.765626 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 2 13:00:34.776940 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 2 13:00:34.796924 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:00:34.803785 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:00:34.805873 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:00:34.812871 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:00:34.823199 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:00:34.831919 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:00:34.841419 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:00:34.848046 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:00:34.854920 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:00:34.869007 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 2 13:00:34.872982 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 2 13:00:34.873155 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:00:34.876649 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:00:34.876963 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:00:34.883737 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:00:34.884047 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:00:34.890924 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:00:34.891237 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:00:34.899876 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:00:34.900164 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:00:34.905570 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 2 13:00:34.921277 systemd[1]: Finished ensure-sysext.service.
Mar 2 13:00:34.927511 systemd-resolved[1330]: Positive Trust Anchors:
Mar 2 13:00:34.928305 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:00:34.928388 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:00:34.940182 systemd-resolved[1330]: Defaulting to hostname 'linux'.
Mar 2 13:00:34.946018 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:00:35.275711 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 2 13:00:35.275855 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:00:35.280759 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:00:35.280950 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:00:35.300653 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1382)
Mar 2 13:00:35.307742 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 2 13:00:35.528488 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 2 13:00:35.535510 kernel: ACPI: button: Power Button [PWRF]
Mar 2 13:00:35.543811 systemd-networkd[1389]: lo: Link UP
Mar 2 13:00:35.543832 systemd-networkd[1389]: lo: Gained carrier
Mar 2 13:00:35.547366 systemd-networkd[1389]: Enumeration completed
Mar 2 13:00:35.547649 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:00:35.552289 systemd[1]: Reached target network.target - Network.
Mar 2 13:00:35.559755 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:00:35.559773 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 13:00:35.561323 systemd-networkd[1389]: eth0: Link UP
Mar 2 13:00:35.561338 systemd-networkd[1389]: eth0: Gained carrier
Mar 2 13:00:35.561358 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:00:35.568936 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 2 13:00:35.578636 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 2 13:00:35.581421 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 13:00:35.628413 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 2 13:00:35.634332 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 2 13:00:35.649954 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 2 13:00:35.650403 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 2 13:00:35.628311 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 2 13:00:35.675358 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 2 13:00:35.693361 systemd[1]: Reached target time-set.target - System Time Set.
Mar 2 13:00:35.234288 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 2 13:00:35.252854 systemd-journald[1137]: Time jumped backwards, rotating.
Mar 2 13:00:35.252949 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 2 13:00:35.234400 systemd-timesyncd[1404]: Initial clock synchronization to Mon 2026-03-02 13:00:35.233937 UTC.
Mar 2 13:00:35.243957 systemd-resolved[1330]: Clock change detected. Flushing caches.
Mar 2 13:00:35.264748 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 2 13:00:35.560077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:00:35.588028 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:00:35.588499 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:00:35.619108 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
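Note: eth0 is matched by Flatcar's lowest-priority catch-all /usr/lib/systemd/network/zz-default.network, which turns on DHCP for any interface not claimed by a more specific .network file; that is also why networkd warns the match is "based on potentially unpredictable interface name". Its effective content is approximately (a sketch; the shipped file may carry additional match rules):

    [Match]
    Name=*

    [Network]
    DHCP=yes

The out-of-order timestamps in this stretch are genuine, not garbling: systemd-timesyncd steps the clock backwards (13:00:35.69 to 13:00:35.23), so journald rotates and resolved flushes its caches.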
Mar 2 13:00:35.873991 kernel: mousedev: PS/2 mouse device common for all mice
Mar 2 13:00:35.931865 kernel: kvm_amd: TSC scaling supported
Mar 2 13:00:35.932009 kernel: kvm_amd: Nested Virtualization enabled
Mar 2 13:00:35.932033 kernel: kvm_amd: Nested Paging enabled
Mar 2 13:00:35.935840 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 2 13:00:35.935940 kernel: kvm_amd: PMU virtualization is disabled
Mar 2 13:00:36.002713 kernel: EDAC MC: Ver: 3.0.0
Mar 2 13:00:36.032064 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:00:36.051386 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 2 13:00:36.064186 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 2 13:00:36.271228 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 13:00:36.324610 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 2 13:00:36.328737 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:00:36.332015 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:00:36.335399 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 2 13:00:36.339147 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 2 13:00:36.342947 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 2 13:00:36.346077 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 2 13:00:36.349852 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 2 13:00:36.353389 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 2 13:00:36.353449 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:00:36.356081 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:00:36.360271 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 2 13:00:36.365985 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 2 13:00:36.383734 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 2 13:00:36.388805 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 2 13:00:36.394118 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 2 13:00:36.400049 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:00:36.403858 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:00:36.407815 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:00:36.407884 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:00:36.410384 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 2 13:00:36.422773 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 2 13:00:36.428835 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 2 13:00:36.429110 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 13:00:36.436920 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 2 13:00:36.441436 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 2 13:00:36.446859 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 2 13:00:36.460536 dbus-daemon[1434]: [system] SELinux support is enabled
Mar 2 13:00:36.466846 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 2 13:00:36.474472 jq[1435]: false
Mar 2 13:00:36.474830 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 2 13:00:36.480433 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 2 13:00:36.488443 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 2 13:00:36.493002 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 2 13:00:36.493810 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 2 13:00:36.495693 extend-filesystems[1436]: Found loop3
Mar 2 13:00:36.501473 extend-filesystems[1436]: Found loop4
Mar 2 13:00:36.501473 extend-filesystems[1436]: Found loop5
Mar 2 13:00:36.501473 extend-filesystems[1436]: Found sr0
Mar 2 13:00:36.501473 extend-filesystems[1436]: Found vda
Mar 2 13:00:36.501473 extend-filesystems[1436]: Found vda1
Mar 2 13:00:36.501473 extend-filesystems[1436]: Found vda2
Mar 2 13:00:36.501473 extend-filesystems[1436]: Found vda3
Mar 2 13:00:36.501473 extend-filesystems[1436]: Found usr
Mar 2 13:00:36.501473 extend-filesystems[1436]: Found vda4
Mar 2 13:00:36.501473 extend-filesystems[1436]: Found vda6
Mar 2 13:00:36.501473 extend-filesystems[1436]: Found vda7
Mar 2 13:00:36.501473 extend-filesystems[1436]: Found vda9
Mar 2 13:00:36.501473 extend-filesystems[1436]: Checking size of /dev/vda9
Mar 2 13:00:36.565692 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 2 13:00:36.565724 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1375)
Mar 2 13:00:36.499910 systemd[1]: Starting update-engine.service - Update Engine...
Mar 2 13:00:36.566105 extend-filesystems[1436]: Resized partition /dev/vda9
Mar 2 13:00:36.565465 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 2 13:00:36.569954 extend-filesystems[1455]: resize2fs 1.47.1 (20-May-2024)
Mar 2 13:00:36.577481 jq[1457]: true
Mar 2 13:00:36.579530 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 2 13:00:36.595517 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 2 13:00:36.595921 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 2 13:00:36.605354 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 2 13:00:36.617760 extend-filesystems[1455]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 2 13:00:36.617760 extend-filesystems[1455]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 2 13:00:36.617760 extend-filesystems[1455]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 2 13:00:36.605836 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
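Note: the on-line resize grows the root filesystem from 553,472 to 1,864,699 4 KiB blocks, i.e. from roughly 2.1 GiB to roughly 7.1 GiB (1,864,699 × 4,096 bytes ≈ 7.64 GB): the usual first-boot step of expanding ROOT to fill the disk QEMU provided.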
Mar 2 13:00:36.630119 update_engine[1448]: I20260302 13:00:36.628621 1448 main.cc:92] Flatcar Update Engine starting
Mar 2 13:00:36.630475 extend-filesystems[1436]: Resized filesystem in /dev/vda9
Mar 2 13:00:36.606364 systemd[1]: motdgen.service: Deactivated successfully.
Mar 2 13:00:36.634849 update_engine[1448]: I20260302 13:00:36.631903 1448 update_check_scheduler.cc:74] Next update check in 8m38s
Mar 2 13:00:36.606780 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 2 13:00:36.634741 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 2 13:00:36.634774 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 2 13:00:36.637817 systemd-logind[1447]: New seat seat0.
Mar 2 13:00:36.641918 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 2 13:00:36.642222 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 2 13:00:36.647441 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 2 13:00:36.651556 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 2 13:00:36.652059 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 2 13:00:36.666101 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 2 13:00:36.669274 jq[1463]: true
Mar 2 13:00:36.684356 systemd-networkd[1389]: eth0: Gained IPv6LL
Mar 2 13:00:36.695194 dbus-daemon[1434]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 2 13:00:36.701751 tar[1460]: linux-amd64/LICENSE
Mar 2 13:00:36.702158 tar[1460]: linux-amd64/helm
Mar 2 13:00:36.702213 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 2 13:00:36.784322 systemd[1]: Started update-engine.service - Update Engine.
Mar 2 13:00:36.793402 systemd[1]: Reached target network-online.target - Network is Online.
Mar 2 13:00:36.807375 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 2 13:00:36.823834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:00:36.832443 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 2 13:00:36.838171 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 2 13:00:36.838514 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 2 13:00:36.846102 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 2 13:00:36.846281 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 2 13:00:36.852129 bash[1491]: Updated "/home/core/.ssh/authorized_keys"
Mar 2 13:00:36.866112 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 2 13:00:36.880533 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 2 13:00:36.881765 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 2 13:00:36.938108 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
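Note: update_engine and locksmithd are both configured by the /etc/flatcar/update.conf that Ignition wrote in op(8); the file's contents are not shown in the log. A typical file looks like the following (an assumption, not the actual content): update_engine reads GROUP (and optionally SERVER), while locksmithd reads REBOOT_STRATEGY, matching the strategy="reboot" it logs just below.

    GROUP=stable
    REBOOT_STRATEGY=reboot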
Mar 2 13:00:36.939946 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 2 13:00:37.055227 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 2 13:00:37.055539 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 2 13:00:37.061783 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 2 13:00:37.080302 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 2 13:00:37.082444 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 2 13:00:37.086783 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 2 13:00:37.105176 systemd[1]: issuegen.service: Deactivated successfully. Mar 2 13:00:37.105476 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 2 13:00:37.240299 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 2 13:00:37.354898 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 2 13:00:37.367132 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 2 13:00:37.374051 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 2 13:00:37.378874 systemd[1]: Reached target getty.target - Login Prompts. Mar 2 13:00:38.446379 containerd[1464]: time="2026-03-02T13:00:38.445959466Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 2 13:00:38.572153 containerd[1464]: time="2026-03-02T13:00:38.571795119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 2 13:00:38.579620 containerd[1464]: time="2026-03-02T13:00:38.577507854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:00:38.579620 containerd[1464]: time="2026-03-02T13:00:38.577616287Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 2 13:00:38.579620 containerd[1464]: time="2026-03-02T13:00:38.577638598Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 2 13:00:38.579620 containerd[1464]: time="2026-03-02T13:00:38.578030651Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 2 13:00:38.579620 containerd[1464]: time="2026-03-02T13:00:38.578113986Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 2 13:00:38.579620 containerd[1464]: time="2026-03-02T13:00:38.578302017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:00:38.579620 containerd[1464]: time="2026-03-02T13:00:38.578332324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 2 13:00:38.579620 containerd[1464]: time="2026-03-02T13:00:38.578799296Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:00:38.579620 containerd[1464]: time="2026-03-02T13:00:38.578827268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 2 13:00:38.579620 containerd[1464]: time="2026-03-02T13:00:38.578872823Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:00:38.579620 containerd[1464]: time="2026-03-02T13:00:38.578932304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 2 13:00:38.580364 containerd[1464]: time="2026-03-02T13:00:38.579156944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 2 13:00:38.580549 containerd[1464]: time="2026-03-02T13:00:38.580528755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 2 13:00:38.580883 containerd[1464]: time="2026-03-02T13:00:38.580859232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:00:38.580942 containerd[1464]: time="2026-03-02T13:00:38.580928241Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 2 13:00:38.581186 containerd[1464]: time="2026-03-02T13:00:38.581166336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 2 13:00:38.581330 containerd[1464]: time="2026-03-02T13:00:38.581312339Z" level=info msg="metadata content store policy set" policy=shared Mar 2 13:00:38.590259 containerd[1464]: time="2026-03-02T13:00:38.590200668Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 2 13:00:38.590437 containerd[1464]: time="2026-03-02T13:00:38.590393439Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 2 13:00:38.590468 containerd[1464]: time="2026-03-02T13:00:38.590448171Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 2 13:00:38.590489 containerd[1464]: time="2026-03-02T13:00:38.590477024Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 2 13:00:38.590520 containerd[1464]: time="2026-03-02T13:00:38.590504586Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 2 13:00:38.590909 containerd[1464]: time="2026-03-02T13:00:38.590855351Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 2 13:00:38.591626 containerd[1464]: time="2026-03-02T13:00:38.591526023Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 2 13:00:38.591889 containerd[1464]: time="2026-03-02T13:00:38.591842364Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Mar 2 13:00:38.591918 containerd[1464]: time="2026-03-02T13:00:38.591894021Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 2 13:00:38.592054 containerd[1464]: time="2026-03-02T13:00:38.592005659Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 2 13:00:38.592054 containerd[1464]: time="2026-03-02T13:00:38.592044582Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 2 13:00:38.592124 containerd[1464]: time="2026-03-02T13:00:38.592065992Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 2 13:00:38.592124 containerd[1464]: time="2026-03-02T13:00:38.592103101Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 2 13:00:38.592179 containerd[1464]: time="2026-03-02T13:00:38.592123159Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 2 13:00:38.592179 containerd[1464]: time="2026-03-02T13:00:38.592143517Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 2 13:00:38.592270 containerd[1464]: time="2026-03-02T13:00:38.592208428Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 2 13:00:38.592270 containerd[1464]: time="2026-03-02T13:00:38.592262499Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 2 13:00:38.592338 containerd[1464]: time="2026-03-02T13:00:38.592280823Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 2 13:00:38.592380 containerd[1464]: time="2026-03-02T13:00:38.592370461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 2 13:00:38.592415 containerd[1464]: time="2026-03-02T13:00:38.592390528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 2 13:00:38.592507 containerd[1464]: time="2026-03-02T13:00:38.592476759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 2 13:00:38.592546 containerd[1464]: time="2026-03-02T13:00:38.592509030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 2 13:00:38.592613 containerd[1464]: time="2026-03-02T13:00:38.592545588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 2 13:00:38.592880 containerd[1464]: time="2026-03-02T13:00:38.592700116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 2 13:00:38.592880 containerd[1464]: time="2026-03-02T13:00:38.592740502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 2 13:00:38.592880 containerd[1464]: time="2026-03-02T13:00:38.592790505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 2 13:00:38.592880 containerd[1464]: time="2026-03-02T13:00:38.592809941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Mar 2 13:00:38.592880 containerd[1464]: time="2026-03-02T13:00:38.592831041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 2 13:00:38.592880 containerd[1464]: time="2026-03-02T13:00:38.592848533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 2 13:00:38.593013 containerd[1464]: time="2026-03-02T13:00:38.592884941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 2 13:00:38.593013 containerd[1464]: time="2026-03-02T13:00:38.592909597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 2 13:00:38.593013 containerd[1464]: time="2026-03-02T13:00:38.592931067Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 2 13:00:38.593013 containerd[1464]: time="2026-03-02T13:00:38.592972995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 2 13:00:38.593013 containerd[1464]: time="2026-03-02T13:00:38.592990098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 2 13:00:38.593013 containerd[1464]: time="2026-03-02T13:00:38.593004745Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 2 13:00:38.597107 containerd[1464]: time="2026-03-02T13:00:38.593172257Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 2 13:00:38.597107 containerd[1464]: time="2026-03-02T13:00:38.597044258Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 2 13:00:38.597107 containerd[1464]: time="2026-03-02T13:00:38.597437032Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 2 13:00:38.598415 containerd[1464]: time="2026-03-02T13:00:38.598367048Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 2 13:00:38.598415 containerd[1464]: time="2026-03-02T13:00:38.598388208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 2 13:00:38.598534 containerd[1464]: time="2026-03-02T13:00:38.598490659Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 2 13:00:38.598743 containerd[1464]: time="2026-03-02T13:00:38.598538428Z" level=info msg="NRI interface is disabled by configuration." Mar 2 13:00:38.598743 containerd[1464]: time="2026-03-02T13:00:38.598594503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 2 13:00:38.599926 containerd[1464]: time="2026-03-02T13:00:38.599769046Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 2 13:00:38.599926 containerd[1464]: time="2026-03-02T13:00:38.599877328Z" level=info msg="Connect containerd service" Mar 2 13:00:38.600965 containerd[1464]: time="2026-03-02T13:00:38.600009184Z" level=info msg="using legacy CRI server" Mar 2 13:00:38.600965 containerd[1464]: time="2026-03-02T13:00:38.600048658Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 2 13:00:38.600965 containerd[1464]: time="2026-03-02T13:00:38.600406877Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 2 13:00:38.602476 containerd[1464]: time="2026-03-02T13:00:38.602378379Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 2 13:00:38.602974 
containerd[1464]: time="2026-03-02T13:00:38.602802731Z" level=info msg="Start subscribing containerd event" Mar 2 13:00:38.681256 containerd[1464]: time="2026-03-02T13:00:38.678101072Z" level=info msg="Start recovering state" Mar 2 13:00:38.682454 containerd[1464]: time="2026-03-02T13:00:38.681767656Z" level=info msg="Start event monitor" Mar 2 13:00:38.682454 containerd[1464]: time="2026-03-02T13:00:38.681968401Z" level=info msg="Start snapshots syncer" Mar 2 13:00:38.682454 containerd[1464]: time="2026-03-02T13:00:38.682029455Z" level=info msg="Start cni network conf syncer for default" Mar 2 13:00:38.682454 containerd[1464]: time="2026-03-02T13:00:38.682052067Z" level=info msg="Start streaming server" Mar 2 13:00:38.683705 containerd[1464]: time="2026-03-02T13:00:38.683624082Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 2 13:00:38.684341 containerd[1464]: time="2026-03-02T13:00:38.684273506Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 2 13:00:38.688722 systemd[1]: Started containerd.service - containerd container runtime. Mar 2 13:00:38.695617 containerd[1464]: time="2026-03-02T13:00:38.692947092Z" level=info msg="containerd successfully booted in 0.251226s" Mar 2 13:00:38.865750 tar[1460]: linux-amd64/README.md Mar 2 13:00:38.888814 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 2 13:00:40.245198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:00:40.249734 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 2 13:00:40.253838 systemd[1]: Startup finished in 1.395s (kernel) + 12.373s (initrd) + 10.573s (userspace) = 24.342s. Mar 2 13:00:40.261406 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:00:41.029842 kubelet[1548]: E0302 13:00:41.029556 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:00:41.034356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:00:41.034833 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:00:41.035459 systemd[1]: kubelet.service: Consumed 3.459s CPU time. Mar 2 13:00:45.599385 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 2 13:00:45.623773 systemd[1]: Started sshd@0-10.0.0.51:22-10.0.0.1:58078.service - OpenSSH per-connection server daemon (10.0.0.1:58078). Mar 2 13:00:45.698238 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 58078 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:00:45.701255 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:45.728381 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 2 13:00:45.745125 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 2 13:00:45.747899 systemd-logind[1447]: New session 1 of user core. Mar 2 13:00:45.769295 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 2 13:00:45.791397 systemd[1]: Starting user@500.service - User Manager for UID 500... 
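The kubelet failure above is expected on a first boot and recurs below (restart counters 1 through 4): the unit starts before kubeadm has written /var/lib/kubelet/config.yaml, the config read fails, and the process exits with status 1 until the file exists. A minimal Go sketch of that failure mode, mirroring only the error chain visible in the log (hypothetical code, not the kubelet's actual source):

    package main

    import (
        "fmt"
        "os"
    )

    // loadConfig reproduces the logged error chain: reading a kubelet
    // config file that has not been written yet.
    func loadConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("failed to load kubelet config file, path: %s, error: %w", path, err)
        }
        return data, nil
    }

    func main() {
        if _, err := loadConfig("/var/lib/kubelet/config.yaml"); err != nil {
            fmt.Fprintln(os.Stderr, err) // ends in "no such file or directory"
            os.Exit(1)                   // systemd records status=1/FAILURE and schedules a restart
        }
    }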
Mar 2 13:00:45.796444 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 2 13:00:45.979090 systemd[1565]: Queued start job for default target default.target. Mar 2 13:00:45.992460 systemd[1565]: Created slice app.slice - User Application Slice. Mar 2 13:00:45.992519 systemd[1565]: Reached target paths.target - Paths. Mar 2 13:00:45.992536 systemd[1565]: Reached target timers.target - Timers. Mar 2 13:00:45.994874 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 2 13:00:46.013050 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 2 13:00:46.013974 systemd[1565]: Reached target sockets.target - Sockets. Mar 2 13:00:46.014065 systemd[1565]: Reached target basic.target - Basic System. Mar 2 13:00:46.014212 systemd[1565]: Reached target default.target - Main User Target. Mar 2 13:00:46.014272 systemd[1565]: Startup finished in 207ms. Mar 2 13:00:46.014405 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 2 13:00:46.026071 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 2 13:00:46.239629 systemd[1]: Started sshd@1-10.0.0.51:22-10.0.0.1:58090.service - OpenSSH per-connection server daemon (10.0.0.1:58090). Mar 2 13:00:46.390090 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 58090 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:00:46.392922 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:46.402351 systemd-logind[1447]: New session 2 of user core. Mar 2 13:00:46.411871 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 2 13:00:46.485946 sshd[1576]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:46.494406 systemd[1]: sshd@1-10.0.0.51:22-10.0.0.1:58090.service: Deactivated successfully. Mar 2 13:00:46.497025 systemd[1]: session-2.scope: Deactivated successfully. Mar 2 13:00:46.499165 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Mar 2 13:00:46.512932 systemd[1]: Started sshd@2-10.0.0.51:22-10.0.0.1:58102.service - OpenSSH per-connection server daemon (10.0.0.1:58102). Mar 2 13:00:46.536462 systemd-logind[1447]: Removed session 2. Mar 2 13:00:46.572503 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 58102 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:00:46.575383 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:46.581799 systemd-logind[1447]: New session 3 of user core. Mar 2 13:00:46.595930 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 2 13:00:46.653278 sshd[1583]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:46.663002 systemd[1]: sshd@2-10.0.0.51:22-10.0.0.1:58102.service: Deactivated successfully. Mar 2 13:00:46.665359 systemd[1]: session-3.scope: Deactivated successfully. Mar 2 13:00:46.666938 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Mar 2 13:00:46.681971 systemd[1]: Started sshd@3-10.0.0.51:22-10.0.0.1:58114.service - OpenSSH per-connection server daemon (10.0.0.1:58114). Mar 2 13:00:46.683265 systemd-logind[1447]: Removed session 3. 
Mar 2 13:00:46.715934 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 58114 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:00:46.718048 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:46.724655 systemd-logind[1447]: New session 4 of user core. Mar 2 13:00:46.743233 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 2 13:00:46.826066 sshd[1590]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:46.840906 systemd[1]: sshd@3-10.0.0.51:22-10.0.0.1:58114.service: Deactivated successfully. Mar 2 13:00:46.843607 systemd[1]: session-4.scope: Deactivated successfully. Mar 2 13:00:46.845952 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Mar 2 13:00:46.858103 systemd[1]: Started sshd@4-10.0.0.51:22-10.0.0.1:58128.service - OpenSSH per-connection server daemon (10.0.0.1:58128). Mar 2 13:00:46.859615 systemd-logind[1447]: Removed session 4. Mar 2 13:00:46.929307 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 58128 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:00:46.932221 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:46.940053 systemd-logind[1447]: New session 5 of user core. Mar 2 13:00:46.950857 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 2 13:00:47.059042 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 2 13:00:47.059943 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:00:47.081124 sudo[1600]: pam_unix(sudo:session): session closed for user root Mar 2 13:00:47.088175 sshd[1597]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:47.100885 systemd[1]: sshd@4-10.0.0.51:22-10.0.0.1:58128.service: Deactivated successfully. Mar 2 13:00:47.103060 systemd[1]: session-5.scope: Deactivated successfully. Mar 2 13:00:47.105293 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Mar 2 13:00:47.164405 systemd[1]: Started sshd@5-10.0.0.51:22-10.0.0.1:58136.service - OpenSSH per-connection server daemon (10.0.0.1:58136). Mar 2 13:00:47.178754 systemd-logind[1447]: Removed session 5. Mar 2 13:00:47.293213 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 58136 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:00:47.304734 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:47.337275 systemd-logind[1447]: New session 6 of user core. Mar 2 13:00:47.349471 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 2 13:00:47.437090 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 2 13:00:47.437775 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:00:47.444865 sudo[1609]: pam_unix(sudo:session): session closed for user root Mar 2 13:00:47.458133 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 2 13:00:47.458924 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:00:48.374994 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 2 13:00:48.477936 auditctl[1612]: No rules Mar 2 13:00:48.479832 systemd[1]: audit-rules.service: Deactivated successfully. 
Mar 2 13:00:48.480322 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 2 13:00:48.542926 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 2 13:00:48.676919 augenrules[1630]: No rules Mar 2 13:00:48.678144 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 2 13:00:48.680611 sudo[1608]: pam_unix(sudo:session): session closed for user root Mar 2 13:00:48.687685 sshd[1605]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:48.697075 systemd[1]: sshd@5-10.0.0.51:22-10.0.0.1:58136.service: Deactivated successfully. Mar 2 13:00:48.699374 systemd[1]: session-6.scope: Deactivated successfully. Mar 2 13:00:48.700618 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Mar 2 13:00:48.714476 systemd[1]: Started sshd@6-10.0.0.51:22-10.0.0.1:58148.service - OpenSSH per-connection server daemon (10.0.0.1:58148). Mar 2 13:00:48.717468 systemd-logind[1447]: Removed session 6. Mar 2 13:00:48.787288 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 58148 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:00:48.790021 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:48.799894 systemd-logind[1447]: New session 7 of user core. Mar 2 13:00:48.809815 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 2 13:00:48.967053 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 2 13:00:48.967522 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:00:51.505946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 2 13:00:53.554858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:00:54.173070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:00:54.179105 (kubelet)[1666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:00:54.656647 kubelet[1666]: E0302 13:00:54.653439 1666 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:00:54.662229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:00:54.662483 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:00:54.662970 systemd[1]: kubelet.service: Consumed 1.187s CPU time. Mar 2 13:00:54.718925 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 2 13:00:54.721876 (dockerd)[1677]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 2 13:00:56.393281 dockerd[1677]: time="2026-03-02T13:00:56.393041160Z" level=info msg="Starting up" Mar 2 13:00:57.399319 dockerd[1677]: time="2026-03-02T13:00:57.398695520Z" level=info msg="Loading containers: start." Mar 2 13:00:57.950692 kernel: Initializing XFRM netlink socket Mar 2 13:00:58.255614 systemd-networkd[1389]: docker0: Link UP Mar 2 13:00:58.312087 dockerd[1677]: time="2026-03-02T13:00:58.311806341Z" level=info msg="Loading containers: done." 
Mar 2 13:00:58.399619 dockerd[1677]: time="2026-03-02T13:00:58.399402117Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 2 13:00:58.400018 dockerd[1677]: time="2026-03-02T13:00:58.399860663Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 2 13:00:58.400490 dockerd[1677]: time="2026-03-02T13:00:58.400229391Z" level=info msg="Daemon has completed initialization" Mar 2 13:00:58.708321 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 2 13:00:58.711054 dockerd[1677]: time="2026-03-02T13:00:58.708884760Z" level=info msg="API listen on /run/docker.sock" Mar 2 13:01:02.273888 containerd[1464]: time="2026-03-02T13:01:02.273537451Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 2 13:01:03.480184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount974781256.mount: Deactivated successfully. Mar 2 13:01:05.045135 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 2 13:01:05.093685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:01:05.668901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:01:05.669280 (kubelet)[1886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:01:05.880280 kubelet[1886]: E0302 13:01:05.880070 1886 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:01:05.886203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:01:05.886533 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 2 13:01:07.641833 containerd[1464]: time="2026-03-02T13:01:07.641651779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:07.642864 containerd[1464]: time="2026-03-02T13:01:07.641995992Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 2 13:01:07.644228 containerd[1464]: time="2026-03-02T13:01:07.644072480Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:07.854494 containerd[1464]: time="2026-03-02T13:01:07.854244128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:07.856937 containerd[1464]: time="2026-03-02T13:01:07.856749453Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 5.582880624s" Mar 2 13:01:07.857035 containerd[1464]: time="2026-03-02T13:01:07.856942694Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 2 13:01:07.870182 containerd[1464]: time="2026-03-02T13:01:07.870100447Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 2 13:01:11.298315 containerd[1464]: time="2026-03-02T13:01:11.298117568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:11.299754 containerd[1464]: time="2026-03-02T13:01:11.298974119Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 2 13:01:11.300438 containerd[1464]: time="2026-03-02T13:01:11.300351634Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:11.305641 containerd[1464]: time="2026-03-02T13:01:11.305531535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:11.307830 containerd[1464]: time="2026-03-02T13:01:11.307485756Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 3.437311313s" Mar 2 13:01:11.307830 containerd[1464]: time="2026-03-02T13:01:11.307535083Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 2 13:01:11.310100 containerd[1464]: 
time="2026-03-02T13:01:11.310067444Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 2 13:01:13.659803 containerd[1464]: time="2026-03-02T13:01:13.659636344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:13.661894 containerd[1464]: time="2026-03-02T13:01:13.661858882Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 2 13:01:13.664024 containerd[1464]: time="2026-03-02T13:01:13.663953274Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:13.668763 containerd[1464]: time="2026-03-02T13:01:13.668708930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:13.670687 containerd[1464]: time="2026-03-02T13:01:13.670645110Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 2.360536815s" Mar 2 13:01:13.670819 containerd[1464]: time="2026-03-02T13:01:13.670691513Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 2 13:01:13.673138 containerd[1464]: time="2026-03-02T13:01:13.672875975Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 2 13:01:16.074371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 2 13:01:16.085889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:01:16.541294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:01:16.559346 (kubelet)[1923]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:01:16.740519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3218577185.mount: Deactivated successfully. Mar 2 13:01:17.052904 kubelet[1923]: E0302 13:01:17.052736 1923 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:01:17.057059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:01:17.057382 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:01:17.058026 systemd[1]: kubelet.service: Consumed 1.069s CPU time. 
Mar 2 13:01:17.687627 containerd[1464]: time="2026-03-02T13:01:17.687390179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:17.688700 containerd[1464]: time="2026-03-02T13:01:17.688221361Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 2 13:01:17.690237 containerd[1464]: time="2026-03-02T13:01:17.690054019Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:17.700506 containerd[1464]: time="2026-03-02T13:01:17.700116597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:17.700506 containerd[1464]: time="2026-03-02T13:01:17.700538653Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 4.027633136s" Mar 2 13:01:17.700506 containerd[1464]: time="2026-03-02T13:01:17.700629708Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 2 13:01:17.703215 containerd[1464]: time="2026-03-02T13:01:17.703148910Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 2 13:01:18.459237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3739214370.mount: Deactivated successfully. 
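Rough pull throughput, taking containerd's bytes-read and duration figures at face value: kube-apiserver moved 27074497 B in 5.583 s, about 4.8 MB/s, and kube-proxy 25861770 B in 4.028 s, about 6.4 MB/s. The bytes-read counter tracks registry transfer, which is why it differs slightly from the image size printed beside it.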
Mar 2 13:01:21.791246 containerd[1464]: time="2026-03-02T13:01:21.791040724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:21.792724 containerd[1464]: time="2026-03-02T13:01:21.791843450Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 2 13:01:21.793707 containerd[1464]: time="2026-03-02T13:01:21.793646443Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:21.799713 containerd[1464]: time="2026-03-02T13:01:21.799524079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:21.804003 containerd[1464]: time="2026-03-02T13:01:21.803903982Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 4.100715522s" Mar 2 13:01:21.804003 containerd[1464]: time="2026-03-02T13:01:21.803986632Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 2 13:01:21.806785 containerd[1464]: time="2026-03-02T13:01:21.806721248Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 2 13:01:21.936893 update_engine[1448]: I20260302 13:01:21.936273 1448 update_attempter.cc:509] Updating boot flags... Mar 2 13:01:22.305721 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1995) Mar 2 13:01:22.471874 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1993) Mar 2 13:01:22.562628 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1993) Mar 2 13:01:22.944918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2386716532.mount: Deactivated successfully. 
Mar 2 13:01:22.951944 containerd[1464]: time="2026-03-02T13:01:22.951870482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:22.953258 containerd[1464]: time="2026-03-02T13:01:22.953162652Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 2 13:01:22.954886 containerd[1464]: time="2026-03-02T13:01:22.954735977Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:22.957752 containerd[1464]: time="2026-03-02T13:01:22.957650699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:22.958686 containerd[1464]: time="2026-03-02T13:01:22.958511466Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.151729107s" Mar 2 13:01:22.958686 containerd[1464]: time="2026-03-02T13:01:22.958602271Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 2 13:01:22.960871 containerd[1464]: time="2026-03-02T13:01:22.960696880Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 2 13:01:23.652150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2845852170.mount: Deactivated successfully. Mar 2 13:01:25.173057 containerd[1464]: time="2026-03-02T13:01:25.172799376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:25.174057 containerd[1464]: time="2026-03-02T13:01:25.173460692Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 2 13:01:25.174929 containerd[1464]: time="2026-03-02T13:01:25.174853279Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:25.179631 containerd[1464]: time="2026-03-02T13:01:25.179465228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:01:25.181521 containerd[1464]: time="2026-03-02T13:01:25.181431917Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 2.220701907s" Mar 2 13:01:25.181521 containerd[1464]: time="2026-03-02T13:01:25.181501182Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 2 13:01:27.077100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
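Summing the seven pull durations reported between 13:01:02 and 13:01:25 (5.583 + 3.437 + 2.361 + 4.028 + 4.101 + 1.152 + 2.221 ≈ 22.9 s) accounts for nearly the entire wall-clock span, so the control-plane images are evidently pulled sequentially: each PullImage in this log starts only after the previous one returns.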
Mar 2 13:01:27.086980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:01:27.835314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:01:27.865264 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:01:28.186334 kubelet[2100]: E0302 13:01:28.185030 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:01:28.270162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:01:28.272974 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:01:28.287071 systemd[1]: kubelet.service: Consumed 1.002s CPU time. Mar 2 13:01:34.050433 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:01:34.050724 systemd[1]: kubelet.service: Consumed 1.002s CPU time. Mar 2 13:01:34.066741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:01:34.139192 systemd[1]: Reloading requested from client PID 2115 ('systemctl') (unit session-7.scope)... Mar 2 13:01:34.139380 systemd[1]: Reloading... Mar 2 13:01:34.387763 zram_generator::config[2154]: No configuration found. Mar 2 13:01:35.102052 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:01:36.106871 systemd[1]: Reloading finished in 1965 ms. Mar 2 13:01:36.601808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:01:36.670705 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 13:01:36.672051 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:01:36.672746 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 13:01:36.673302 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:01:36.703154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:01:38.200480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:01:38.273441 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 13:01:38.841948 kubelet[2204]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 2 13:01:38.846313 kubelet[2204]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 2 13:01:38.848044 kubelet[2204]: I0302 13:01:38.847818 2204 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 13:01:42.777826 kubelet[2204]: I0302 13:01:42.777512 2204 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 2 13:01:42.777826 kubelet[2204]: I0302 13:01:42.777708 2204 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 13:01:42.777826 kubelet[2204]: I0302 13:01:42.777857 2204 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 2 13:01:42.777826 kubelet[2204]: I0302 13:01:42.777866 2204 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 2 13:01:42.780150 kubelet[2204]: I0302 13:01:42.778760 2204 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 13:01:42.808684 kubelet[2204]: E0302 13:01:42.808526 2204 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:01:42.811186 kubelet[2204]: I0302 13:01:42.810977 2204 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:01:42.833205 kubelet[2204]: E0302 13:01:42.833080 2204 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 13:01:42.833205 kubelet[2204]: I0302 13:01:42.833206 2204 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 2 13:01:42.844410 kubelet[2204]: I0302 13:01:42.844263 2204 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 2 13:01:42.847081 kubelet[2204]: I0302 13:01:42.846893 2204 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 13:01:42.847478 kubelet[2204]: I0302 13:01:42.847018 2204 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 13:01:42.848332 kubelet[2204]: I0302 13:01:42.847512 2204 topology_manager.go:138] "Creating topology manager with none policy" Mar 2 13:01:42.848332 kubelet[2204]: I0302 13:01:42.847535 2204 container_manager_linux.go:306] "Creating device plugin manager" Mar 2 13:01:42.848332 kubelet[2204]: I0302 13:01:42.847833 2204 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 2 13:01:42.850629 kubelet[2204]: I0302 13:01:42.850485 2204 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:01:42.851151 kubelet[2204]: I0302 13:01:42.851088 2204 kubelet.go:475] "Attempting to sync node with API server" Mar 2 13:01:42.851213 kubelet[2204]: I0302 13:01:42.851174 2204 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 13:01:42.851298 kubelet[2204]: I0302 13:01:42.851287 2204 kubelet.go:387] "Adding apiserver pod source" Mar 2 13:01:42.851622 kubelet[2204]: I0302 13:01:42.851407 2204 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 13:01:42.854018 kubelet[2204]: E0302 13:01:42.853905 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 13:01:42.854302 kubelet[2204]: E0302 13:01:42.854228 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 13:01:42.854515 kubelet[2204]: I0302 13:01:42.854478 2204 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 13:01:42.855680 kubelet[2204]: I0302 13:01:42.855548 2204 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 13:01:42.855777 kubelet[2204]: I0302 13:01:42.855687 2204 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 2 13:01:42.855973 kubelet[2204]: W0302 13:01:42.855907 2204 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 2 13:01:42.862720 kubelet[2204]: I0302 13:01:42.862679 2204 server.go:1262] "Started kubelet" Mar 2 13:01:42.863412 kubelet[2204]: I0302 13:01:42.863337 2204 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 13:01:42.863497 kubelet[2204]: I0302 13:01:42.863466 2204 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 2 13:01:42.865202 kubelet[2204]: I0302 13:01:42.863946 2204 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 13:01:42.865202 kubelet[2204]: I0302 13:01:42.863946 2204 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 13:01:42.871604 kubelet[2204]: I0302 13:01:42.870365 2204 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 2 13:01:42.871604 kubelet[2204]: I0302 13:01:42.870917 2204 server.go:310] "Adding debug handlers to kubelet server" Mar 2 13:01:42.873971 kubelet[2204]: E0302 13:01:42.872653 2204 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189907cc68f3d558 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:01:42.862533976 +0000 UTC m=+4.565140803,LastTimestamp:2026-03-02 13:01:42.862533976 +0000 UTC m=+4.565140803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 13:01:42.875679 kubelet[2204]: I0302 13:01:42.875655 2204 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 13:01:42.878797 kubelet[2204]: E0302 13:01:42.878775 2204 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:01:42.879064 kubelet[2204]: I0302 13:01:42.879011 2204 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 2 13:01:42.879279 kubelet[2204]: I0302 13:01:42.879220 2204 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 2 13:01:42.880010 kubelet[2204]: I0302 13:01:42.879907 2204 reconciler.go:29] "Reconciler: start to sync state" Mar 2 13:01:42.880331 
kubelet[2204]: E0302 13:01:42.880258 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="200ms" Mar 2 13:01:42.881769 kubelet[2204]: I0302 13:01:42.881675 2204 factory.go:223] Registration of the systemd container factory successfully Mar 2 13:01:42.882729 kubelet[2204]: I0302 13:01:42.881987 2204 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 13:01:42.882729 kubelet[2204]: E0302 13:01:42.882187 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 13:01:42.882729 kubelet[2204]: E0302 13:01:42.882492 2204 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 13:01:42.884897 kubelet[2204]: I0302 13:01:42.884848 2204 factory.go:223] Registration of the containerd container factory successfully Mar 2 13:01:42.888892 kubelet[2204]: I0302 13:01:42.888778 2204 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 2 13:01:42.906497 kubelet[2204]: I0302 13:01:42.905361 2204 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 2 13:01:42.906497 kubelet[2204]: I0302 13:01:42.905395 2204 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 2 13:01:42.906497 kubelet[2204]: I0302 13:01:42.905413 2204 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:01:42.907932 kubelet[2204]: I0302 13:01:42.907915 2204 policy_none.go:49] "None policy: Start" Mar 2 13:01:42.908089 kubelet[2204]: I0302 13:01:42.908040 2204 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 2 13:01:42.908212 kubelet[2204]: I0302 13:01:42.908197 2204 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 2 13:01:42.912395 kubelet[2204]: I0302 13:01:42.912372 2204 policy_none.go:47] "Start" Mar 2 13:01:42.918898 kubelet[2204]: I0302 13:01:42.918842 2204 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 2 13:01:42.919066 kubelet[2204]: I0302 13:01:42.919007 2204 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 2 13:01:42.919147 kubelet[2204]: I0302 13:01:42.919077 2204 kubelet.go:2428] "Starting kubelet main sync loop" Mar 2 13:01:42.919178 kubelet[2204]: E0302 13:01:42.919155 2204 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 13:01:42.919739 kubelet[2204]: E0302 13:01:42.919656 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 13:01:42.927632 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
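The NodeConfig dump from container_manager_linux.go above ("CgroupDriver":"systemd", "CgroupRoot":"/", "CgroupsPerQOS":true, and the five HardEvictionThresholds) corresponds to a KubeletConfiguration along these lines. This is a sketch reconstructed from the logged values, not the node's actual config file:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd          # "CgroupDriver":"systemd" in the dump
    cgroupsPerQOS: true            # pods go under kubepods.slice with per-QoS child slices
    staticPodPath: /etc/kubernetes/manifests   # "Adding static pod path" above
    evictionHard:
      memory.available: "100Mi"    # Quantity 100Mi, Percentage 0
      nodefs.available: "10%"      # Percentage 0.1
      nodefs.inodesFree: "5%"      # Percentage 0.05
      imagefs.available: "15%"     # Percentage 0.15
      imagefs.inodesFree: "5%"     # Percentage 0.05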
Mar 2 13:01:42.950184 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 2 13:01:42.954704 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 2 13:01:42.968956 kubelet[2204]: E0302 13:01:42.968911 2204 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 13:01:42.969279 kubelet[2204]: I0302 13:01:42.969220 2204 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 2 13:01:42.969279 kubelet[2204]: I0302 13:01:42.969245 2204 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 13:01:42.969767 kubelet[2204]: I0302 13:01:42.969744 2204 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 2 13:01:42.971166 kubelet[2204]: E0302 13:01:42.971027 2204 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 2 13:01:42.971166 kubelet[2204]: E0302 13:01:42.971146 2204 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 13:01:43.034330 systemd[1]: Created slice kubepods-burstable-poda1019f87bbbe55d28d84ffb21a6dac61.slice - libcontainer container kubepods-burstable-poda1019f87bbbe55d28d84ffb21a6dac61.slice. Mar 2 13:01:43.056751 kubelet[2204]: E0302 13:01:43.056670 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:01:43.060595 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. Mar 2 13:01:43.063496 kubelet[2204]: E0302 13:01:43.063433 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:01:43.066605 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. 
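The "Created slice kubepods-burstable-pod...slice" entries and the mirror-pod errors that follow are both about static pods: the kubelet reads manifests from /etc/kubernetes/manifests, runs them locally, and tries to publish a mirror pod to the API server, which at this point is still refusing connections on 10.0.0.51:6443. A minimal manifest of the kind that lives in that directory looks roughly like this (illustrative sketch; the real kubeadm manifests carry full command lines plus the ca-certs/k8s-certs/kubeconfig hostPath volumes listed in the reconciler entries below, and the image tag here is assumed from the kubelet version logged later):

    # /etc/kubernetes/manifests/kube-apiserver.yaml (sketch)
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical   # the PriorityClass whose absence later makes mirror-pod creation "forbidden"
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.34.4
        command: ["kube-apiserver"]              # real flags elided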
Mar 2 13:01:43.069009 kubelet[2204]: E0302 13:01:43.068969 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:01:43.071325 kubelet[2204]: I0302 13:01:43.071222 2204 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:01:43.071837 kubelet[2204]: E0302 13:01:43.071775 2204 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Mar 2 13:01:43.080770 kubelet[2204]: I0302 13:01:43.080699 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1019f87bbbe55d28d84ffb21a6dac61-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a1019f87bbbe55d28d84ffb21a6dac61\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:01:43.080770 kubelet[2204]: I0302 13:01:43.080753 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1019f87bbbe55d28d84ffb21a6dac61-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a1019f87bbbe55d28d84ffb21a6dac61\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:01:43.080770 kubelet[2204]: I0302 13:01:43.080774 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:01:43.080960 kubelet[2204]: I0302 13:01:43.080791 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:01:43.080960 kubelet[2204]: I0302 13:01:43.080807 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:01:43.080960 kubelet[2204]: I0302 13:01:43.080822 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:01:43.080960 kubelet[2204]: I0302 13:01:43.080879 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1019f87bbbe55d28d84ffb21a6dac61-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a1019f87bbbe55d28d84ffb21a6dac61\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:01:43.080960 kubelet[2204]: I0302 13:01:43.080899 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:01:43.081236 kubelet[2204]: I0302 13:01:43.080941 2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 2 13:01:43.081607 kubelet[2204]: E0302 13:01:43.081501 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="400ms" Mar 2 13:01:43.285853 kubelet[2204]: I0302 13:01:43.284864 2204 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:01:43.286128 kubelet[2204]: E0302 13:01:43.285976 2204 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Mar 2 13:01:43.362373 kubelet[2204]: E0302 13:01:43.362261 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:43.364274 containerd[1464]: time="2026-03-02T13:01:43.364155019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a1019f87bbbe55d28d84ffb21a6dac61,Namespace:kube-system,Attempt:0,}" Mar 2 13:01:43.365910 kubelet[2204]: E0302 13:01:43.365862 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:43.367868 containerd[1464]: time="2026-03-02T13:01:43.367801328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 2 13:01:43.372810 kubelet[2204]: E0302 13:01:43.372745 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:43.373487 containerd[1464]: time="2026-03-02T13:01:43.373420384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 2 13:01:43.484477 kubelet[2204]: E0302 13:01:43.484247 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="800ms" Mar 2 13:01:43.690204 kubelet[2204]: I0302 13:01:43.690127 2204 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:01:43.691075 kubelet[2204]: E0302 13:01:43.691006 2204 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Mar 2 13:01:43.773937 kubelet[2204]: E0302 13:01:43.773848 2204 
reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 13:01:43.875505 kubelet[2204]: E0302 13:01:43.875423 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 13:01:43.916480 kubelet[2204]: E0302 13:01:43.916430 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 13:01:43.932465 kubelet[2204]: E0302 13:01:43.932395 2204 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 13:01:43.957481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4074592180.mount: Deactivated successfully. Mar 2 13:01:43.965587 containerd[1464]: time="2026-03-02T13:01:43.965466263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:01:43.966441 containerd[1464]: time="2026-03-02T13:01:43.966334637Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 2 13:01:43.972151 containerd[1464]: time="2026-03-02T13:01:43.969880269Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:01:43.974945 containerd[1464]: time="2026-03-02T13:01:43.974635699Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 13:01:43.974945 containerd[1464]: time="2026-03-02T13:01:43.974876053Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:01:43.980049 containerd[1464]: time="2026-03-02T13:01:43.979485845Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 13:01:43.980049 containerd[1464]: time="2026-03-02T13:01:43.979678440Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:01:44.005441 containerd[1464]: time="2026-03-02T13:01:44.005185351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:01:44.017365 containerd[1464]: time="2026-03-02T13:01:44.017255403Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 652.849338ms" Mar 2 13:01:44.019081 containerd[1464]: time="2026-03-02T13:01:44.019034007Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 651.094442ms" Mar 2 13:01:44.031172 containerd[1464]: time="2026-03-02T13:01:44.030995150Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 657.473437ms" Mar 2 13:01:44.289486 kubelet[2204]: E0302 13:01:44.288906 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="1.6s" Mar 2 13:01:44.303738 containerd[1464]: time="2026-03-02T13:01:44.303499970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:01:44.303952 containerd[1464]: time="2026-03-02T13:01:44.303686396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:01:44.303952 containerd[1464]: time="2026-03-02T13:01:44.303711062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:01:44.304986 containerd[1464]: time="2026-03-02T13:01:44.304890962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:01:44.308606 containerd[1464]: time="2026-03-02T13:01:44.308124159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:01:44.308606 containerd[1464]: time="2026-03-02T13:01:44.308183028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:01:44.308606 containerd[1464]: time="2026-03-02T13:01:44.308198938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:01:44.308606 containerd[1464]: time="2026-03-02T13:01:44.308330923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:01:44.342876 containerd[1464]: time="2026-03-02T13:01:44.342647948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:01:44.342876 containerd[1464]: time="2026-03-02T13:01:44.342747123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:01:44.342876 containerd[1464]: time="2026-03-02T13:01:44.342771248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:01:44.343188 containerd[1464]: time="2026-03-02T13:01:44.343045245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:01:44.395037 systemd[1]: Started cri-containerd-f3c70ddb7cb55e100cd121e67730838ba534e57555e4bb5b3d940e876288fb9d.scope - libcontainer container f3c70ddb7cb55e100cd121e67730838ba534e57555e4bb5b3d940e876288fb9d. Mar 2 13:01:44.407433 systemd[1]: Started cri-containerd-dd9ed0d0ac82335db00d74067c25b00c7575adc7564b96567ede658632cea86c.scope - libcontainer container dd9ed0d0ac82335db00d74067c25b00c7575adc7564b96567ede658632cea86c. Mar 2 13:01:44.461904 systemd[1]: Started cri-containerd-574a840a03c82b62bbb1c1b03aa9695e8b26ca28e60bbf2927412d01f1ba93f2.scope - libcontainer container 574a840a03c82b62bbb1c1b03aa9695e8b26ca28e60bbf2927412d01f1ba93f2. Mar 2 13:01:44.536239 kubelet[2204]: I0302 13:01:44.533971 2204 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:01:44.554545 kubelet[2204]: E0302 13:01:44.551499 2204 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Mar 2 13:01:44.563946 containerd[1464]: time="2026-03-02T13:01:44.552452305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3c70ddb7cb55e100cd121e67730838ba534e57555e4bb5b3d940e876288fb9d\"" Mar 2 13:01:44.576682 kubelet[2204]: E0302 13:01:44.576527 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:44.645668 containerd[1464]: time="2026-03-02T13:01:44.643807004Z" level=info msg="CreateContainer within sandbox \"f3c70ddb7cb55e100cd121e67730838ba534e57555e4bb5b3d940e876288fb9d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 2 13:01:44.657367 containerd[1464]: time="2026-03-02T13:01:44.657260085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd9ed0d0ac82335db00d74067c25b00c7575adc7564b96567ede658632cea86c\"" Mar 2 13:01:44.660486 kubelet[2204]: E0302 13:01:44.660295 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:44.665787 containerd[1464]: time="2026-03-02T13:01:44.665729751Z" level=info msg="CreateContainer within sandbox \"dd9ed0d0ac82335db00d74067c25b00c7575adc7564b96567ede658632cea86c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 2 13:01:44.667817 containerd[1464]: time="2026-03-02T13:01:44.667753556Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a1019f87bbbe55d28d84ffb21a6dac61,Namespace:kube-system,Attempt:0,} returns sandbox id \"574a840a03c82b62bbb1c1b03aa9695e8b26ca28e60bbf2927412d01f1ba93f2\"" Mar 2 13:01:44.670360 kubelet[2204]: E0302 13:01:44.670308 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:44.675675 containerd[1464]: time="2026-03-02T13:01:44.675638628Z" level=info msg="CreateContainer within sandbox \"574a840a03c82b62bbb1c1b03aa9695e8b26ca28e60bbf2927412d01f1ba93f2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 2 13:01:44.684037 containerd[1464]: time="2026-03-02T13:01:44.683952834Z" level=info msg="CreateContainer within sandbox \"f3c70ddb7cb55e100cd121e67730838ba534e57555e4bb5b3d940e876288fb9d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"545af570cd0a8eddf5787f7f8747ccad8d4746f740e848d367945117c0ef9d43\"" Mar 2 13:01:44.685618 containerd[1464]: time="2026-03-02T13:01:44.685482977Z" level=info msg="StartContainer for \"545af570cd0a8eddf5787f7f8747ccad8d4746f740e848d367945117c0ef9d43\"" Mar 2 13:01:44.702423 containerd[1464]: time="2026-03-02T13:01:44.702331255Z" level=info msg="CreateContainer within sandbox \"dd9ed0d0ac82335db00d74067c25b00c7575adc7564b96567ede658632cea86c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"95549b5148f3db17bf26679e16d32dd9a5cee5dd8df7d66e852f175777b55421\"" Mar 2 13:01:44.704345 containerd[1464]: time="2026-03-02T13:01:44.703439842Z" level=info msg="StartContainer for \"95549b5148f3db17bf26679e16d32dd9a5cee5dd8df7d66e852f175777b55421\"" Mar 2 13:01:44.705409 containerd[1464]: time="2026-03-02T13:01:44.705383280Z" level=info msg="CreateContainer within sandbox \"574a840a03c82b62bbb1c1b03aa9695e8b26ca28e60bbf2927412d01f1ba93f2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7f6ef7045d35090f48cb298a33ee357c8becf85e41b47604918e26926e81f08a\"" Mar 2 13:01:44.707345 containerd[1464]: time="2026-03-02T13:01:44.707275728Z" level=info msg="StartContainer for \"7f6ef7045d35090f48cb298a33ee357c8becf85e41b47604918e26926e81f08a\"" Mar 2 13:01:44.732784 systemd[1]: Started cri-containerd-545af570cd0a8eddf5787f7f8747ccad8d4746f740e848d367945117c0ef9d43.scope - libcontainer container 545af570cd0a8eddf5787f7f8747ccad8d4746f740e848d367945117c0ef9d43. Mar 2 13:01:44.744743 systemd[1]: Started cri-containerd-7f6ef7045d35090f48cb298a33ee357c8becf85e41b47604918e26926e81f08a.scope - libcontainer container 7f6ef7045d35090f48cb298a33ee357c8becf85e41b47604918e26926e81f08a. Mar 2 13:01:44.749443 systemd[1]: Started cri-containerd-95549b5148f3db17bf26679e16d32dd9a5cee5dd8df7d66e852f175777b55421.scope - libcontainer container 95549b5148f3db17bf26679e16d32dd9a5cee5dd8df7d66e852f175777b55421. 
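Each cri-containerd-<id>.scope started above is the systemd cgroup for one containerd-shim-managed sandbox or container, and the 64-hex-digit id is the CRI id from the surrounding containerd messages. If one were debugging this boot, the same objects could be listed through the CRI socket with crictl, for example (assuming the default containerd endpoint; the inspect id is abbreviated from the kube-scheduler StartContainer line above):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock inspect 545af570cd0a8e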
Mar 2 13:01:44.935831 kubelet[2204]: E0302 13:01:44.934629 2204 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:01:44.974161 containerd[1464]: time="2026-03-02T13:01:44.973730737Z" level=info msg="StartContainer for \"95549b5148f3db17bf26679e16d32dd9a5cee5dd8df7d66e852f175777b55421\" returns successfully" Mar 2 13:01:44.983997 containerd[1464]: time="2026-03-02T13:01:44.983921868Z" level=info msg="StartContainer for \"545af570cd0a8eddf5787f7f8747ccad8d4746f740e848d367945117c0ef9d43\" returns successfully" Mar 2 13:01:44.996283 containerd[1464]: time="2026-03-02T13:01:44.996184819Z" level=info msg="StartContainer for \"7f6ef7045d35090f48cb298a33ee357c8becf85e41b47604918e26926e81f08a\" returns successfully" Mar 2 13:01:45.962703 kubelet[2204]: E0302 13:01:45.962605 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:01:45.963431 kubelet[2204]: E0302 13:01:45.962798 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:45.978518 kubelet[2204]: E0302 13:01:45.978405 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:01:45.978985 kubelet[2204]: E0302 13:01:45.978780 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:45.982385 kubelet[2204]: E0302 13:01:45.982259 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:01:45.982517 kubelet[2204]: E0302 13:01:45.982397 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:46.189134 kubelet[2204]: I0302 13:01:46.188405 2204 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:01:46.988744 kubelet[2204]: E0302 13:01:46.988627 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:01:46.988744 kubelet[2204]: E0302 13:01:46.988815 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:46.990325 kubelet[2204]: E0302 13:01:46.989422 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:01:46.990325 kubelet[2204]: E0302 13:01:46.989555 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:46.990325 kubelet[2204]: E0302 13:01:46.990237 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:01:46.990396 kubelet[2204]: E0302 13:01:46.990361 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:48.268958 kubelet[2204]: E0302 13:01:48.268831 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:01:48.270811 kubelet[2204]: E0302 13:01:48.269874 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:01:48.270811 kubelet[2204]: E0302 13:01:48.270665 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:48.271354 kubelet[2204]: E0302 13:01:48.271286 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:49.274632 kubelet[2204]: E0302 13:01:49.273913 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:01:49.274632 kubelet[2204]: E0302 13:01:49.274169 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:50.054834 kubelet[2204]: E0302 13:01:50.050224 2204 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:01:50.054834 kubelet[2204]: E0302 13:01:50.050677 2204 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:50.773902 kubelet[2204]: E0302 13:01:50.773802 2204 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 2 13:01:50.968109 kubelet[2204]: I0302 13:01:50.967906 2204 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 2 13:01:50.968109 kubelet[2204]: E0302 13:01:50.967961 2204 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 2 13:01:50.979661 kubelet[2204]: I0302 13:01:50.978721 2204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 13:01:51.245624 kubelet[2204]: E0302 13:01:51.244475 2204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 2 13:01:51.245624 kubelet[2204]: I0302 13:01:51.244532 2204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 13:01:51.262790 kubelet[2204]: E0302 13:01:51.260806 2204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 2 
13:01:51.262790 kubelet[2204]: I0302 13:01:51.260860 2204 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 13:01:51.264947 kubelet[2204]: I0302 13:01:51.263318 2204 apiserver.go:52] "Watching apiserver" Mar 2 13:01:51.273619 kubelet[2204]: E0302 13:01:51.273525 2204 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 2 13:01:51.380087 kubelet[2204]: I0302 13:01:51.379976 2204 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 13:01:53.888429 systemd[1]: Reloading requested from client PID 2499 ('systemctl') (unit session-7.scope)... Mar 2 13:01:53.888495 systemd[1]: Reloading... Mar 2 13:01:54.058694 zram_generator::config[2541]: No configuration found. Mar 2 13:01:54.332264 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:01:54.523089 systemd[1]: Reloading finished in 633 ms. Mar 2 13:01:54.589527 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:01:54.607495 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 13:01:54.607941 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:01:54.608026 systemd[1]: kubelet.service: Consumed 9.886s CPU time, 129.0M memory peak, 0B memory swap peak. Mar 2 13:01:54.620286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:01:55.102017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:01:55.103671 (kubelet)[2583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 13:01:55.179750 kubelet[2583]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 2 13:01:55.179750 kubelet[2583]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:01:55.180266 kubelet[2583]: I0302 13:01:55.179784 2583 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 13:01:55.202824 kubelet[2583]: I0302 13:01:55.202683 2583 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 2 13:01:55.202974 kubelet[2583]: I0302 13:01:55.202887 2583 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 13:01:55.203022 kubelet[2583]: I0302 13:01:55.203009 2583 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 2 13:01:55.203181 kubelet[2583]: I0302 13:01:55.203127 2583 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
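The docker.socket warning during the systemd reload above is self-describing: line 6 of the unit still uses the legacy /var/run prefix, and systemd rewrites it on the fly while asking for the unit file to be updated. The requested fix is a one-line change in the [Socket] stanza:

    # /usr/lib/systemd/system/docker.socket
    [Socket]
    # before (what the log complains about):
    ListenStream=/var/run/docker.sock
    # after (what systemd rewrites it to):
    ListenStream=/run/docker.sock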
Mar 2 13:01:55.206354 kubelet[2583]: I0302 13:01:55.206279 2583 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 13:01:55.219686 kubelet[2583]: I0302 13:01:55.218974 2583 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 2 13:01:55.226363 kubelet[2583]: I0302 13:01:55.225974 2583 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:01:55.238025 kubelet[2583]: E0302 13:01:55.237951 2583 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 13:01:55.238025 kubelet[2583]: I0302 13:01:55.238056 2583 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 2 13:01:55.257155 kubelet[2583]: I0302 13:01:55.256977 2583 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 2 13:01:55.258874 kubelet[2583]: I0302 13:01:55.257550 2583 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 13:01:55.258874 kubelet[2583]: I0302 13:01:55.257645 2583 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 13:01:55.258874 kubelet[2583]: I0302 13:01:55.257853 2583 topology_manager.go:138] "Creating topology manager with none policy" Mar 2 13:01:55.258874 kubelet[2583]: I0302 13:01:55.257862 2583 container_manager_linux.go:306] "Creating device plugin manager" Mar 2 13:01:55.259163 kubelet[2583]: I0302 13:01:55.257885 2583 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 2 13:01:55.259163 kubelet[2583]: I0302 13:01:55.258227 2583 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:01:55.259163 kubelet[2583]: I0302 13:01:55.258443 2583 kubelet.go:475] "Attempting to sync node with API 
server" Mar 2 13:01:55.259163 kubelet[2583]: I0302 13:01:55.258461 2583 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 13:01:55.259163 kubelet[2583]: I0302 13:01:55.258528 2583 kubelet.go:387] "Adding apiserver pod source" Mar 2 13:01:55.259163 kubelet[2583]: I0302 13:01:55.258650 2583 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 13:01:55.268622 kubelet[2583]: I0302 13:01:55.266198 2583 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 13:01:55.268622 kubelet[2583]: I0302 13:01:55.266836 2583 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 13:01:55.268622 kubelet[2583]: I0302 13:01:55.266861 2583 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 2 13:01:55.276008 kubelet[2583]: I0302 13:01:55.273785 2583 server.go:1262] "Started kubelet" Mar 2 13:01:55.280064 kubelet[2583]: I0302 13:01:55.279917 2583 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 13:01:55.281327 kubelet[2583]: I0302 13:01:55.281126 2583 server.go:310] "Adding debug handlers to kubelet server" Mar 2 13:01:55.282210 kubelet[2583]: I0302 13:01:55.281861 2583 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 13:01:55.282210 kubelet[2583]: I0302 13:01:55.282002 2583 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 2 13:01:55.282802 kubelet[2583]: I0302 13:01:55.282546 2583 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 13:01:55.289661 kubelet[2583]: I0302 13:01:55.286956 2583 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 2 13:01:55.292944 kubelet[2583]: I0302 13:01:55.292876 2583 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 13:01:55.295818 kubelet[2583]: E0302 13:01:55.295757 2583 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 13:01:55.297056 kubelet[2583]: I0302 13:01:55.296974 2583 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 2 13:01:55.297225 kubelet[2583]: I0302 13:01:55.297131 2583 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 2 13:01:55.298957 kubelet[2583]: I0302 13:01:55.298923 2583 factory.go:223] Registration of the systemd container factory successfully Mar 2 13:01:55.299214 kubelet[2583]: I0302 13:01:55.299094 2583 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 13:01:55.300808 kubelet[2583]: I0302 13:01:55.300766 2583 reconciler.go:29] "Reconciler: start to sync state" Mar 2 13:01:55.305005 kubelet[2583]: I0302 13:01:55.304892 2583 factory.go:223] Registration of the containerd container factory successfully Mar 2 13:01:55.333631 kubelet[2583]: I0302 13:01:55.333534 2583 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 2 13:01:55.338844 kubelet[2583]: I0302 13:01:55.338775 2583 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 2 13:01:55.338844 kubelet[2583]: I0302 13:01:55.338828 2583 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 2 13:01:55.338844 kubelet[2583]: I0302 13:01:55.338857 2583 kubelet.go:2428] "Starting kubelet main sync loop" Mar 2 13:01:55.339009 kubelet[2583]: E0302 13:01:55.338914 2583 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 13:01:55.368753 kubelet[2583]: I0302 13:01:55.368352 2583 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 2 13:01:55.368753 kubelet[2583]: I0302 13:01:55.368374 2583 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 2 13:01:55.368753 kubelet[2583]: I0302 13:01:55.368400 2583 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:01:55.368753 kubelet[2583]: I0302 13:01:55.368659 2583 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 2 13:01:55.368753 kubelet[2583]: I0302 13:01:55.368672 2583 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 2 13:01:55.368753 kubelet[2583]: I0302 13:01:55.368690 2583 policy_none.go:49] "None policy: Start" Mar 2 13:01:55.368753 kubelet[2583]: I0302 13:01:55.368699 2583 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 2 13:01:55.368753 kubelet[2583]: I0302 13:01:55.368711 2583 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 2 13:01:55.368995 kubelet[2583]: I0302 13:01:55.368800 2583 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 2 13:01:55.368995 kubelet[2583]: I0302 13:01:55.368812 2583 policy_none.go:47] "Start" Mar 2 13:01:55.400725 kubelet[2583]: E0302 13:01:55.400390 2583 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 13:01:55.425364 kubelet[2583]: I0302 13:01:55.422631 2583 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 2 13:01:55.425364 kubelet[2583]: I0302 13:01:55.422739 2583 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 13:01:55.425364 kubelet[2583]: I0302 13:01:55.425202 2583 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 2 13:01:55.429632 kubelet[2583]: E0302 13:01:55.429468 2583 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 2 13:01:55.502804 kubelet[2583]: I0302 13:01:55.501635 2583 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 13:01:55.508213 kubelet[2583]: I0302 13:01:55.504688 2583 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 13:01:55.508213 kubelet[2583]: I0302 13:01:55.505174 2583 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 13:01:55.535168 kubelet[2583]: I0302 13:01:55.534844 2583 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:01:55.545632 kubelet[2583]: I0302 13:01:55.545452 2583 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 2 13:01:55.545632 kubelet[2583]: I0302 13:01:55.545517 2583 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 2 13:01:55.602139 kubelet[2583]: I0302 13:01:55.602094 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1019f87bbbe55d28d84ffb21a6dac61-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a1019f87bbbe55d28d84ffb21a6dac61\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:01:55.602463 kubelet[2583]: I0302 13:01:55.602331 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1019f87bbbe55d28d84ffb21a6dac61-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a1019f87bbbe55d28d84ffb21a6dac61\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:01:55.602463 kubelet[2583]: I0302 13:01:55.602361 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:01:55.602463 kubelet[2583]: I0302 13:01:55.602378 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:01:55.602463 kubelet[2583]: I0302 13:01:55.602391 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:01:55.602463 kubelet[2583]: I0302 13:01:55.602405 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:01:55.602764 kubelet[2583]: I0302 13:01:55.602420 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/a1019f87bbbe55d28d84ffb21a6dac61-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a1019f87bbbe55d28d84ffb21a6dac61\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:01:55.602764 kubelet[2583]: I0302 13:01:55.602432 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:01:55.603660 kubelet[2583]: I0302 13:01:55.603596 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 2 13:01:55.841189 kubelet[2583]: E0302 13:01:55.840628 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:55.841189 kubelet[2583]: E0302 13:01:55.840621 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:55.900930 kubelet[2583]: E0302 13:01:55.900505 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:56.262550 kubelet[2583]: I0302 13:01:56.262323 2583 apiserver.go:52] "Watching apiserver" Mar 2 13:01:56.297966 kubelet[2583]: I0302 13:01:56.297849 2583 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 13:01:56.353813 kubelet[2583]: E0302 13:01:56.353666 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:56.353813 kubelet[2583]: I0302 13:01:56.353769 2583 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 13:01:56.354020 kubelet[2583]: I0302 13:01:56.353914 2583 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 13:01:56.362409 kubelet[2583]: E0302 13:01:56.362340 2583 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 2 13:01:56.362545 kubelet[2583]: E0302 13:01:56.362499 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:56.365914 kubelet[2583]: E0302 13:01:56.365879 2583 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 2 13:01:56.366715 kubelet[2583]: E0302 13:01:56.366596 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:56.395812 kubelet[2583]: I0302 13:01:56.395711 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.395666371 podStartE2EDuration="1.395666371s" podCreationTimestamp="2026-03-02 13:01:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:01:56.395286414 +0000 UTC m=+1.275467971" watchObservedRunningTime="2026-03-02 13:01:56.395666371 +0000 UTC m=+1.275847898" Mar 2 13:01:56.405466 kubelet[2583]: I0302 13:01:56.405389 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.405368954 podStartE2EDuration="1.405368954s" podCreationTimestamp="2026-03-02 13:01:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:01:56.404363631 +0000 UTC m=+1.284545158" watchObservedRunningTime="2026-03-02 13:01:56.405368954 +0000 UTC m=+1.285550480" Mar 2 13:01:56.418524 kubelet[2583]: I0302 13:01:56.418446 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.418425252 podStartE2EDuration="1.418425252s" podCreationTimestamp="2026-03-02 13:01:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:01:56.418248032 +0000 UTC m=+1.298429559" watchObservedRunningTime="2026-03-02 13:01:56.418425252 +0000 UTC m=+1.298606779" Mar 2 13:01:57.355740 kubelet[2583]: E0302 13:01:57.355658 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:57.355740 kubelet[2583]: E0302 13:01:57.355733 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:01:59.774307 kubelet[2583]: E0302 13:01:59.774258 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:02:00.257839 kubelet[2583]: I0302 13:02:00.257737 2583 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 2 13:02:00.258280 containerd[1464]: time="2026-03-02T13:02:00.258224648Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 2 13:02:00.259073 kubelet[2583]: I0302 13:02:00.258559 2583 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 2 13:02:00.363004 kubelet[2583]: E0302 13:02:00.362909 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:02:01.082366 systemd[1]: Created slice kubepods-besteffort-pod23941354_4671_437d_b1c0_722f4e1f0286.slice - libcontainer container kubepods-besteffort-pod23941354_4671_437d_b1c0_722f4e1f0286.slice. 
Mar 2 13:02:01.150690 kubelet[2583]: I0302 13:02:01.150550 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23941354-4671-437d-b1c0-722f4e1f0286-lib-modules\") pod \"kube-proxy-z59j5\" (UID: \"23941354-4671-437d-b1c0-722f4e1f0286\") " pod="kube-system/kube-proxy-z59j5"
Mar 2 13:02:01.150690 kubelet[2583]: I0302 13:02:01.150677 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/23941354-4671-437d-b1c0-722f4e1f0286-kube-proxy\") pod \"kube-proxy-z59j5\" (UID: \"23941354-4671-437d-b1c0-722f4e1f0286\") " pod="kube-system/kube-proxy-z59j5"
Mar 2 13:02:01.151364 kubelet[2583]: I0302 13:02:01.150711 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23941354-4671-437d-b1c0-722f4e1f0286-xtables-lock\") pod \"kube-proxy-z59j5\" (UID: \"23941354-4671-437d-b1c0-722f4e1f0286\") " pod="kube-system/kube-proxy-z59j5"
Mar 2 13:02:01.151364 kubelet[2583]: I0302 13:02:01.150735 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24tk6\" (UniqueName: \"kubernetes.io/projected/23941354-4671-437d-b1c0-722f4e1f0286-kube-api-access-24tk6\") pod \"kube-proxy-z59j5\" (UID: \"23941354-4671-437d-b1c0-722f4e1f0286\") " pod="kube-system/kube-proxy-z59j5"
Mar 2 13:02:01.403238 kubelet[2583]: E0302 13:02:01.402225 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:01.403334 containerd[1464]: time="2026-03-02T13:02:01.403101382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z59j5,Uid:23941354-4671-437d-b1c0-722f4e1f0286,Namespace:kube-system,Attempt:0,}"
Mar 2 13:02:01.407501 systemd[1]: Created slice kubepods-besteffort-pod9934496d_15b9_46c2_9dd5_d0cc16bfdc75.slice - libcontainer container kubepods-besteffort-pod9934496d_15b9_46c2_9dd5_d0cc16bfdc75.slice.
Mar 2 13:02:01.439522 containerd[1464]: time="2026-03-02T13:02:01.439200498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:02:01.439522 containerd[1464]: time="2026-03-02T13:02:01.439256342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:02:01.439522 containerd[1464]: time="2026-03-02T13:02:01.439355707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:02:01.439743 containerd[1464]: time="2026-03-02T13:02:01.439674271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:02:01.453229 kubelet[2583]: I0302 13:02:01.453106 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9934496d-15b9-46c2-9dd5-d0cc16bfdc75-var-lib-calico\") pod \"tigera-operator-85979684d8-rgjrp\" (UID: \"9934496d-15b9-46c2-9dd5-d0cc16bfdc75\") " pod="tigera-operator/tigera-operator-85979684d8-rgjrp"
Mar 2 13:02:01.453229 kubelet[2583]: I0302 13:02:01.453149 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7rbr\" (UniqueName: \"kubernetes.io/projected/9934496d-15b9-46c2-9dd5-d0cc16bfdc75-kube-api-access-w7rbr\") pod \"tigera-operator-85979684d8-rgjrp\" (UID: \"9934496d-15b9-46c2-9dd5-d0cc16bfdc75\") " pod="tigera-operator/tigera-operator-85979684d8-rgjrp"
Mar 2 13:02:01.467850 systemd[1]: Started cri-containerd-c8555579f9654234cfd18fba3fec5b62c6693f2a0e5297b6c5866d68ad9d72da.scope - libcontainer container c8555579f9654234cfd18fba3fec5b62c6693f2a0e5297b6c5866d68ad9d72da.
Mar 2 13:02:01.496086 containerd[1464]: time="2026-03-02T13:02:01.495971614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z59j5,Uid:23941354-4671-437d-b1c0-722f4e1f0286,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8555579f9654234cfd18fba3fec5b62c6693f2a0e5297b6c5866d68ad9d72da\""
Mar 2 13:02:01.497499 kubelet[2583]: E0302 13:02:01.497431 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:01.502726 containerd[1464]: time="2026-03-02T13:02:01.502674082Z" level=info msg="CreateContainer within sandbox \"c8555579f9654234cfd18fba3fec5b62c6693f2a0e5297b6c5866d68ad9d72da\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 2 13:02:01.524964 containerd[1464]: time="2026-03-02T13:02:01.524870499Z" level=info msg="CreateContainer within sandbox \"c8555579f9654234cfd18fba3fec5b62c6693f2a0e5297b6c5866d68ad9d72da\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5ce8e7c0274bab036b76c428827fc8c7a129ed4f4cbcc14fab1e98d29f29d5d3\""
Mar 2 13:02:01.526869 containerd[1464]: time="2026-03-02T13:02:01.525825168Z" level=info msg="StartContainer for \"5ce8e7c0274bab036b76c428827fc8c7a129ed4f4cbcc14fab1e98d29f29d5d3\""
Mar 2 13:02:01.563939 systemd[1]: Started cri-containerd-5ce8e7c0274bab036b76c428827fc8c7a129ed4f4cbcc14fab1e98d29f29d5d3.scope - libcontainer container 5ce8e7c0274bab036b76c428827fc8c7a129ed4f4cbcc14fab1e98d29f29d5d3.
Mar 2 13:02:01.601498 containerd[1464]: time="2026-03-02T13:02:01.601461678Z" level=info msg="StartContainer for \"5ce8e7c0274bab036b76c428827fc8c7a129ed4f4cbcc14fab1e98d29f29d5d3\" returns successfully"
Mar 2 13:02:01.715866 containerd[1464]: time="2026-03-02T13:02:01.715667048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-85979684d8-rgjrp,Uid:9934496d-15b9-46c2-9dd5-d0cc16bfdc75,Namespace:tigera-operator,Attempt:0,}"
Mar 2 13:02:01.743852 containerd[1464]: time="2026-03-02T13:02:01.743680507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:02:01.744555 containerd[1464]: time="2026-03-02T13:02:01.744490457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:02:01.744555 containerd[1464]: time="2026-03-02T13:02:01.744515653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:02:01.744813 containerd[1464]: time="2026-03-02T13:02:01.744656536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:02:01.786857 systemd[1]: Started cri-containerd-490aec0f5b425118500e0d7aaf040cb5d4048d44e0632074bb598c32705b63df.scope - libcontainer container 490aec0f5b425118500e0d7aaf040cb5d4048d44e0632074bb598c32705b63df.
Mar 2 13:02:01.834538 containerd[1464]: time="2026-03-02T13:02:01.834416432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-85979684d8-rgjrp,Uid:9934496d-15b9-46c2-9dd5-d0cc16bfdc75,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"490aec0f5b425118500e0d7aaf040cb5d4048d44e0632074bb598c32705b63df\""
Mar 2 13:02:01.837236 containerd[1464]: time="2026-03-02T13:02:01.837190871Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\""
Mar 2 13:02:02.372919 kubelet[2583]: E0302 13:02:02.372863 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:02.388343 kubelet[2583]: I0302 13:02:02.388213 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z59j5" podStartSLOduration=1.388196308 podStartE2EDuration="1.388196308s" podCreationTimestamp="2026-03-02 13:02:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:02:02.388105518 +0000 UTC m=+7.268287064" watchObservedRunningTime="2026-03-02 13:02:02.388196308 +0000 UTC m=+7.268377845"
Mar 2 13:02:02.644468 kubelet[2583]: E0302 13:02:02.644277 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:03.036377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1988044003.mount: Deactivated successfully.
Mar 2 13:02:03.375957 kubelet[2583]: E0302 13:02:03.375873 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:05.842008 kubelet[2583]: E0302 13:02:05.841886 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:06.834909 kubelet[2583]: E0302 13:02:06.834726 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:07.711524 containerd[1464]: time="2026-03-02T13:02:07.711298265Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:02:07.713253 containerd[1464]: time="2026-03-02T13:02:07.712213337Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.3: active requests=0, bytes read=40822719"
Mar 2 13:02:07.714202 containerd[1464]: time="2026-03-02T13:02:07.714113931Z" level=info msg="ImageCreate event name:\"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:02:07.717860 containerd[1464]: time="2026-03-02T13:02:07.717734445Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:02:07.719324 containerd[1464]: time="2026-03-02T13:02:07.719189964Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.3\" with image id \"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\", repo tag \"quay.io/tigera/operator:v1.40.3\", repo digest \"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\", size \"40818714\" in 5.881947006s"
Mar 2 13:02:07.719324 containerd[1464]: time="2026-03-02T13:02:07.719269253Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\" returns image reference \"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\""
Mar 2 13:02:07.741887 containerd[1464]: time="2026-03-02T13:02:07.741800726Z" level=info msg="CreateContainer within sandbox \"490aec0f5b425118500e0d7aaf040cb5d4048d44e0632074bb598c32705b63df\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 2 13:02:07.757650 containerd[1464]: time="2026-03-02T13:02:07.757558148Z" level=info msg="CreateContainer within sandbox \"490aec0f5b425118500e0d7aaf040cb5d4048d44e0632074bb598c32705b63df\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bb647ec0bc299f0ecdf99a3cd4dcee5b3a927212d52edc0da902a8361f0a7ebb\""
Mar 2 13:02:07.758304 containerd[1464]: time="2026-03-02T13:02:07.758232626Z" level=info msg="StartContainer for \"bb647ec0bc299f0ecdf99a3cd4dcee5b3a927212d52edc0da902a8361f0a7ebb\""
Mar 2 13:02:07.884650 systemd[1]: Started cri-containerd-bb647ec0bc299f0ecdf99a3cd4dcee5b3a927212d52edc0da902a8361f0a7ebb.scope - libcontainer container bb647ec0bc299f0ecdf99a3cd4dcee5b3a927212d52edc0da902a8361f0a7ebb.
Mar 2 13:02:07.936850 containerd[1464]: time="2026-03-02T13:02:07.936685698Z" level=info msg="StartContainer for \"bb647ec0bc299f0ecdf99a3cd4dcee5b3a927212d52edc0da902a8361f0a7ebb\" returns successfully"
Mar 2 13:02:08.902163 kubelet[2583]: I0302 13:02:08.899906 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-85979684d8-rgjrp" podStartSLOduration=2.015329905 podStartE2EDuration="7.899848116s" podCreationTimestamp="2026-03-02 13:02:01 +0000 UTC" firstStartedPulling="2026-03-02 13:02:01.836633812 +0000 UTC m=+6.716815349" lastFinishedPulling="2026-03-02 13:02:07.721152033 +0000 UTC m=+12.601333560" observedRunningTime="2026-03-02 13:02:08.899629256 +0000 UTC m=+13.779810782" watchObservedRunningTime="2026-03-02 13:02:08.899848116 +0000 UTC m=+13.780029643"
Mar 2 13:02:18.111256 sudo[1641]: pam_unix(sudo:session): session closed for user root
Mar 2 13:02:18.231199 sshd[1638]: pam_unix(sshd:session): session closed for user core
Mar 2 13:02:18.507018 systemd[1]: sshd@6-10.0.0.51:22-10.0.0.1:58148.service: Deactivated successfully.
Mar 2 13:02:18.661457 systemd[1]: session-7.scope: Deactivated successfully.
Mar 2 13:02:18.671662 systemd[1]: session-7.scope: Consumed 26.260s CPU time, 159.8M memory peak, 0B memory swap peak.
Mar 2 13:02:18.881483 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit.
Mar 2 13:02:19.358369 systemd-logind[1447]: Removed session 7.
Mar 2 13:02:22.066034 systemd[1]: Created slice kubepods-besteffort-pod0f7362a5_75d3_44cb_ba4f_557e7fb4b65a.slice - libcontainer container kubepods-besteffort-pod0f7362a5_75d3_44cb_ba4f_557e7fb4b65a.slice.
Mar 2 13:02:22.119908 kubelet[2583]: I0302 13:02:22.119445 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0f7362a5-75d3-44cb-ba4f-557e7fb4b65a-typha-certs\") pod \"calico-typha-6fd8bdddb9-5rmc4\" (UID: \"0f7362a5-75d3-44cb-ba4f-557e7fb4b65a\") " pod="calico-system/calico-typha-6fd8bdddb9-5rmc4"
Mar 2 13:02:22.119908 kubelet[2583]: I0302 13:02:22.119508 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2j7f\" (UniqueName: \"kubernetes.io/projected/0f7362a5-75d3-44cb-ba4f-557e7fb4b65a-kube-api-access-n2j7f\") pod \"calico-typha-6fd8bdddb9-5rmc4\" (UID: \"0f7362a5-75d3-44cb-ba4f-557e7fb4b65a\") " pod="calico-system/calico-typha-6fd8bdddb9-5rmc4"
Mar 2 13:02:22.119908 kubelet[2583]: I0302 13:02:22.119789 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f7362a5-75d3-44cb-ba4f-557e7fb4b65a-tigera-ca-bundle\") pod \"calico-typha-6fd8bdddb9-5rmc4\" (UID: \"0f7362a5-75d3-44cb-ba4f-557e7fb4b65a\") " pod="calico-system/calico-typha-6fd8bdddb9-5rmc4"
Mar 2 13:02:22.142527 systemd[1]: Created slice kubepods-besteffort-pod2df5c8c3_6cbc_4f87_a63e_40b74974038e.slice - libcontainer container kubepods-besteffort-pod2df5c8c3_6cbc_4f87_a63e_40b74974038e.slice.
Mar 2 13:02:22.220598 kubelet[2583]: I0302 13:02:22.220477 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/2df5c8c3-6cbc-4f87-a63e-40b74974038e-bpffs\") pod \"calico-node-2zj94\" (UID: \"2df5c8c3-6cbc-4f87-a63e-40b74974038e\") " pod="calico-system/calico-node-2zj94"
Mar 2 13:02:22.220598 kubelet[2583]: I0302 13:02:22.220550 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2df5c8c3-6cbc-4f87-a63e-40b74974038e-var-lib-calico\") pod \"calico-node-2zj94\" (UID: \"2df5c8c3-6cbc-4f87-a63e-40b74974038e\") " pod="calico-system/calico-node-2zj94"
Mar 2 13:02:22.222141 kubelet[2583]: I0302 13:02:22.220601 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2df5c8c3-6cbc-4f87-a63e-40b74974038e-flexvol-driver-host\") pod \"calico-node-2zj94\" (UID: \"2df5c8c3-6cbc-4f87-a63e-40b74974038e\") " pod="calico-system/calico-node-2zj94"
Mar 2 13:02:22.222141 kubelet[2583]: I0302 13:02:22.220635 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2df5c8c3-6cbc-4f87-a63e-40b74974038e-cni-log-dir\") pod \"calico-node-2zj94\" (UID: \"2df5c8c3-6cbc-4f87-a63e-40b74974038e\") " pod="calico-system/calico-node-2zj94"
Mar 2 13:02:22.222141 kubelet[2583]: I0302 13:02:22.220650 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2df5c8c3-6cbc-4f87-a63e-40b74974038e-lib-modules\") pod \"calico-node-2zj94\" (UID: \"2df5c8c3-6cbc-4f87-a63e-40b74974038e\") " pod="calico-system/calico-node-2zj94"
Mar 2 13:02:22.222141 kubelet[2583]: I0302 13:02:22.220665 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/2df5c8c3-6cbc-4f87-a63e-40b74974038e-nodeproc\") pod \"calico-node-2zj94\" (UID: \"2df5c8c3-6cbc-4f87-a63e-40b74974038e\") " pod="calico-system/calico-node-2zj94"
Mar 2 13:02:22.222141 kubelet[2583]: I0302 13:02:22.220681 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2df5c8c3-6cbc-4f87-a63e-40b74974038e-tigera-ca-bundle\") pod \"calico-node-2zj94\" (UID: \"2df5c8c3-6cbc-4f87-a63e-40b74974038e\") " pod="calico-system/calico-node-2zj94"
Mar 2 13:02:22.222434 kubelet[2583]: I0302 13:02:22.220707 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2df5c8c3-6cbc-4f87-a63e-40b74974038e-var-run-calico\") pod \"calico-node-2zj94\" (UID: \"2df5c8c3-6cbc-4f87-a63e-40b74974038e\") " pod="calico-system/calico-node-2zj94"
Mar 2 13:02:22.222434 kubelet[2583]: I0302 13:02:22.220730 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2df5c8c3-6cbc-4f87-a63e-40b74974038e-cni-bin-dir\") pod \"calico-node-2zj94\" (UID: \"2df5c8c3-6cbc-4f87-a63e-40b74974038e\") " pod="calico-system/calico-node-2zj94"
Mar 2 13:02:22.222434 kubelet[2583]: I0302 13:02:22.220782 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2df5c8c3-6cbc-4f87-a63e-40b74974038e-policysync\") pod \"calico-node-2zj94\" (UID: \"2df5c8c3-6cbc-4f87-a63e-40b74974038e\") " pod="calico-system/calico-node-2zj94"
Mar 2 13:02:22.222434 kubelet[2583]: I0302 13:02:22.220806 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/2df5c8c3-6cbc-4f87-a63e-40b74974038e-sys-fs\") pod \"calico-node-2zj94\" (UID: \"2df5c8c3-6cbc-4f87-a63e-40b74974038e\") " pod="calico-system/calico-node-2zj94"
Mar 2 13:02:22.222434 kubelet[2583]: I0302 13:02:22.220858 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2df5c8c3-6cbc-4f87-a63e-40b74974038e-cni-net-dir\") pod \"calico-node-2zj94\" (UID: \"2df5c8c3-6cbc-4f87-a63e-40b74974038e\") " pod="calico-system/calico-node-2zj94"
Mar 2 13:02:22.222744 kubelet[2583]: I0302 13:02:22.221672 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2df5c8c3-6cbc-4f87-a63e-40b74974038e-node-certs\") pod \"calico-node-2zj94\" (UID: \"2df5c8c3-6cbc-4f87-a63e-40b74974038e\") " pod="calico-system/calico-node-2zj94"
Mar 2 13:02:22.222744 kubelet[2583]: I0302 13:02:22.221705 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2df5c8c3-6cbc-4f87-a63e-40b74974038e-xtables-lock\") pod \"calico-node-2zj94\" (UID: \"2df5c8c3-6cbc-4f87-a63e-40b74974038e\") " pod="calico-system/calico-node-2zj94"
Mar 2 13:02:22.222744 kubelet[2583]: I0302 13:02:22.221755 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpkrk\" (UniqueName: \"kubernetes.io/projected/2df5c8c3-6cbc-4f87-a63e-40b74974038e-kube-api-access-cpkrk\") pod \"calico-node-2zj94\" (UID: \"2df5c8c3-6cbc-4f87-a63e-40b74974038e\") " pod="calico-system/calico-node-2zj94"
Mar 2 13:02:22.257053 kubelet[2583]: E0302 13:02:22.256694 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7c7n" podUID="c416f0d5-dcd3-417c-ab45-85ca0752b4e9"
Mar 2 13:02:22.346693 kubelet[2583]: I0302 13:02:22.346339 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c416f0d5-dcd3-417c-ab45-85ca0752b4e9-varrun\") pod \"csi-node-driver-j7c7n\" (UID: \"c416f0d5-dcd3-417c-ab45-85ca0752b4e9\") " pod="calico-system/csi-node-driver-j7c7n"
Mar 2 13:02:22.346693 kubelet[2583]: I0302 13:02:22.346698 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwf2x\" (UniqueName: \"kubernetes.io/projected/c416f0d5-dcd3-417c-ab45-85ca0752b4e9-kube-api-access-cwf2x\") pod \"csi-node-driver-j7c7n\" (UID: \"c416f0d5-dcd3-417c-ab45-85ca0752b4e9\") " pod="calico-system/csi-node-driver-j7c7n"
Mar 2 13:02:22.347676 kubelet[2583]: I0302 13:02:22.346797 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c416f0d5-dcd3-417c-ab45-85ca0752b4e9-kubelet-dir\") pod \"csi-node-driver-j7c7n\" (UID: \"c416f0d5-dcd3-417c-ab45-85ca0752b4e9\") " pod="calico-system/csi-node-driver-j7c7n"
Mar 2 13:02:22.347676 kubelet[2583]: I0302 13:02:22.347006 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c416f0d5-dcd3-417c-ab45-85ca0752b4e9-registration-dir\") pod \"csi-node-driver-j7c7n\" (UID: \"c416f0d5-dcd3-417c-ab45-85ca0752b4e9\") " pod="calico-system/csi-node-driver-j7c7n"
Mar 2 13:02:22.347676 kubelet[2583]: I0302 13:02:22.347037 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c416f0d5-dcd3-417c-ab45-85ca0752b4e9-socket-dir\") pod \"csi-node-driver-j7c7n\" (UID: \"c416f0d5-dcd3-417c-ab45-85ca0752b4e9\") " pod="calico-system/csi-node-driver-j7c7n"
Mar 2 13:02:22.357037 kubelet[2583]: E0302 13:02:22.356962 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.357037 kubelet[2583]: W0302 13:02:22.357027 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.357214 kubelet[2583]: E0302 13:02:22.357077 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.357478 kubelet[2583]: E0302 13:02:22.357437 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.357478 kubelet[2583]: W0302 13:02:22.357467 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.357542 kubelet[2583]: E0302 13:02:22.357483 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.361793 kubelet[2583]: E0302 13:02:22.361739 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.361793 kubelet[2583]: W0302 13:02:22.361776 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.361793 kubelet[2583]: E0302 13:02:22.361795 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.374449 kubelet[2583]: E0302 13:02:22.374391 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:22.379367 containerd[1464]: time="2026-03-02T13:02:22.377160320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fd8bdddb9-5rmc4,Uid:0f7362a5-75d3-44cb-ba4f-557e7fb4b65a,Namespace:calico-system,Attempt:0,}"
Mar 2 13:02:22.388408 kubelet[2583]: E0302 13:02:22.388357 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.388408 kubelet[2583]: W0302 13:02:22.388399 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.388408 kubelet[2583]: E0302 13:02:22.388426 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.428439 containerd[1464]: time="2026-03-02T13:02:22.428238319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:02:22.428721 containerd[1464]: time="2026-03-02T13:02:22.428465823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:02:22.428721 containerd[1464]: time="2026-03-02T13:02:22.428497311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:02:22.428914 containerd[1464]: time="2026-03-02T13:02:22.428737758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:02:22.448096 kubelet[2583]: E0302 13:02:22.448004 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.448096 kubelet[2583]: W0302 13:02:22.448047 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.448096 kubelet[2583]: E0302 13:02:22.448071 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.449074 kubelet[2583]: E0302 13:02:22.449044 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.449074 kubelet[2583]: W0302 13:02:22.449060 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.449074 kubelet[2583]: E0302 13:02:22.449071 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.450513 kubelet[2583]: E0302 13:02:22.450390 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.450513 kubelet[2583]: W0302 13:02:22.450407 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.450513 kubelet[2583]: E0302 13:02:22.450419 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.450958 containerd[1464]: time="2026-03-02T13:02:22.450911995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2zj94,Uid:2df5c8c3-6cbc-4f87-a63e-40b74974038e,Namespace:calico-system,Attempt:0,}"
Mar 2 13:02:22.451157 kubelet[2583]: E0302 13:02:22.451142 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.451157 kubelet[2583]: W0302 13:02:22.451155 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.451224 kubelet[2583]: E0302 13:02:22.451168 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.451744 kubelet[2583]: E0302 13:02:22.451692 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.451744 kubelet[2583]: W0302 13:02:22.451736 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.451838 kubelet[2583]: E0302 13:02:22.451752 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.452659 kubelet[2583]: E0302 13:02:22.452510 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.452659 kubelet[2583]: W0302 13:02:22.452543 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.452659 kubelet[2583]: E0302 13:02:22.452624 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.453804 kubelet[2583]: E0302 13:02:22.453632 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.453804 kubelet[2583]: W0302 13:02:22.453645 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.453804 kubelet[2583]: E0302 13:02:22.453655 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.454219 kubelet[2583]: E0302 13:02:22.454092 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.454219 kubelet[2583]: W0302 13:02:22.454105 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.454219 kubelet[2583]: E0302 13:02:22.454114 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.454804 kubelet[2583]: E0302 13:02:22.454774 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.454804 kubelet[2583]: W0302 13:02:22.454798 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.454889 kubelet[2583]: E0302 13:02:22.454810 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.455744 kubelet[2583]: E0302 13:02:22.455686 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.455744 kubelet[2583]: W0302 13:02:22.455718 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.455744 kubelet[2583]: E0302 13:02:22.455729 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.456139 kubelet[2583]: E0302 13:02:22.456084 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.456139 kubelet[2583]: W0302 13:02:22.456111 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.456139 kubelet[2583]: E0302 13:02:22.456122 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.456537 kubelet[2583]: E0302 13:02:22.456487 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.456537 kubelet[2583]: W0302 13:02:22.456519 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.456537 kubelet[2583]: E0302 13:02:22.456530 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.457043 kubelet[2583]: E0302 13:02:22.456963 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.457043 kubelet[2583]: W0302 13:02:22.457017 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.457043 kubelet[2583]: E0302 13:02:22.457028 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.457526 kubelet[2583]: E0302 13:02:22.457468 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.457526 kubelet[2583]: W0302 13:02:22.457481 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.457526 kubelet[2583]: E0302 13:02:22.457490 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.459299 kubelet[2583]: E0302 13:02:22.458193 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.459299 kubelet[2583]: W0302 13:02:22.458202 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.459299 kubelet[2583]: E0302 13:02:22.458215 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.459299 kubelet[2583]: E0302 13:02:22.458536 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.459299 kubelet[2583]: W0302 13:02:22.458545 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.459299 kubelet[2583]: E0302 13:02:22.458554 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.459299 kubelet[2583]: E0302 13:02:22.459216 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.459299 kubelet[2583]: W0302 13:02:22.459227 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.459299 kubelet[2583]: E0302 13:02:22.459236 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.459664 kubelet[2583]: E0302 13:02:22.459509 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.459664 kubelet[2583]: W0302 13:02:22.459553 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.459664 kubelet[2583]: E0302 13:02:22.459598 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.460853 kubelet[2583]: E0302 13:02:22.459898 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.460853 kubelet[2583]: W0302 13:02:22.460213 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.460853 kubelet[2583]: E0302 13:02:22.460227 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.461746 kubelet[2583]: E0302 13:02:22.461693 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.461746 kubelet[2583]: W0302 13:02:22.461724 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.461746 kubelet[2583]: E0302 13:02:22.461735 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.462392 kubelet[2583]: E0302 13:02:22.462332 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.462392 kubelet[2583]: W0302 13:02:22.462368 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.462392 kubelet[2583]: E0302 13:02:22.462380 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.462774 kubelet[2583]: E0302 13:02:22.462744 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.462774 kubelet[2583]: W0302 13:02:22.462767 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.462925 kubelet[2583]: E0302 13:02:22.462778 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.463488 kubelet[2583]: E0302 13:02:22.463437 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.463488 kubelet[2583]: W0302 13:02:22.463466 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.463488 kubelet[2583]: E0302 13:02:22.463477 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.463868 kubelet[2583]: E0302 13:02:22.463815 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.463868 kubelet[2583]: W0302 13:02:22.463842 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.463868 kubelet[2583]: E0302 13:02:22.463853 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.466652 kubelet[2583]: E0302 13:02:22.464448 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.466652 kubelet[2583]: W0302 13:02:22.464460 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.466652 kubelet[2583]: E0302 13:02:22.464470 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.466946 systemd[1]: Started cri-containerd-86d76e504283a9674d4ff8495883266927689e83ebfa2b6c9b6c00a46b4f5e45.scope - libcontainer container 86d76e504283a9674d4ff8495883266927689e83ebfa2b6c9b6c00a46b4f5e45.
Mar 2 13:02:22.477660 kubelet[2583]: E0302 13:02:22.477617 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:22.477660 kubelet[2583]: W0302 13:02:22.477653 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:22.477760 kubelet[2583]: E0302 13:02:22.477680 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:22.495928 containerd[1464]: time="2026-03-02T13:02:22.495397926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:02:22.495928 containerd[1464]: time="2026-03-02T13:02:22.495510727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:02:22.495928 containerd[1464]: time="2026-03-02T13:02:22.495523060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:02:22.495928 containerd[1464]: time="2026-03-02T13:02:22.495759800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:02:22.522813 systemd[1]: Started cri-containerd-13220b63355f9f96ac9a4975e28a1e0a7d5365a53cb89f4b4e4aa0a164b984c1.scope - libcontainer container 13220b63355f9f96ac9a4975e28a1e0a7d5365a53cb89f4b4e4aa0a164b984c1.
Mar 2 13:02:22.530165 containerd[1464]: time="2026-03-02T13:02:22.530090569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fd8bdddb9-5rmc4,Uid:0f7362a5-75d3-44cb-ba4f-557e7fb4b65a,Namespace:calico-system,Attempt:0,} returns sandbox id \"86d76e504283a9674d4ff8495883266927689e83ebfa2b6c9b6c00a46b4f5e45\""
Mar 2 13:02:22.531098 kubelet[2583]: E0302 13:02:22.531036 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:22.534859 containerd[1464]: time="2026-03-02T13:02:22.534481067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\""
Mar 2 13:02:22.564943 containerd[1464]: time="2026-03-02T13:02:22.564766187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2zj94,Uid:2df5c8c3-6cbc-4f87-a63e-40b74974038e,Namespace:calico-system,Attempt:0,} returns sandbox id \"13220b63355f9f96ac9a4975e28a1e0a7d5365a53cb89f4b4e4aa0a164b984c1\""
Mar 2 13:02:24.340090 kubelet[2583]: E0302 13:02:24.339932 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7c7n" podUID="c416f0d5-dcd3-417c-ab45-85ca0752b4e9"
Mar 2 13:02:26.340204 kubelet[2583]: E0302 13:02:26.340128 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7c7n" podUID="c416f0d5-dcd3-417c-ab45-85ca0752b4e9"
Mar 2 13:02:26.690316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1742329899.mount: Deactivated successfully.
Mar 2 13:02:28.340173 kubelet[2583]: E0302 13:02:28.340082 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7c7n" podUID="c416f0d5-dcd3-417c-ab45-85ca0752b4e9"
Mar 2 13:02:28.370788 containerd[1464]: time="2026-03-02T13:02:28.370690667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:02:28.371942 containerd[1464]: time="2026-03-02T13:02:28.371851082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.3: active requests=0, bytes read=36094696"
Mar 2 13:02:28.373557 containerd[1464]: time="2026-03-02T13:02:28.373477250Z" level=info msg="ImageCreate event name:\"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:02:28.376082 containerd[1464]: time="2026-03-02T13:02:28.376040660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:02:28.376935 containerd[1464]: time="2026-03-02T13:02:28.376886799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.3\" with image id \"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\", size \"36094550\" in 5.842341693s"
Mar 2 13:02:28.376935 containerd[1464]: time="2026-03-02T13:02:28.376938856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\" returns image reference \"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\""
Mar 2 13:02:28.378445 containerd[1464]: time="2026-03-02T13:02:28.378388155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\""
Mar 2 13:02:28.399385 containerd[1464]: time="2026-03-02T13:02:28.399344356Z" level=info msg="CreateContainer within sandbox \"86d76e504283a9674d4ff8495883266927689e83ebfa2b6c9b6c00a46b4f5e45\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 2 13:02:28.418712 containerd[1464]: time="2026-03-02T13:02:28.418629437Z" level=info msg="CreateContainer within sandbox \"86d76e504283a9674d4ff8495883266927689e83ebfa2b6c9b6c00a46b4f5e45\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1fcc19e2d26a1a4540ec89585fdbb7b30b9ecb4e679d75fe5de9580f6889e277\""
Mar 2 13:02:28.419620 containerd[1464]: time="2026-03-02T13:02:28.419529372Z" level=info msg="StartContainer for \"1fcc19e2d26a1a4540ec89585fdbb7b30b9ecb4e679d75fe5de9580f6889e277\""
Mar 2 13:02:28.498778 systemd[1]: Started cri-containerd-1fcc19e2d26a1a4540ec89585fdbb7b30b9ecb4e679d75fe5de9580f6889e277.scope - libcontainer container 1fcc19e2d26a1a4540ec89585fdbb7b30b9ecb4e679d75fe5de9580f6889e277.
Mar 2 13:02:28.554382 containerd[1464]: time="2026-03-02T13:02:28.554245956Z" level=info msg="StartContainer for \"1fcc19e2d26a1a4540ec89585fdbb7b30b9ecb4e679d75fe5de9580f6889e277\" returns successfully"
Mar 2 13:02:28.597020 kubelet[2583]: E0302 13:02:28.596823 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:28.664029 kubelet[2583]: E0302 13:02:28.663896 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.664029 kubelet[2583]: W0302 13:02:28.663998 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.664029 kubelet[2583]: E0302 13:02:28.664035 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.664448 kubelet[2583]: E0302 13:02:28.664423 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.664448 kubelet[2583]: W0302 13:02:28.664436 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.664499 kubelet[2583]: E0302 13:02:28.664453 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.665063 kubelet[2583]: E0302 13:02:28.664997 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.665063 kubelet[2583]: W0302 13:02:28.665027 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.665063 kubelet[2583]: E0302 13:02:28.665043 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.665652 kubelet[2583]: E0302 13:02:28.665542 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.665652 kubelet[2583]: W0302 13:02:28.665640 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.665707 kubelet[2583]: E0302 13:02:28.665657 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.666074 kubelet[2583]: E0302 13:02:28.666043 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.666074 kubelet[2583]: W0302 13:02:28.666073 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.666171 kubelet[2583]: E0302 13:02:28.666088 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.666435 kubelet[2583]: E0302 13:02:28.666394 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.666435 kubelet[2583]: W0302 13:02:28.666418 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.666435 kubelet[2583]: E0302 13:02:28.666429 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.666798 kubelet[2583]: E0302 13:02:28.666747 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.666798 kubelet[2583]: W0302 13:02:28.666779 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.666798 kubelet[2583]: E0302 13:02:28.666789 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.667151 kubelet[2583]: E0302 13:02:28.667108 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.667151 kubelet[2583]: W0302 13:02:28.667123 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.667151 kubelet[2583]: E0302 13:02:28.667133 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.667467 kubelet[2583]: E0302 13:02:28.667435 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.667467 kubelet[2583]: W0302 13:02:28.667465 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.667467 kubelet[2583]: E0302 13:02:28.667483 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.667896 kubelet[2583]: E0302 13:02:28.667869 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.667896 kubelet[2583]: W0302 13:02:28.667896 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.667946 kubelet[2583]: E0302 13:02:28.667910 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.668392 kubelet[2583]: E0302 13:02:28.668362 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.668392 kubelet[2583]: W0302 13:02:28.668390 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.668480 kubelet[2583]: E0302 13:02:28.668405 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.670318 kubelet[2583]: E0302 13:02:28.669942 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.670318 kubelet[2583]: W0302 13:02:28.669999 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.670318 kubelet[2583]: E0302 13:02:28.670016 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.670448 kubelet[2583]: E0302 13:02:28.670391 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.670448 kubelet[2583]: W0302 13:02:28.670406 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.670448 kubelet[2583]: E0302 13:02:28.670418 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.671098 kubelet[2583]: E0302 13:02:28.671034 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.671098 kubelet[2583]: W0302 13:02:28.671055 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.671098 kubelet[2583]: E0302 13:02:28.671071 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.671737 kubelet[2583]: E0302 13:02:28.671678 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.671737 kubelet[2583]: W0302 13:02:28.671713 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.671737 kubelet[2583]: E0302 13:02:28.671727 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.707811 kubelet[2583]: E0302 13:02:28.707748 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.707811 kubelet[2583]: W0302 13:02:28.707788 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.707811 kubelet[2583]: E0302 13:02:28.707814 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.710060 kubelet[2583]: E0302 13:02:28.709859 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.710060 kubelet[2583]: W0302 13:02:28.709886 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.710060 kubelet[2583]: E0302 13:02:28.709904 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.710620 kubelet[2583]: E0302 13:02:28.710533 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.710620 kubelet[2583]: W0302 13:02:28.710614 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.710695 kubelet[2583]: E0302 13:02:28.710628 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.712215 kubelet[2583]: E0302 13:02:28.712168 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.712269 kubelet[2583]: W0302 13:02:28.712220 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.712269 kubelet[2583]: E0302 13:02:28.712241 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.713242 kubelet[2583]: E0302 13:02:28.713190 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.713242 kubelet[2583]: W0302 13:02:28.713231 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.713318 kubelet[2583]: E0302 13:02:28.713248 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.713757 kubelet[2583]: E0302 13:02:28.713711 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.713757 kubelet[2583]: W0302 13:02:28.713739 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.713831 kubelet[2583]: E0302 13:02:28.713798 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.715121 kubelet[2583]: E0302 13:02:28.714491 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.715121 kubelet[2583]: W0302 13:02:28.714512 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.715121 kubelet[2583]: E0302 13:02:28.714525 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.715254 kubelet[2583]: E0302 13:02:28.715147 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.715254 kubelet[2583]: W0302 13:02:28.715164 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.715254 kubelet[2583]: E0302 13:02:28.715177 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:02:28.716006 kubelet[2583]: E0302 13:02:28.715941 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:02:28.716006 kubelet[2583]: W0302 13:02:28.716001 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:02:28.716115 kubelet[2583]: E0302 13:02:28.716046 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 2 13:02:28.717165 kubelet[2583]: E0302 13:02:28.717088 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:02:28.717165 kubelet[2583]: W0302 13:02:28.717154 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:02:28.717225 kubelet[2583]: E0302 13:02:28.717170 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:02:28.717982 kubelet[2583]: E0302 13:02:28.717906 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:02:28.717982 kubelet[2583]: W0302 13:02:28.717944 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:02:28.718075 kubelet[2583]: E0302 13:02:28.717990 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:02:28.719230 kubelet[2583]: E0302 13:02:28.719101 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:02:28.719230 kubelet[2583]: W0302 13:02:28.719143 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:02:28.719230 kubelet[2583]: E0302 13:02:28.719160 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:02:28.720021 kubelet[2583]: E0302 13:02:28.719945 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:02:28.720021 kubelet[2583]: W0302 13:02:28.720016 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:02:28.720083 kubelet[2583]: E0302 13:02:28.720034 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:02:28.722523 kubelet[2583]: E0302 13:02:28.722403 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:02:28.722671 kubelet[2583]: W0302 13:02:28.722625 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:02:28.722761 kubelet[2583]: E0302 13:02:28.722713 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:02:28.723998 kubelet[2583]: E0302 13:02:28.723935 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:02:28.723998 kubelet[2583]: W0302 13:02:28.723995 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:02:28.724074 kubelet[2583]: E0302 13:02:28.724012 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:02:28.724555 kubelet[2583]: E0302 13:02:28.724510 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:02:28.724555 kubelet[2583]: W0302 13:02:28.724547 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:02:28.724668 kubelet[2583]: E0302 13:02:28.724640 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:02:28.725942 kubelet[2583]: E0302 13:02:28.725881 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:02:28.725942 kubelet[2583]: W0302 13:02:28.725923 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:02:28.725942 kubelet[2583]: E0302 13:02:28.725937 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:02:28.727680 kubelet[2583]: E0302 13:02:28.727607 2583 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:02:28.727734 kubelet[2583]: W0302 13:02:28.727688 2583 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:02:28.727734 kubelet[2583]: E0302 13:02:28.727705 2583 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
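The three-message burst above is kubelet's FlexVolume probe loop. On each scan of the plugin directory, kubelet executes the driver binary (here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds) with the subcommand as its first argument and parses stdout as JSON; because the binary is missing, the output is empty and the JSON decode fails with "unexpected end of JSON input". A minimal sketch of the documented init handshake follows, assuming the standard FlexVolume call convention; it is a stub for illustration, not the real nodeagent~uds driver:

```go
// Minimal sketch of a FlexVolume driver's "init" handshake: kubelet exec's
// the driver with a verb as argv[1] and expects a JSON status object on
// stdout. A stub like this, installed at the path in the errors above,
// would satisfy the probe instead of producing empty output.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Advertise that this driver needs no attach/detach support.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any verb this stub does not implement: report "Not supported".
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```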
Error: unexpected end of JSON input"
Mar 2 13:02:29.599895 kubelet[2583]: I0302 13:02:29.599849 2583 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 2 13:02:29.600756 kubelet[2583]: E0302 13:02:29.600330 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:29.679248 containerd[1464]: time="2026-03-02T13:02:29.679171844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:02:29.680325 containerd[1464]: time="2026-03-02T13:02:29.680151629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3: active requests=0, bytes read=4630152"
Mar 2 13:02:29.681381 kubelet[2583]: [FlexVolume init error triplet (driver-call.go:262, driver-call.go:149, plugins.go:697) repeated 4 times between 13:02:29.679 and 13:02:29.681, interleaved with the containerd records above; full text at 13:02:28.666]
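The dns.go warning reflects glibc's hard limit of three `nameserver` entries: the host resolv.conf evidently lists more than three, and kubelet keeps only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) when composing a pod's resolv.conf. A sketch of that clamping behaviour; `clampNameservers` is a hypothetical helper, not kubelet's actual function, and the fourth entry is a made-up example:

```go
// Sketch of the clamping behind "Nameserver limits exceeded": glibc
// resolvers only consult the first three nameserver entries, so kubelet
// trims the host's list and logs the entries it applied.
package main

import "fmt"

const maxNameservers = 3 // glibc's MAXNS

func clampNameservers(ns []string) (kept, dropped []string) {
	if len(ns) <= maxNameservers {
		return ns, nil
	}
	return ns[:maxNameservers], ns[maxNameservers:]
}

func main() {
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"} // 4th entry hypothetical
	kept, dropped := clampNameservers(host)
	fmt.Println("applied:", kept, "omitted:", dropped)
}
```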
Mar 2 13:02:29.681709 containerd[1464]: time="2026-03-02T13:02:29.681638267Z" level=info msg="ImageCreate event name:\"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:02:29.685734 containerd[1464]: time="2026-03-02T13:02:29.684634891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:02:29.685734 containerd[1464]: time="2026-03-02T13:02:29.685339438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" with image id \"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\", size \"6186157\" in 1.306924363s"
Mar 2 13:02:29.685734 containerd[1464]: time="2026-03-02T13:02:29.685368462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" returns image reference \"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\""
Mar 2 13:02:29.686020 kubelet[2583]: [FlexVolume init error triplet repeated 11 times between 13:02:29.681 and 13:02:29.686, interleaved with the containerd records above]
Mar 2 13:02:29.690846 containerd[1464]: time="2026-03-02T13:02:29.690788945Z" level=info msg="CreateContainer within sandbox \"13220b63355f9f96ac9a4975e28a1e0a7d5365a53cb89f4b4e4aa0a164b984c1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 2 13:02:29.706195 containerd[1464]: time="2026-03-02T13:02:29.706155239Z" level=info msg="CreateContainer within sandbox \"13220b63355f9f96ac9a4975e28a1e0a7d5365a53cb89f4b4e4aa0a164b984c1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"09aa2c22be72f686b33dbcf5443b316617b8b1cbe7cf67cecdbc8975c38c350c\""
Mar 2 13:02:29.706847 containerd[1464]: time="2026-03-02T13:02:29.706736596Z" level=info msg="StartContainer for \"09aa2c22be72f686b33dbcf5443b316617b8b1cbe7cf67cecdbc8975c38c350c\""
Mar 2 13:02:29.731767 kubelet[2583]: [FlexVolume init error triplet repeated 18 times between 13:02:29.719 and 13:02:29.731]
Mar 2 13:02:29.756790 systemd[1]: Started cri-containerd-09aa2c22be72f686b33dbcf5443b316617b8b1cbe7cf67cecdbc8975c38c350c.scope - libcontainer container 09aa2c22be72f686b33dbcf5443b316617b8b1cbe7cf67cecdbc8975c38c350c.
Mar 2 13:02:29.822248 systemd[1]: cri-containerd-09aa2c22be72f686b33dbcf5443b316617b8b1cbe7cf67cecdbc8975c38c350c.scope: Deactivated successfully.
Mar 2 13:02:29.830939 containerd[1464]: time="2026-03-02T13:02:29.830812838Z" level=info msg="StartContainer for \"09aa2c22be72f686b33dbcf5443b316617b8b1cbe7cf67cecdbc8975c38c350c\" returns successfully"
Mar 2 13:02:29.866174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09aa2c22be72f686b33dbcf5443b316617b8b1cbe7cf67cecdbc8975c38c350c-rootfs.mount: Deactivated successfully.
Mar 2 13:02:29.909756 containerd[1464]: time="2026-03-02T13:02:29.909524676Z" level=info msg="shim disconnected" id=09aa2c22be72f686b33dbcf5443b316617b8b1cbe7cf67cecdbc8975c38c350c namespace=k8s.io
Mar 2 13:02:29.909756 containerd[1464]: time="2026-03-02T13:02:29.909737502Z" level=warning msg="cleaning up after shim disconnected" id=09aa2c22be72f686b33dbcf5443b316617b8b1cbe7cf67cecdbc8975c38c350c namespace=k8s.io
Mar 2 13:02:29.909756 containerd[1464]: time="2026-03-02T13:02:29.909752319Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:02:30.340013 kubelet[2583]: E0302 13:02:30.339898 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7c7n" podUID="c416f0d5-dcd3-417c-ab45-85ca0752b4e9"
Mar 2 13:02:30.604345 containerd[1464]: time="2026-03-02T13:02:30.604171319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\""
Mar 2 13:02:30.622625 kubelet[2583]: I0302 13:02:30.622465 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6fd8bdddb9-5rmc4" podStartSLOduration=3.7776394460000002 podStartE2EDuration="9.622420376s" podCreationTimestamp="2026-03-02 13:02:21 +0000 UTC" firstStartedPulling="2026-03-02 13:02:22.533463736 +0000 UTC m=+27.413645263" lastFinishedPulling="2026-03-02 13:02:28.378244666 +0000 UTC m=+33.258426193" observedRunningTime="2026-03-02 13:02:28.617257961 +0000 UTC m=+33.497439509" watchObservedRunningTime="2026-03-02 13:02:30.622420376 +0000 UTC m=+35.502601903"
Mar 2 13:02:32.339999 kubelet[2583]: E0302 13:02:32.339877 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7c7n" podUID="c416f0d5-dcd3-417c-ab45-85ca0752b4e9"
Mar 2 13:02:34.340079 kubelet[2583]: E0302 13:02:34.339916 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7c7n" podUID="c416f0d5-dcd3-417c-ab45-85ca0752b4e9"
Mar 2 13:02:34.719274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1061384825.mount: Deactivated successfully.
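The calico-typha startup record above carries enough timestamps to reproduce both of its durations: the E2E duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A sketch of the arithmetic, using the timestamps exactly as they appear in the log:

```go
// Reproduces podStartE2EDuration (9.622420376s) and podStartSLOduration
// (3.777639446s) from the pod_startup_latency_tracker record above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-03-02 13:02:21 +0000 UTC")
	running := mustParse("2026-03-02 13:02:30.622420376 +0000 UTC") // watchObservedRunningTime
	pullStart := mustParse("2026-03-02 13:02:22.533463736 +0000 UTC")
	pullEnd := mustParse("2026-03-02 13:02:28.378244666 +0000 UTC")

	e2e := running.Sub(created)           // 9.622420376s
	slo := e2e - pullEnd.Sub(pullStart)   // 9.622420376s - 5.84478093s = 3.777639446s
	fmt.Println(e2e, slo)
}
```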
Mar 2 13:02:34.903524 containerd[1464]: time="2026-03-02T13:02:34.903390893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:02:34.904934 containerd[1464]: time="2026-03-02T13:02:34.904353330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.3: active requests=0, bytes read=159483365"
Mar 2 13:02:34.919397 containerd[1464]: time="2026-03-02T13:02:34.916472201Z" level=info msg="ImageCreate event name:\"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:02:34.926768 containerd[1464]: time="2026-03-02T13:02:34.926700046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:02:34.927498 containerd[1464]: time="2026-03-02T13:02:34.927443396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.3\" with image id \"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\", size \"159483227\" in 4.323206004s"
Mar 2 13:02:34.927498 containerd[1464]: time="2026-03-02T13:02:34.927486896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\" returns image reference \"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\""
Mar 2 13:02:34.932854 containerd[1464]: time="2026-03-02T13:02:34.932801450Z" level=info msg="CreateContainer within sandbox \"13220b63355f9f96ac9a4975e28a1e0a7d5365a53cb89f4b4e4aa0a164b984c1\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 2 13:02:34.956070 containerd[1464]: time="2026-03-02T13:02:34.956005630Z" level=info msg="CreateContainer within sandbox \"13220b63355f9f96ac9a4975e28a1e0a7d5365a53cb89f4b4e4aa0a164b984c1\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"036f56e63518322f58cb1060f877e5705d677f92d67f34376066ecbb325d9f49\""
Mar 2 13:02:34.956854 containerd[1464]: time="2026-03-02T13:02:34.956762293Z" level=info msg="StartContainer for \"036f56e63518322f58cb1060f877e5705d677f92d67f34376066ecbb325d9f49\""
Mar 2 13:02:35.034759 systemd[1]: Started cri-containerd-036f56e63518322f58cb1060f877e5705d677f92d67f34376066ecbb325d9f49.scope - libcontainer container 036f56e63518322f58cb1060f877e5705d677f92d67f34376066ecbb325d9f49.
Mar 2 13:02:35.079208 containerd[1464]: time="2026-03-02T13:02:35.079026142Z" level=info msg="StartContainer for \"036f56e63518322f58cb1060f877e5705d677f92d67f34376066ecbb325d9f49\" returns successfully"
Mar 2 13:02:35.145205 systemd[1]: cri-containerd-036f56e63518322f58cb1060f877e5705d677f92d67f34376066ecbb325d9f49.scope: Deactivated successfully.
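The two pull records above also yield a back-of-envelope transfer rate for the calico/node image: 159,483,365 bytes read over the 4.323206004s pull, roughly 36.9 MB/s. A trivial sketch of the arithmetic:

```go
// Pull throughput for calico/node:v3.31.3, straight from the "bytes read"
// and "Pulled image ... in" values logged above.
package main

import "fmt"

func main() {
	bytes := 159483365.0
	secs := 4.323206004
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", bytes/secs/1e6, bytes/secs/(1<<20))
}
```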
Mar 2 13:02:35.214116 containerd[1464]: time="2026-03-02T13:02:35.213239272Z" level=info msg="shim disconnected" id=036f56e63518322f58cb1060f877e5705d677f92d67f34376066ecbb325d9f49 namespace=k8s.io
Mar 2 13:02:35.214116 containerd[1464]: time="2026-03-02T13:02:35.213439466Z" level=warning msg="cleaning up after shim disconnected" id=036f56e63518322f58cb1060f877e5705d677f92d67f34376066ecbb325d9f49 namespace=k8s.io
Mar 2 13:02:35.214116 containerd[1464]: time="2026-03-02T13:02:35.213451147Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:02:35.673418 containerd[1464]: time="2026-03-02T13:02:35.673336668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\""
Mar 2 13:02:35.718107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-036f56e63518322f58cb1060f877e5705d677f92d67f34376066ecbb325d9f49-rootfs.mount: Deactivated successfully.
Mar 2 13:02:36.344220 kubelet[2583]: E0302 13:02:36.343433 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7c7n" podUID="c416f0d5-dcd3-417c-ab45-85ca0752b4e9"
Mar 2 13:02:37.519220 containerd[1464]: time="2026-03-02T13:02:37.519103599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:02:37.520253 containerd[1464]: time="2026-03-02T13:02:37.520029822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.3: active requests=0, bytes read=70584418"
Mar 2 13:02:37.521502 containerd[1464]: time="2026-03-02T13:02:37.521450390Z" level=info msg="ImageCreate event name:\"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:02:37.524032 containerd[1464]: time="2026-03-02T13:02:37.524000301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:02:37.525070 containerd[1464]: time="2026-03-02T13:02:37.524980867Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.3\" with image id \"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\", size \"72140463\" in 1.851532251s"
Mar 2 13:02:37.525070 containerd[1464]: time="2026-03-02T13:02:37.525027905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\" returns image reference \"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\""
Mar 2 13:02:37.530258 containerd[1464]: time="2026-03-02T13:02:37.530211533Z" level=info msg="CreateContainer within sandbox \"13220b63355f9f96ac9a4975e28a1e0a7d5365a53cb89f4b4e4aa0a164b984c1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 2 13:02:37.587825 containerd[1464]: time="2026-03-02T13:02:37.587751852Z" level=info msg="CreateContainer within sandbox \"13220b63355f9f96ac9a4975e28a1e0a7d5365a53cb89f4b4e4aa0a164b984c1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"96675ef7d2feba2d4646fe99c1c082acaee66c0b7bdb72e95472093493f96a23\""
Mar 2 13:02:37.588283 containerd[1464]: time="2026-03-02T13:02:37.588236244Z" level=info msg="StartContainer for \"96675ef7d2feba2d4646fe99c1c082acaee66c0b7bdb72e95472093493f96a23\""
Mar 2 13:02:37.659754 systemd[1]: Started cri-containerd-96675ef7d2feba2d4646fe99c1c082acaee66c0b7bdb72e95472093493f96a23.scope - libcontainer container 96675ef7d2feba2d4646fe99c1c082acaee66c0b7bdb72e95472093493f96a23.
Mar 2 13:02:37.697507 containerd[1464]: time="2026-03-02T13:02:37.697321331Z" level=info msg="StartContainer for \"96675ef7d2feba2d4646fe99c1c082acaee66c0b7bdb72e95472093493f96a23\" returns successfully"
Mar 2 13:02:38.340196 kubelet[2583]: E0302 13:02:38.340098 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j7c7n" podUID="c416f0d5-dcd3-417c-ab45-85ca0752b4e9"
Mar 2 13:02:38.970370 systemd[1]: cri-containerd-96675ef7d2feba2d4646fe99c1c082acaee66c0b7bdb72e95472093493f96a23.scope: Deactivated successfully.
Mar 2 13:02:38.970840 systemd[1]: cri-containerd-96675ef7d2feba2d4646fe99c1c082acaee66c0b7bdb72e95472093493f96a23.scope: Consumed 1.413s CPU time.
Mar 2 13:02:39.030079 kubelet[2583]: I0302 13:02:39.030037 2583 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 2 13:02:39.034993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96675ef7d2feba2d4646fe99c1c082acaee66c0b7bdb72e95472093493f96a23-rootfs.mount: Deactivated successfully.
Mar 2 13:02:39.058557 containerd[1464]: time="2026-03-02T13:02:39.058181901Z" level=info msg="shim disconnected" id=96675ef7d2feba2d4646fe99c1c082acaee66c0b7bdb72e95472093493f96a23 namespace=k8s.io
Mar 2 13:02:39.058557 containerd[1464]: time="2026-03-02T13:02:39.058271538Z" level=warning msg="cleaning up after shim disconnected" id=96675ef7d2feba2d4646fe99c1c082acaee66c0b7bdb72e95472093493f96a23 namespace=k8s.io
Mar 2 13:02:39.058557 containerd[1464]: time="2026-03-02T13:02:39.058285704Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:02:39.128818 systemd[1]: Created slice kubepods-burstable-podd871cd79_c660_4da5_b55c_eb8af06ee43e.slice - libcontainer container kubepods-burstable-podd871cd79_c660_4da5_b55c_eb8af06ee43e.slice.
Mar 2 13:02:39.147259 systemd[1]: Created slice kubepods-burstable-podfcbd4ee9_3b15_40c9_804b_4103bd692429.slice - libcontainer container kubepods-burstable-podfcbd4ee9_3b15_40c9_804b_4103bd692429.slice.
Mar 2 13:02:39.165385 systemd[1]: Created slice kubepods-besteffort-pod3499dad4_7c22_491a_9867_c5cb91b67e07.slice - libcontainer container kubepods-besteffort-pod3499dad4_7c22_491a_9867_c5cb91b67e07.slice.
Mar 2 13:02:39.180302 systemd[1]: Created slice kubepods-besteffort-pod10ddd28d_60cc_4ef0_9da9_c597c406cf38.slice - libcontainer container kubepods-besteffort-pod10ddd28d_60cc_4ef0_9da9_c597c406cf38.slice.
Mar 2 13:02:39.191749 kubelet[2583]: I0302 13:02:39.190965 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fcbd4ee9-3b15-40c9-804b-4103bd692429-config-volume\") pod \"coredns-66bc5c9577-wk98t\" (UID: \"fcbd4ee9-3b15-40c9-804b-4103bd692429\") " pod="kube-system/coredns-66bc5c9577-wk98t"
Mar 2 13:02:39.193815 kubelet[2583]: I0302 13:02:39.193777 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ec9dc82b-9518-4a49-8110-199c802cf620-calico-apiserver-certs\") pod \"calico-apiserver-866b8f6974-dbn4b\" (UID: \"ec9dc82b-9518-4a49-8110-199c802cf620\") " pod="calico-system/calico-apiserver-866b8f6974-dbn4b"
Mar 2 13:02:39.194293 kubelet[2583]: I0302 13:02:39.194269 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/62dbc2a5-2b7d-4fc7-af17-38dcb97940da-goldmane-key-pair\") pod \"goldmane-54d7f6b6d6-5zs8p\" (UID: \"62dbc2a5-2b7d-4fc7-af17-38dcb97940da\") " pod="calico-system/goldmane-54d7f6b6d6-5zs8p"
Mar 2 13:02:39.195325 kubelet[2583]: I0302 13:02:39.194466 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62dbc2a5-2b7d-4fc7-af17-38dcb97940da-goldmane-ca-bundle\") pod \"goldmane-54d7f6b6d6-5zs8p\" (UID: \"62dbc2a5-2b7d-4fc7-af17-38dcb97940da\") " pod="calico-system/goldmane-54d7f6b6d6-5zs8p"
Mar 2 13:02:39.195325 kubelet[2583]: I0302 13:02:39.194511 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwbkl\" (UniqueName: \"kubernetes.io/projected/62dbc2a5-2b7d-4fc7-af17-38dcb97940da-kube-api-access-lwbkl\") pod \"goldmane-54d7f6b6d6-5zs8p\" (UID: \"62dbc2a5-2b7d-4fc7-af17-38dcb97940da\") " pod="calico-system/goldmane-54d7f6b6d6-5zs8p"
Mar 2 13:02:39.195325 kubelet[2583]: I0302 13:02:39.194547 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv8vm\" (UniqueName: \"kubernetes.io/projected/10ddd28d-60cc-4ef0-9da9-c597c406cf38-kube-api-access-tv8vm\") pod \"calico-kube-controllers-7f6b574974-gkgnz\" (UID: \"10ddd28d-60cc-4ef0-9da9-c597c406cf38\") " pod="calico-system/calico-kube-controllers-7f6b574974-gkgnz"
Mar 2 13:02:39.199859 kubelet[2583]: I0302 13:02:39.197674 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srv9w\" (UniqueName: \"kubernetes.io/projected/949cbf83-187d-4070-b9b2-980988cef53d-kube-api-access-srv9w\") pod \"calico-apiserver-866b8f6974-h9zgx\" (UID: \"949cbf83-187d-4070-b9b2-980988cef53d\") " pod="calico-system/calico-apiserver-866b8f6974-h9zgx"
Mar 2 13:02:39.199859 kubelet[2583]: I0302 13:02:39.197718 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62dbc2a5-2b7d-4fc7-af17-38dcb97940da-config\") pod \"goldmane-54d7f6b6d6-5zs8p\" (UID: \"62dbc2a5-2b7d-4fc7-af17-38dcb97940da\") " pod="calico-system/goldmane-54d7f6b6d6-5zs8p"
Mar 2 13:02:39.199859 kubelet[2583]: I0302 13:02:39.197825 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3499dad4-7c22-491a-9867-c5cb91b67e07-nginx-config\") pod \"whisker-7b789df455-9t8cg\" (UID: \"3499dad4-7c22-491a-9867-c5cb91b67e07\") " pod="calico-system/whisker-7b789df455-9t8cg"
Mar 2 13:02:39.199859 kubelet[2583]: I0302 13:02:39.197850 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3499dad4-7c22-491a-9867-c5cb91b67e07-whisker-backend-key-pair\") pod \"whisker-7b789df455-9t8cg\" (UID: \"3499dad4-7c22-491a-9867-c5cb91b67e07\") " pod="calico-system/whisker-7b789df455-9t8cg"
Mar 2 13:02:39.199859 kubelet[2583]: I0302 13:02:39.197875 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3499dad4-7c22-491a-9867-c5cb91b67e07-whisker-ca-bundle\") pod \"whisker-7b789df455-9t8cg\" (UID: \"3499dad4-7c22-491a-9867-c5cb91b67e07\") " pod="calico-system/whisker-7b789df455-9t8cg"
Mar 2 13:02:39.198136 systemd[1]: Created slice kubepods-besteffort-pod949cbf83_187d_4070_b9b2_980988cef53d.slice - libcontainer container kubepods-besteffort-pod949cbf83_187d_4070_b9b2_980988cef53d.slice.
Mar 2 13:02:39.200281 kubelet[2583]: I0302 13:02:39.197919 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d871cd79-c660-4da5-b55c-eb8af06ee43e-config-volume\") pod \"coredns-66bc5c9577-wtgch\" (UID: \"d871cd79-c660-4da5-b55c-eb8af06ee43e\") " pod="kube-system/coredns-66bc5c9577-wtgch"
Mar 2 13:02:39.200281 kubelet[2583]: I0302 13:02:39.198006 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzj2b\" (UniqueName: \"kubernetes.io/projected/fcbd4ee9-3b15-40c9-804b-4103bd692429-kube-api-access-pzj2b\") pod \"coredns-66bc5c9577-wk98t\" (UID: \"fcbd4ee9-3b15-40c9-804b-4103bd692429\") " pod="kube-system/coredns-66bc5c9577-wk98t"
Mar 2 13:02:39.200281 kubelet[2583]: I0302 13:02:39.198023 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npc4b\" (UniqueName: \"kubernetes.io/projected/ec9dc82b-9518-4a49-8110-199c802cf620-kube-api-access-npc4b\") pod \"calico-apiserver-866b8f6974-dbn4b\" (UID: \"ec9dc82b-9518-4a49-8110-199c802cf620\") " pod="calico-system/calico-apiserver-866b8f6974-dbn4b"
Mar 2 13:02:39.200281 kubelet[2583]: I0302 13:02:39.198072 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vx5x\" (UniqueName: \"kubernetes.io/projected/3499dad4-7c22-491a-9867-c5cb91b67e07-kube-api-access-7vx5x\") pod \"whisker-7b789df455-9t8cg\" (UID: \"3499dad4-7c22-491a-9867-c5cb91b67e07\") " pod="calico-system/whisker-7b789df455-9t8cg"
Mar 2 13:02:39.200281 kubelet[2583]: I0302 13:02:39.198107 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52mwz\" (UniqueName: \"kubernetes.io/projected/d871cd79-c660-4da5-b55c-eb8af06ee43e-kube-api-access-52mwz\") pod \"coredns-66bc5c9577-wtgch\" (UID: \"d871cd79-c660-4da5-b55c-eb8af06ee43e\") " pod="kube-system/coredns-66bc5c9577-wtgch"
Mar 2 13:02:39.200499 kubelet[2583]: I0302 13:02:39.198121 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10ddd28d-60cc-4ef0-9da9-c597c406cf38-tigera-ca-bundle\") pod \"calico-kube-controllers-7f6b574974-gkgnz\" (UID: \"10ddd28d-60cc-4ef0-9da9-c597c406cf38\") " pod="calico-system/calico-kube-controllers-7f6b574974-gkgnz"
Mar 2 13:02:39.200499 kubelet[2583]: I0302 13:02:39.198193 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/949cbf83-187d-4070-b9b2-980988cef53d-calico-apiserver-certs\") pod \"calico-apiserver-866b8f6974-h9zgx\" (UID: \"949cbf83-187d-4070-b9b2-980988cef53d\") " pod="calico-system/calico-apiserver-866b8f6974-h9zgx"
Mar 2 13:02:39.207438 systemd[1]: Created slice kubepods-besteffort-pod62dbc2a5_2b7d_4fc7_af17_38dcb97940da.slice - libcontainer container kubepods-besteffort-pod62dbc2a5_2b7d_4fc7_af17_38dcb97940da.slice.
Mar 2 13:02:39.221293 systemd[1]: Created slice kubepods-besteffort-podec9dc82b_9518_4a49_8110_199c802cf620.slice - libcontainer container kubepods-besteffort-podec9dc82b_9518_4a49_8110_199c802cf620.slice.
Mar 2 13:02:39.446334 kubelet[2583]: E0302 13:02:39.446201 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:39.447327 containerd[1464]: time="2026-03-02T13:02:39.447077397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wtgch,Uid:d871cd79-c660-4da5-b55c-eb8af06ee43e,Namespace:kube-system,Attempt:0,}"
Mar 2 13:02:39.461067 kubelet[2583]: E0302 13:02:39.461012 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:39.462068 containerd[1464]: time="2026-03-02T13:02:39.461612107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wk98t,Uid:fcbd4ee9-3b15-40c9-804b-4103bd692429,Namespace:kube-system,Attempt:0,}"
Mar 2 13:02:39.477123 containerd[1464]: time="2026-03-02T13:02:39.476900098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b789df455-9t8cg,Uid:3499dad4-7c22-491a-9867-c5cb91b67e07,Namespace:calico-system,Attempt:0,}"
Mar 2 13:02:39.493050 containerd[1464]: time="2026-03-02T13:02:39.492719863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f6b574974-gkgnz,Uid:10ddd28d-60cc-4ef0-9da9-c597c406cf38,Namespace:calico-system,Attempt:0,}"
Mar 2 13:02:39.521669 containerd[1464]: time="2026-03-02T13:02:39.521608351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866b8f6974-h9zgx,Uid:949cbf83-187d-4070-b9b2-980988cef53d,Namespace:calico-system,Attempt:0,}"
Mar 2 13:02:39.525537 containerd[1464]: time="2026-03-02T13:02:39.525167303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d7f6b6d6-5zs8p,Uid:62dbc2a5-2b7d-4fc7-af17-38dcb97940da,Namespace:calico-system,Attempt:0,}"
Mar 2 13:02:39.528418 containerd[1464]: time="2026-03-02T13:02:39.528286574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866b8f6974-dbn4b,Uid:ec9dc82b-9518-4a49-8110-199c802cf620,Namespace:calico-system,Attempt:0,}"
Mar 2 13:02:39.673169 containerd[1464]: time="2026-03-02T13:02:39.673128062Z" level=info msg="CreateContainer within sandbox \"13220b63355f9f96ac9a4975e28a1e0a7d5365a53cb89f4b4e4aa0a164b984c1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
containerd[1464]: time="2026-03-02T13:02:39.703647027Z" level=error msg="Failed to destroy network for sandbox \"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.706874 containerd[1464]: time="2026-03-02T13:02:39.706831701Z" level=error msg="encountered an error cleaning up failed sandbox \"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.707245 containerd[1464]: time="2026-03-02T13:02:39.707170512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wk98t,Uid:fcbd4ee9-3b15-40c9-804b-4103bd692429,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.727437 kubelet[2583]: E0302 13:02:39.727289 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.727630 kubelet[2583]: E0302 13:02:39.727453 2583 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wk98t" Mar 2 13:02:39.727630 kubelet[2583]: E0302 13:02:39.727494 2583 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wk98t" Mar 2 13:02:39.727709 kubelet[2583]: E0302 13:02:39.727650 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wk98t_kube-system(fcbd4ee9-3b15-40c9-804b-4103bd692429)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wk98t_kube-system(fcbd4ee9-3b15-40c9-804b-4103bd692429)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wk98t" 
podUID="fcbd4ee9-3b15-40c9-804b-4103bd692429" Mar 2 13:02:39.741307 containerd[1464]: time="2026-03-02T13:02:39.741260139Z" level=error msg="Failed to destroy network for sandbox \"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.743698 containerd[1464]: time="2026-03-02T13:02:39.743550803Z" level=error msg="encountered an error cleaning up failed sandbox \"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.743778 containerd[1464]: time="2026-03-02T13:02:39.743732392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wtgch,Uid:d871cd79-c660-4da5-b55c-eb8af06ee43e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.744421 kubelet[2583]: E0302 13:02:39.744350 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.744519 kubelet[2583]: E0302 13:02:39.744437 2583 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wtgch" Mar 2 13:02:39.744519 kubelet[2583]: E0302 13:02:39.744465 2583 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wtgch" Mar 2 13:02:39.744813 kubelet[2583]: E0302 13:02:39.744542 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wtgch_kube-system(d871cd79-c660-4da5-b55c-eb8af06ee43e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wtgch_kube-system(d871cd79-c660-4da5-b55c-eb8af06ee43e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-66bc5c9577-wtgch" podUID="d871cd79-c660-4da5-b55c-eb8af06ee43e" Mar 2 13:02:39.771746 containerd[1464]: time="2026-03-02T13:02:39.771669787Z" level=info msg="CreateContainer within sandbox \"13220b63355f9f96ac9a4975e28a1e0a7d5365a53cb89f4b4e4aa0a164b984c1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"91ef8776e339a50fd85421bfc119e1948446873b9b60295a117ca31c5084c652\"" Mar 2 13:02:39.773400 containerd[1464]: time="2026-03-02T13:02:39.773256262Z" level=info msg="StartContainer for \"91ef8776e339a50fd85421bfc119e1948446873b9b60295a117ca31c5084c652\"" Mar 2 13:02:39.778777 containerd[1464]: time="2026-03-02T13:02:39.778727909Z" level=error msg="Failed to destroy network for sandbox \"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.779297 containerd[1464]: time="2026-03-02T13:02:39.779238781Z" level=error msg="encountered an error cleaning up failed sandbox \"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.779349 containerd[1464]: time="2026-03-02T13:02:39.779313590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866b8f6974-h9zgx,Uid:949cbf83-187d-4070-b9b2-980988cef53d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.779753 kubelet[2583]: E0302 13:02:39.779683 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.779814 kubelet[2583]: E0302 13:02:39.779770 2583 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-866b8f6974-h9zgx" Mar 2 13:02:39.779814 kubelet[2583]: E0302 13:02:39.779797 2583 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-866b8f6974-h9zgx" Mar 2 13:02:39.779979 kubelet[2583]: E0302 13:02:39.779880 2583 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"calico-apiserver-866b8f6974-h9zgx_calico-system(949cbf83-187d-4070-b9b2-980988cef53d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-866b8f6974-h9zgx_calico-system(949cbf83-187d-4070-b9b2-980988cef53d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-866b8f6974-h9zgx" podUID="949cbf83-187d-4070-b9b2-980988cef53d" Mar 2 13:02:39.781406 containerd[1464]: time="2026-03-02T13:02:39.781369142Z" level=error msg="Failed to destroy network for sandbox \"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.782020 containerd[1464]: time="2026-03-02T13:02:39.781987354Z" level=error msg="encountered an error cleaning up failed sandbox \"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.782143 containerd[1464]: time="2026-03-02T13:02:39.782118368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b789df455-9t8cg,Uid:3499dad4-7c22-491a-9867-c5cb91b67e07,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.782624 kubelet[2583]: E0302 13:02:39.782538 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.782748 kubelet[2583]: E0302 13:02:39.782729 2583 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b789df455-9t8cg" Mar 2 13:02:39.782911 kubelet[2583]: E0302 13:02:39.782890 2583 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b789df455-9t8cg" Mar 2 13:02:39.783175 kubelet[2583]: 
E0302 13:02:39.783144 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b789df455-9t8cg_calico-system(3499dad4-7c22-491a-9867-c5cb91b67e07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b789df455-9t8cg_calico-system(3499dad4-7c22-491a-9867-c5cb91b67e07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b789df455-9t8cg" podUID="3499dad4-7c22-491a-9867-c5cb91b67e07" Mar 2 13:02:39.800650 containerd[1464]: time="2026-03-02T13:02:39.800505293Z" level=error msg="Failed to destroy network for sandbox \"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.801158 containerd[1464]: time="2026-03-02T13:02:39.801077138Z" level=error msg="Failed to destroy network for sandbox \"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.801158 containerd[1464]: time="2026-03-02T13:02:39.801106773Z" level=error msg="encountered an error cleaning up failed sandbox \"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.801287 containerd[1464]: time="2026-03-02T13:02:39.801162117Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d7f6b6d6-5zs8p,Uid:62dbc2a5-2b7d-4fc7-af17-38dcb97940da,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.801500 kubelet[2583]: E0302 13:02:39.801396 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.801500 kubelet[2583]: E0302 13:02:39.801474 2583 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d7f6b6d6-5zs8p" Mar 2 13:02:39.801500 kubelet[2583]: E0302 13:02:39.801499 2583 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d7f6b6d6-5zs8p" Mar 2 13:02:39.801733 kubelet[2583]: E0302 13:02:39.801553 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d7f6b6d6-5zs8p_calico-system(62dbc2a5-2b7d-4fc7-af17-38dcb97940da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d7f6b6d6-5zs8p_calico-system(62dbc2a5-2b7d-4fc7-af17-38dcb97940da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d7f6b6d6-5zs8p" podUID="62dbc2a5-2b7d-4fc7-af17-38dcb97940da" Mar 2 13:02:39.803776 containerd[1464]: time="2026-03-02T13:02:39.803690207Z" level=error msg="encountered an error cleaning up failed sandbox \"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.803849 containerd[1464]: time="2026-03-02T13:02:39.803780095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866b8f6974-dbn4b,Uid:ec9dc82b-9518-4a49-8110-199c802cf620,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.804106 kubelet[2583]: E0302 13:02:39.804039 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.804158 kubelet[2583]: E0302 13:02:39.804110 2583 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-866b8f6974-dbn4b" Mar 2 13:02:39.804158 kubelet[2583]: E0302 13:02:39.804134 2583 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-866b8f6974-dbn4b" Mar 2 13:02:39.804262 kubelet[2583]: E0302 13:02:39.804210 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-866b8f6974-dbn4b_calico-system(ec9dc82b-9518-4a49-8110-199c802cf620)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-866b8f6974-dbn4b_calico-system(ec9dc82b-9518-4a49-8110-199c802cf620)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-866b8f6974-dbn4b" podUID="ec9dc82b-9518-4a49-8110-199c802cf620" Mar 2 13:02:39.808628 containerd[1464]: time="2026-03-02T13:02:39.808504838Z" level=error msg="Failed to destroy network for sandbox \"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.809218 containerd[1464]: time="2026-03-02T13:02:39.809124843Z" level=error msg="encountered an error cleaning up failed sandbox \"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.809218 containerd[1464]: time="2026-03-02T13:02:39.809181288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f6b574974-gkgnz,Uid:10ddd28d-60cc-4ef0-9da9-c597c406cf38,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.809757 kubelet[2583]: E0302 13:02:39.809647 2583 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:02:39.809757 kubelet[2583]: E0302 13:02:39.809716 2583 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f6b574974-gkgnz" Mar 2 13:02:39.809757 kubelet[2583]: E0302 13:02:39.809739 2583 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f6b574974-gkgnz" Mar 2 13:02:39.809926 kubelet[2583]: E0302 13:02:39.809787 2583 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f6b574974-gkgnz_calico-system(10ddd28d-60cc-4ef0-9da9-c597c406cf38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f6b574974-gkgnz_calico-system(10ddd28d-60cc-4ef0-9da9-c597c406cf38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f6b574974-gkgnz" podUID="10ddd28d-60cc-4ef0-9da9-c597c406cf38" Mar 2 13:02:39.843840 systemd[1]: Started cri-containerd-91ef8776e339a50fd85421bfc119e1948446873b9b60295a117ca31c5084c652.scope - libcontainer container 91ef8776e339a50fd85421bfc119e1948446873b9b60295a117ca31c5084c652. Mar 2 13:02:39.896507 containerd[1464]: time="2026-03-02T13:02:39.896402209Z" level=info msg="StartContainer for \"91ef8776e339a50fd85421bfc119e1948446873b9b60295a117ca31c5084c652\" returns successfully" Mar 2 13:02:40.347913 systemd[1]: Created slice kubepods-besteffort-podc416f0d5_dcd3_417c_ab45_85ca0752b4e9.slice - libcontainer container kubepods-besteffort-podc416f0d5_dcd3_417c_ab45_85ca0752b4e9.slice. 
Mar 2 13:02:40.359329 containerd[1464]: time="2026-03-02T13:02:40.359237215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j7c7n,Uid:c416f0d5-dcd3-417c-ab45-85ca0752b4e9,Namespace:calico-system,Attempt:0,}" Mar 2 13:02:40.529290 systemd-networkd[1389]: calieb737911770: Link UP Mar 2 13:02:40.529733 systemd-networkd[1389]: calieb737911770: Gained carrier Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.401 [ERROR][3748] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.432 [INFO][3748] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--j7c7n-eth0 csi-node-driver- calico-system c416f0d5-dcd3-417c-ab45-85ca0752b4e9 778 0 2026-03-02 13:02:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6db5596769 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-j7c7n eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calieb737911770 [] [] }} ContainerID="759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" Namespace="calico-system" Pod="csi-node-driver-j7c7n" WorkloadEndpoint="localhost-k8s-csi--node--driver--j7c7n-" Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.432 [INFO][3748] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" Namespace="calico-system" Pod="csi-node-driver-j7c7n" WorkloadEndpoint="localhost-k8s-csi--node--driver--j7c7n-eth0" Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.461 [INFO][3773] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" HandleID="k8s-pod-network.759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" Workload="localhost-k8s-csi--node--driver--j7c7n-eth0" Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.480 [INFO][3773] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" HandleID="k8s-pod-network.759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" Workload="localhost-k8s-csi--node--driver--j7c7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fb870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-j7c7n", "timestamp":"2026-03-02 13:02:40.461124824 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00047f1e0)} Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.480 [INFO][3773] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.480 [INFO][3773] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.480 [INFO][3773] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.484 [INFO][3773] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" host="localhost" Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.489 [INFO][3773] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.494 [INFO][3773] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.495 [INFO][3773] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.498 [INFO][3773] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.498 [INFO][3773] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" host="localhost" Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.499 [INFO][3773] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71 Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.504 [INFO][3773] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" host="localhost" Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.509 [INFO][3773] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" host="localhost" Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.509 [INFO][3773] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" host="localhost" Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.509 [INFO][3773] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 13:02:40.548832 containerd[1464]: 2026-03-02 13:02:40.509 [INFO][3773] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" HandleID="k8s-pod-network.759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" Workload="localhost-k8s-csi--node--driver--j7c7n-eth0" Mar 2 13:02:40.549484 containerd[1464]: 2026-03-02 13:02:40.514 [INFO][3748] cni-plugin/k8s.go 418: Populated endpoint ContainerID="759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" Namespace="calico-system" Pod="csi-node-driver-j7c7n" WorkloadEndpoint="localhost-k8s-csi--node--driver--j7c7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--j7c7n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c416f0d5-dcd3-417c-ab45-85ca0752b4e9", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6db5596769", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-j7c7n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieb737911770", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:40.549484 containerd[1464]: 2026-03-02 13:02:40.515 [INFO][3748] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" Namespace="calico-system" Pod="csi-node-driver-j7c7n" WorkloadEndpoint="localhost-k8s-csi--node--driver--j7c7n-eth0" Mar 2 13:02:40.549484 containerd[1464]: 2026-03-02 13:02:40.515 [INFO][3748] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb737911770 ContainerID="759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" Namespace="calico-system" Pod="csi-node-driver-j7c7n" WorkloadEndpoint="localhost-k8s-csi--node--driver--j7c7n-eth0" Mar 2 13:02:40.549484 containerd[1464]: 2026-03-02 13:02:40.531 [INFO][3748] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" Namespace="calico-system" Pod="csi-node-driver-j7c7n" WorkloadEndpoint="localhost-k8s-csi--node--driver--j7c7n-eth0" Mar 2 13:02:40.549484 containerd[1464]: 2026-03-02 13:02:40.533 [INFO][3748] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" Namespace="calico-system" Pod="csi-node-driver-j7c7n" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--j7c7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--j7c7n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c416f0d5-dcd3-417c-ab45-85ca0752b4e9", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6db5596769", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71", Pod:"csi-node-driver-j7c7n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieb737911770", MAC:"0e:64:79:17:b2:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:40.549484 containerd[1464]: 2026-03-02 13:02:40.543 [INFO][3748] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71" Namespace="calico-system" Pod="csi-node-driver-j7c7n" WorkloadEndpoint="localhost-k8s-csi--node--driver--j7c7n-eth0" Mar 2 13:02:40.574695 containerd[1464]: time="2026-03-02T13:02:40.574434366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:02:40.576020 containerd[1464]: time="2026-03-02T13:02:40.575677777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:02:40.576020 containerd[1464]: time="2026-03-02T13:02:40.575709105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:40.576020 containerd[1464]: time="2026-03-02T13:02:40.575846452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:40.611558 systemd[1]: Started cri-containerd-759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71.scope - libcontainer container 759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71. 
Mar 2 13:02:40.636198 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:02:40.649925 containerd[1464]: time="2026-03-02T13:02:40.649855723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j7c7n,Uid:c416f0d5-dcd3-417c-ab45-85ca0752b4e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71\"" Mar 2 13:02:40.651866 containerd[1464]: time="2026-03-02T13:02:40.651374281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\"" Mar 2 13:02:40.657110 kubelet[2583]: I0302 13:02:40.657055 2583 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Mar 2 13:02:40.659174 kubelet[2583]: I0302 13:02:40.659137 2583 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Mar 2 13:02:40.663347 containerd[1464]: time="2026-03-02T13:02:40.662833747Z" level=info msg="StopPodSandbox for \"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\"" Mar 2 13:02:40.663347 containerd[1464]: time="2026-03-02T13:02:40.662912541Z" level=info msg="StopPodSandbox for \"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\"" Mar 2 13:02:40.664199 containerd[1464]: time="2026-03-02T13:02:40.664140960Z" level=info msg="Ensure that sandbox 8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c in task-service has been cleanup successfully" Mar 2 13:02:40.664482 containerd[1464]: time="2026-03-02T13:02:40.664416175Z" level=info msg="Ensure that sandbox f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7 in task-service has been cleanup successfully" Mar 2 13:02:40.667863 kubelet[2583]: I0302 13:02:40.667842 2583 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Mar 2 13:02:40.669208 containerd[1464]: time="2026-03-02T13:02:40.668977918Z" level=info msg="StopPodSandbox for \"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\"" Mar 2 13:02:40.669475 containerd[1464]: time="2026-03-02T13:02:40.669365630Z" level=info msg="Ensure that sandbox 5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645 in task-service has been cleanup successfully" Mar 2 13:02:40.670800 kubelet[2583]: I0302 13:02:40.670692 2583 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Mar 2 13:02:40.671489 containerd[1464]: time="2026-03-02T13:02:40.671466490Z" level=info msg="StopPodSandbox for \"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\"" Mar 2 13:02:40.675435 containerd[1464]: time="2026-03-02T13:02:40.675190028Z" level=info msg="Ensure that sandbox c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7 in task-service has been cleanup successfully" Mar 2 13:02:40.680927 kubelet[2583]: I0302 13:02:40.680903 2583 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Mar 2 13:02:40.684732 containerd[1464]: time="2026-03-02T13:02:40.682688942Z" level=info msg="StopPodSandbox for \"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\"" Mar 2 13:02:40.684732 containerd[1464]: 
time="2026-03-02T13:02:40.682834092Z" level=info msg="Ensure that sandbox ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189 in task-service has been cleanup successfully" Mar 2 13:02:40.687168 kubelet[2583]: I0302 13:02:40.686808 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2zj94" podStartSLOduration=3.7286367990000002 podStartE2EDuration="18.686788394s" podCreationTimestamp="2026-03-02 13:02:22 +0000 UTC" firstStartedPulling="2026-03-02 13:02:22.567743462 +0000 UTC m=+27.447924989" lastFinishedPulling="2026-03-02 13:02:37.525895057 +0000 UTC m=+42.406076584" observedRunningTime="2026-03-02 13:02:40.68609899 +0000 UTC m=+45.566280557" watchObservedRunningTime="2026-03-02 13:02:40.686788394 +0000 UTC m=+45.566969921" Mar 2 13:02:40.695414 kubelet[2583]: I0302 13:02:40.695320 2583 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Mar 2 13:02:40.697296 containerd[1464]: time="2026-03-02T13:02:40.697079305Z" level=info msg="StopPodSandbox for \"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\"" Mar 2 13:02:40.698471 containerd[1464]: time="2026-03-02T13:02:40.698382975Z" level=info msg="Ensure that sandbox 928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08 in task-service has been cleanup successfully" Mar 2 13:02:40.706680 kubelet[2583]: I0302 13:02:40.706508 2583 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Mar 2 13:02:40.710237 containerd[1464]: time="2026-03-02T13:02:40.709867064Z" level=info msg="StopPodSandbox for \"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\"" Mar 2 13:02:40.710237 containerd[1464]: time="2026-03-02T13:02:40.710084338Z" level=info msg="Ensure that sandbox b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634 in task-service has been cleanup successfully" Mar 2 13:02:40.951073 containerd[1464]: 2026-03-02 13:02:40.870 [INFO][3962] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Mar 2 13:02:40.951073 containerd[1464]: 2026-03-02 13:02:40.871 [INFO][3962] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" iface="eth0" netns="/var/run/netns/cni-8ecc30dd-ade2-f6a9-2659-8ca966b9ee23" Mar 2 13:02:40.951073 containerd[1464]: 2026-03-02 13:02:40.873 [INFO][3962] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" iface="eth0" netns="/var/run/netns/cni-8ecc30dd-ade2-f6a9-2659-8ca966b9ee23" Mar 2 13:02:40.951073 containerd[1464]: 2026-03-02 13:02:40.882 [INFO][3962] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" iface="eth0" netns="/var/run/netns/cni-8ecc30dd-ade2-f6a9-2659-8ca966b9ee23" Mar 2 13:02:40.951073 containerd[1464]: 2026-03-02 13:02:40.882 [INFO][3962] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Mar 2 13:02:40.951073 containerd[1464]: 2026-03-02 13:02:40.882 [INFO][3962] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Mar 2 13:02:40.951073 containerd[1464]: 2026-03-02 13:02:40.920 [INFO][3992] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" HandleID="k8s-pod-network.b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Workload="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" Mar 2 13:02:40.951073 containerd[1464]: 2026-03-02 13:02:40.921 [INFO][3992] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:40.951073 containerd[1464]: 2026-03-02 13:02:40.921 [INFO][3992] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:40.951073 containerd[1464]: 2026-03-02 13:02:40.934 [WARNING][3992] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" HandleID="k8s-pod-network.b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Workload="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" Mar 2 13:02:40.951073 containerd[1464]: 2026-03-02 13:02:40.934 [INFO][3992] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" HandleID="k8s-pod-network.b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Workload="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" Mar 2 13:02:40.951073 containerd[1464]: 2026-03-02 13:02:40.942 [INFO][3992] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:40.951073 containerd[1464]: 2026-03-02 13:02:40.946 [INFO][3962] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Mar 2 13:02:40.953954 containerd[1464]: time="2026-03-02T13:02:40.953826846Z" level=info msg="TearDown network for sandbox \"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\" successfully" Mar 2 13:02:40.953954 containerd[1464]: time="2026-03-02T13:02:40.953859416Z" level=info msg="StopPodSandbox for \"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\" returns successfully" Mar 2 13:02:40.957922 kubelet[2583]: E0302 13:02:40.957840 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:02:40.960195 containerd[1464]: time="2026-03-02T13:02:40.960154253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wk98t,Uid:fcbd4ee9-3b15-40c9-804b-4103bd692429,Namespace:kube-system,Attempt:1,}" Mar 2 13:02:40.986244 containerd[1464]: 2026-03-02 13:02:40.812 [INFO][3881] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Mar 2 13:02:40.986244 containerd[1464]: 2026-03-02 13:02:40.812 [INFO][3881] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" iface="eth0" netns="/var/run/netns/cni-9729dec9-0798-f9a3-d79d-9ab5f37e0345" Mar 2 13:02:40.986244 containerd[1464]: 2026-03-02 13:02:40.814 [INFO][3881] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" iface="eth0" netns="/var/run/netns/cni-9729dec9-0798-f9a3-d79d-9ab5f37e0345" Mar 2 13:02:40.986244 containerd[1464]: 2026-03-02 13:02:40.817 [INFO][3881] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" iface="eth0" netns="/var/run/netns/cni-9729dec9-0798-f9a3-d79d-9ab5f37e0345" Mar 2 13:02:40.986244 containerd[1464]: 2026-03-02 13:02:40.817 [INFO][3881] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Mar 2 13:02:40.986244 containerd[1464]: 2026-03-02 13:02:40.817 [INFO][3881] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Mar 2 13:02:40.986244 containerd[1464]: 2026-03-02 13:02:40.940 [INFO][3979] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" HandleID="k8s-pod-network.f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Workload="localhost-k8s-whisker--7b789df455--9t8cg-eth0" Mar 2 13:02:40.986244 containerd[1464]: 2026-03-02 13:02:40.943 [INFO][3979] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:40.986244 containerd[1464]: 2026-03-02 13:02:40.943 [INFO][3979] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:40.986244 containerd[1464]: 2026-03-02 13:02:40.964 [WARNING][3979] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" HandleID="k8s-pod-network.f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Workload="localhost-k8s-whisker--7b789df455--9t8cg-eth0" Mar 2 13:02:40.986244 containerd[1464]: 2026-03-02 13:02:40.964 [INFO][3979] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" HandleID="k8s-pod-network.f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Workload="localhost-k8s-whisker--7b789df455--9t8cg-eth0" Mar 2 13:02:40.986244 containerd[1464]: 2026-03-02 13:02:40.971 [INFO][3979] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:40.986244 containerd[1464]: 2026-03-02 13:02:40.977 [INFO][3881] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Mar 2 13:02:40.991968 containerd[1464]: time="2026-03-02T13:02:40.991888358Z" level=info msg="TearDown network for sandbox \"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\" successfully" Mar 2 13:02:40.992308 containerd[1464]: time="2026-03-02T13:02:40.992178989Z" level=info msg="StopPodSandbox for \"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\" returns successfully" Mar 2 13:02:41.012922 kubelet[2583]: I0302 13:02:41.011200 2583 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:02:41.012922 kubelet[2583]: E0302 13:02:41.011908 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:02:41.037736 systemd[1]: run-containerd-runc-k8s.io-91ef8776e339a50fd85421bfc119e1948446873b9b60295a117ca31c5084c652-runc.ZeSoIU.mount: Deactivated successfully. Mar 2 13:02:41.039554 systemd[1]: run-netns-cni\x2d9729dec9\x2d0798\x2df9a3\x2dd79d\x2d9ab5f37e0345.mount: Deactivated successfully. Mar 2 13:02:41.040157 systemd[1]: run-netns-cni\x2d8ecc30dd\x2dade2\x2df6a9\x2d2659\x2d8ca966b9ee23.mount: Deactivated successfully. Mar 2 13:02:41.067966 containerd[1464]: 2026-03-02 13:02:40.907 [INFO][3885] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Mar 2 13:02:41.067966 containerd[1464]: 2026-03-02 13:02:40.908 [INFO][3885] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" iface="eth0" netns="/var/run/netns/cni-93d858f0-3523-4ee6-61da-c3bac25d2fd2" Mar 2 13:02:41.067966 containerd[1464]: 2026-03-02 13:02:40.908 [INFO][3885] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" iface="eth0" netns="/var/run/netns/cni-93d858f0-3523-4ee6-61da-c3bac25d2fd2" Mar 2 13:02:41.067966 containerd[1464]: 2026-03-02 13:02:40.909 [INFO][3885] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" iface="eth0" netns="/var/run/netns/cni-93d858f0-3523-4ee6-61da-c3bac25d2fd2" Mar 2 13:02:41.067966 containerd[1464]: 2026-03-02 13:02:40.914 [INFO][3885] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Mar 2 13:02:41.067966 containerd[1464]: 2026-03-02 13:02:40.914 [INFO][3885] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Mar 2 13:02:41.067966 containerd[1464]: 2026-03-02 13:02:40.996 [INFO][4000] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" HandleID="k8s-pod-network.8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Workload="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" Mar 2 13:02:41.067966 containerd[1464]: 2026-03-02 13:02:41.001 [INFO][4000] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:41.067966 containerd[1464]: 2026-03-02 13:02:41.001 [INFO][4000] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:02:41.067966 containerd[1464]: 2026-03-02 13:02:41.048 [WARNING][4000] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" HandleID="k8s-pod-network.8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Workload="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" Mar 2 13:02:41.067966 containerd[1464]: 2026-03-02 13:02:41.048 [INFO][4000] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" HandleID="k8s-pod-network.8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Workload="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" Mar 2 13:02:41.067966 containerd[1464]: 2026-03-02 13:02:41.052 [INFO][4000] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:41.067966 containerd[1464]: 2026-03-02 13:02:41.062 [INFO][3885] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Mar 2 13:02:41.071977 containerd[1464]: time="2026-03-02T13:02:41.070760407Z" level=info msg="TearDown network for sandbox \"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\" successfully" Mar 2 13:02:41.071977 containerd[1464]: time="2026-03-02T13:02:41.070802195Z" level=info msg="StopPodSandbox for \"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\" returns successfully" Mar 2 13:02:41.071188 systemd[1]: run-netns-cni\x2d93d858f0\x2d3523\x2d4ee6\x2d61da\x2dc3bac25d2fd2.mount: Deactivated successfully. Mar 2 13:02:41.111239 containerd[1464]: time="2026-03-02T13:02:41.110990894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d7f6b6d6-5zs8p,Uid:62dbc2a5-2b7d-4fc7-af17-38dcb97940da,Namespace:calico-system,Attempt:1,}" Mar 2 13:02:41.124791 kubelet[2583]: I0302 13:02:41.124716 2583 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3499dad4-7c22-491a-9867-c5cb91b67e07-nginx-config\") pod \"3499dad4-7c22-491a-9867-c5cb91b67e07\" (UID: \"3499dad4-7c22-491a-9867-c5cb91b67e07\") " Mar 2 13:02:41.124791 kubelet[2583]: I0302 13:02:41.124800 2583 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3499dad4-7c22-491a-9867-c5cb91b67e07-whisker-backend-key-pair\") pod \"3499dad4-7c22-491a-9867-c5cb91b67e07\" (UID: \"3499dad4-7c22-491a-9867-c5cb91b67e07\") " Mar 2 13:02:41.124791 kubelet[2583]: I0302 13:02:41.124822 2583 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vx5x\" (UniqueName: \"kubernetes.io/projected/3499dad4-7c22-491a-9867-c5cb91b67e07-kube-api-access-7vx5x\") pod \"3499dad4-7c22-491a-9867-c5cb91b67e07\" (UID: \"3499dad4-7c22-491a-9867-c5cb91b67e07\") " Mar 2 13:02:41.124791 kubelet[2583]: I0302 13:02:41.124842 2583 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3499dad4-7c22-491a-9867-c5cb91b67e07-whisker-ca-bundle\") pod \"3499dad4-7c22-491a-9867-c5cb91b67e07\" (UID: \"3499dad4-7c22-491a-9867-c5cb91b67e07\") " Mar 2 13:02:41.127350 kubelet[2583]: I0302 13:02:41.125704 2583 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3499dad4-7c22-491a-9867-c5cb91b67e07-nginx-config" (OuterVolumeSpecName: 
"nginx-config") pod "3499dad4-7c22-491a-9867-c5cb91b67e07" (UID: "3499dad4-7c22-491a-9867-c5cb91b67e07"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:02:41.127350 kubelet[2583]: I0302 13:02:41.126189 2583 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3499dad4-7c22-491a-9867-c5cb91b67e07-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3499dad4-7c22-491a-9867-c5cb91b67e07" (UID: "3499dad4-7c22-491a-9867-c5cb91b67e07"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:02:41.127350 kubelet[2583]: I0302 13:02:41.126859 2583 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3499dad4-7c22-491a-9867-c5cb91b67e07-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 2 13:02:41.127350 kubelet[2583]: I0302 13:02:41.126900 2583 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3499dad4-7c22-491a-9867-c5cb91b67e07-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 2 13:02:41.136764 kubelet[2583]: I0302 13:02:41.135804 2583 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3499dad4-7c22-491a-9867-c5cb91b67e07-kube-api-access-7vx5x" (OuterVolumeSpecName: "kube-api-access-7vx5x") pod "3499dad4-7c22-491a-9867-c5cb91b67e07" (UID: "3499dad4-7c22-491a-9867-c5cb91b67e07"). InnerVolumeSpecName "kube-api-access-7vx5x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:02:41.137453 systemd[1]: var-lib-kubelet-pods-3499dad4\x2d7c22\x2d491a\x2d9867\x2dc5cb91b67e07-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 2 13:02:41.139670 kubelet[2583]: I0302 13:02:41.138864 2583 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3499dad4-7c22-491a-9867-c5cb91b67e07-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3499dad4-7c22-491a-9867-c5cb91b67e07" (UID: "3499dad4-7c22-491a-9867-c5cb91b67e07"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 2 13:02:41.142674 systemd[1]: var-lib-kubelet-pods-3499dad4\x2d7c22\x2d491a\x2d9867\x2dc5cb91b67e07-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7vx5x.mount: Deactivated successfully. Mar 2 13:02:41.164773 containerd[1464]: 2026-03-02 13:02:40.937 [INFO][3945] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Mar 2 13:02:41.164773 containerd[1464]: 2026-03-02 13:02:40.941 [INFO][3945] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" iface="eth0" netns="/var/run/netns/cni-7044ecd2-a26a-daea-2571-432dab43d462" Mar 2 13:02:41.164773 containerd[1464]: 2026-03-02 13:02:40.942 [INFO][3945] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" iface="eth0" netns="/var/run/netns/cni-7044ecd2-a26a-daea-2571-432dab43d462" Mar 2 13:02:41.164773 containerd[1464]: 2026-03-02 13:02:40.944 [INFO][3945] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" iface="eth0" netns="/var/run/netns/cni-7044ecd2-a26a-daea-2571-432dab43d462" Mar 2 13:02:41.164773 containerd[1464]: 2026-03-02 13:02:40.945 [INFO][3945] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Mar 2 13:02:41.164773 containerd[1464]: 2026-03-02 13:02:40.945 [INFO][3945] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Mar 2 13:02:41.164773 containerd[1464]: 2026-03-02 13:02:41.053 [INFO][4017] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" HandleID="k8s-pod-network.928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Workload="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" Mar 2 13:02:41.164773 containerd[1464]: 2026-03-02 13:02:41.053 [INFO][4017] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:41.164773 containerd[1464]: 2026-03-02 13:02:41.053 [INFO][4017] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:41.164773 containerd[1464]: 2026-03-02 13:02:41.144 [WARNING][4017] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" HandleID="k8s-pod-network.928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Workload="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" Mar 2 13:02:41.164773 containerd[1464]: 2026-03-02 13:02:41.144 [INFO][4017] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" HandleID="k8s-pod-network.928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Workload="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" Mar 2 13:02:41.164773 containerd[1464]: 2026-03-02 13:02:41.151 [INFO][4017] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:41.164773 containerd[1464]: 2026-03-02 13:02:41.162 [INFO][3945] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Mar 2 13:02:41.165825 containerd[1464]: time="2026-03-02T13:02:41.165187021Z" level=info msg="TearDown network for sandbox \"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\" successfully" Mar 2 13:02:41.165825 containerd[1464]: time="2026-03-02T13:02:41.165633792Z" level=info msg="StopPodSandbox for \"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\" returns successfully" Mar 2 13:02:41.170919 containerd[1464]: time="2026-03-02T13:02:41.170891635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f6b574974-gkgnz,Uid:10ddd28d-60cc-4ef0-9da9-c597c406cf38,Namespace:calico-system,Attempt:1,}" Mar 2 13:02:41.211163 containerd[1464]: 2026-03-02 13:02:40.940 [INFO][3924] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Mar 2 13:02:41.211163 containerd[1464]: 2026-03-02 13:02:40.944 [INFO][3924] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" iface="eth0" netns="/var/run/netns/cni-7a1ad8ba-a548-72b9-90c3-f1a0f0332230" Mar 2 13:02:41.211163 containerd[1464]: 2026-03-02 13:02:40.946 [INFO][3924] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" iface="eth0" netns="/var/run/netns/cni-7a1ad8ba-a548-72b9-90c3-f1a0f0332230" Mar 2 13:02:41.211163 containerd[1464]: 2026-03-02 13:02:40.950 [INFO][3924] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" iface="eth0" netns="/var/run/netns/cni-7a1ad8ba-a548-72b9-90c3-f1a0f0332230" Mar 2 13:02:41.211163 containerd[1464]: 2026-03-02 13:02:40.951 [INFO][3924] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Mar 2 13:02:41.211163 containerd[1464]: 2026-03-02 13:02:40.951 [INFO][3924] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Mar 2 13:02:41.211163 containerd[1464]: 2026-03-02 13:02:41.038 [INFO][4018] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" HandleID="k8s-pod-network.5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Workload="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" Mar 2 13:02:41.211163 containerd[1464]: 2026-03-02 13:02:41.038 [INFO][4018] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:41.211163 containerd[1464]: 2026-03-02 13:02:41.150 [INFO][4018] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:41.211163 containerd[1464]: 2026-03-02 13:02:41.171 [WARNING][4018] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" HandleID="k8s-pod-network.5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Workload="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" Mar 2 13:02:41.211163 containerd[1464]: 2026-03-02 13:02:41.171 [INFO][4018] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" HandleID="k8s-pod-network.5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Workload="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" Mar 2 13:02:41.211163 containerd[1464]: 2026-03-02 13:02:41.183 [INFO][4018] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:41.211163 containerd[1464]: 2026-03-02 13:02:41.200 [INFO][3924] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Mar 2 13:02:41.214115 containerd[1464]: time="2026-03-02T13:02:41.212990823Z" level=info msg="TearDown network for sandbox \"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\" successfully" Mar 2 13:02:41.214115 containerd[1464]: time="2026-03-02T13:02:41.213023523Z" level=info msg="StopPodSandbox for \"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\" returns successfully" Mar 2 13:02:41.222669 kubelet[2583]: E0302 13:02:41.220893 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:02:41.230089 containerd[1464]: 2026-03-02 13:02:40.915 [INFO][3883] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Mar 2 13:02:41.230089 containerd[1464]: 2026-03-02 13:02:40.916 [INFO][3883] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" iface="eth0" netns="/var/run/netns/cni-d4c2b87b-6a7e-1b3b-736d-5dbc6a51c4d1" Mar 2 13:02:41.230089 containerd[1464]: 2026-03-02 13:02:40.916 [INFO][3883] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" iface="eth0" netns="/var/run/netns/cni-d4c2b87b-6a7e-1b3b-736d-5dbc6a51c4d1" Mar 2 13:02:41.230089 containerd[1464]: 2026-03-02 13:02:40.922 [INFO][3883] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" iface="eth0" netns="/var/run/netns/cni-d4c2b87b-6a7e-1b3b-736d-5dbc6a51c4d1" Mar 2 13:02:41.230089 containerd[1464]: 2026-03-02 13:02:40.922 [INFO][3883] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Mar 2 13:02:41.230089 containerd[1464]: 2026-03-02 13:02:40.922 [INFO][3883] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Mar 2 13:02:41.230089 containerd[1464]: 2026-03-02 13:02:41.044 [INFO][4005] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" HandleID="k8s-pod-network.c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Workload="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" Mar 2 13:02:41.230089 containerd[1464]: 2026-03-02 13:02:41.045 [INFO][4005] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:41.230089 containerd[1464]: 2026-03-02 13:02:41.185 [INFO][4005] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:41.230089 containerd[1464]: 2026-03-02 13:02:41.202 [WARNING][4005] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" HandleID="k8s-pod-network.c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Workload="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" Mar 2 13:02:41.230089 containerd[1464]: 2026-03-02 13:02:41.202 [INFO][4005] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" HandleID="k8s-pod-network.c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Workload="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" Mar 2 13:02:41.230089 containerd[1464]: 2026-03-02 13:02:41.208 [INFO][4005] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:41.230089 containerd[1464]: 2026-03-02 13:02:41.217 [INFO][3883] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Mar 2 13:02:41.230510 containerd[1464]: time="2026-03-02T13:02:41.230232928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wtgch,Uid:d871cd79-c660-4da5-b55c-eb8af06ee43e,Namespace:kube-system,Attempt:1,}" Mar 2 13:02:41.230537 kubelet[2583]: I0302 13:02:41.229199 2583 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3499dad4-7c22-491a-9867-c5cb91b67e07-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 2 13:02:41.230537 kubelet[2583]: I0302 13:02:41.230039 2583 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7vx5x\" (UniqueName: \"kubernetes.io/projected/3499dad4-7c22-491a-9867-c5cb91b67e07-kube-api-access-7vx5x\") on node \"localhost\" DevicePath \"\"" Mar 2 13:02:41.231783 containerd[1464]: time="2026-03-02T13:02:41.231715891Z" level=info msg="TearDown network for sandbox \"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\" successfully" Mar 2 13:02:41.231783 containerd[1464]: time="2026-03-02T13:02:41.231762859Z" level=info msg="StopPodSandbox for \"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\" returns successfully" Mar 2 13:02:41.235450 containerd[1464]: time="2026-03-02T13:02:41.235349653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866b8f6974-dbn4b,Uid:ec9dc82b-9518-4a49-8110-199c802cf620,Namespace:calico-system,Attempt:1,}" Mar 2 13:02:41.253726 containerd[1464]: 2026-03-02 13:02:40.980 [INFO][3918] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Mar 2 13:02:41.253726 containerd[1464]: 2026-03-02 13:02:40.980 [INFO][3918] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" iface="eth0" netns="/var/run/netns/cni-03688457-8673-cd81-1cb3-330d1a8535a8" Mar 2 13:02:41.253726 containerd[1464]: 2026-03-02 13:02:40.981 [INFO][3918] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" iface="eth0" netns="/var/run/netns/cni-03688457-8673-cd81-1cb3-330d1a8535a8" Mar 2 13:02:41.253726 containerd[1464]: 2026-03-02 13:02:40.981 [INFO][3918] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" iface="eth0" netns="/var/run/netns/cni-03688457-8673-cd81-1cb3-330d1a8535a8" Mar 2 13:02:41.253726 containerd[1464]: 2026-03-02 13:02:40.982 [INFO][3918] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Mar 2 13:02:41.253726 containerd[1464]: 2026-03-02 13:02:40.982 [INFO][3918] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Mar 2 13:02:41.253726 containerd[1464]: 2026-03-02 13:02:41.085 [INFO][4029] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" HandleID="k8s-pod-network.ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Workload="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" Mar 2 13:02:41.253726 containerd[1464]: 2026-03-02 13:02:41.085 [INFO][4029] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:41.253726 containerd[1464]: 2026-03-02 13:02:41.207 [INFO][4029] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:41.253726 containerd[1464]: 2026-03-02 13:02:41.237 [WARNING][4029] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" HandleID="k8s-pod-network.ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Workload="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" Mar 2 13:02:41.253726 containerd[1464]: 2026-03-02 13:02:41.238 [INFO][4029] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" HandleID="k8s-pod-network.ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Workload="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" Mar 2 13:02:41.253726 containerd[1464]: 2026-03-02 13:02:41.241 [INFO][4029] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:41.253726 containerd[1464]: 2026-03-02 13:02:41.248 [INFO][3918] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Mar 2 13:02:41.254137 containerd[1464]: time="2026-03-02T13:02:41.254054792Z" level=info msg="TearDown network for sandbox \"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\" successfully" Mar 2 13:02:41.254137 containerd[1464]: time="2026-03-02T13:02:41.254079198Z" level=info msg="StopPodSandbox for \"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\" returns successfully" Mar 2 13:02:41.257378 containerd[1464]: time="2026-03-02T13:02:41.257251555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866b8f6974-h9zgx,Uid:949cbf83-187d-4070-b9b2-980988cef53d,Namespace:calico-system,Attempt:1,}" Mar 2 13:02:41.354803 systemd[1]: Removed slice kubepods-besteffort-pod3499dad4_7c22_491a_9867_c5cb91b67e07.slice - libcontainer container kubepods-besteffort-pod3499dad4_7c22_491a_9867_c5cb91b67e07.slice. 
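
Each containerd block above traces the same Calico CNI DEL sequence for a different sandbox: clean up the netns, treat an already-deleted veth as success ("Workload's veth was already gone. Nothing to do."), then release the pod's address under the host-wide IPAM lock, where a missing allocation is likewise ignored rather than treated as an error. The bracketed indices ([3881], [3979], [4000], [4017], ...) are concurrent invocations, which is why the lock exists. A compressed, illustrative model of that idempotent ordering follows; it is a sketch, not Calico's implementation, and deleteVeth is a stub standing in for the real netlink call.

package main

import (
	"fmt"
	"sync"
)

// ipamLock stands in for Calico's host-wide IPAM lock, which serializes
// the concurrent CNI invocations visible in the log.
var ipamLock sync.Mutex

// deleteVeth is a stub for the real netlink deletion; returning false
// models "Workload's veth was already gone. Nothing to do."
func deleteVeth(netns, iface string) bool { return false }

func releaseIP(handleID string, allocated map[string]string) {
	ipamLock.Lock()
	defer ipamLock.Unlock()
	if _, ok := allocated[handleID]; !ok {
		// Mirrors the WARNING records above: releasing an address
		// that doesn't exist is ignored, so teardown can be retried.
		fmt.Println("address for", handleID, "doesn't exist; ignoring")
		return
	}
	delete(allocated, handleID)
}

func teardown(containerID, netns string, allocated map[string]string) {
	if !deleteVeth(netns, "eth0") {
		fmt.Println("veth already gone; nothing to do")
	}
	releaseIP("k8s-pod-network."+containerID, allocated)
	fmt.Println("teardown complete for", containerID)
}

func main() {
	// Abbreviated ContainerID and netns path from the records above.
	teardown("f94758a89f7d", "/var/run/netns/cni-9729dec9", map[string]string{})
}
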
Mar 2 13:02:41.356255 systemd-networkd[1389]: calic24078ab4b5: Link UP Mar 2 13:02:41.359789 systemd-networkd[1389]: calic24078ab4b5: Gained carrier Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.079 [ERROR][4035] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.155 [INFO][4035] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--wk98t-eth0 coredns-66bc5c9577- kube-system fcbd4ee9-3b15-40c9-804b-4103bd692429 960 0 2026-03-02 13:02:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-wk98t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic24078ab4b5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" Namespace="kube-system" Pod="coredns-66bc5c9577-wk98t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wk98t-" Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.155 [INFO][4035] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" Namespace="kube-system" Pod="coredns-66bc5c9577-wk98t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.245 [INFO][4073] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" HandleID="k8s-pod-network.694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" Workload="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.258 [INFO][4073] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" HandleID="k8s-pod-network.694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" Workload="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003306f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-wk98t", "timestamp":"2026-03-02 13:02:41.245807882 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00071c6e0)} Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.258 [INFO][4073] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.258 [INFO][4073] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.258 [INFO][4073] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.266 [INFO][4073] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" host="localhost" Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.279 [INFO][4073] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.289 [INFO][4073] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.293 [INFO][4073] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.296 [INFO][4073] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.296 [INFO][4073] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" host="localhost" Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.298 [INFO][4073] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8 Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.303 [INFO][4073] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" host="localhost" Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.325 [INFO][4073] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" host="localhost" Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.325 [INFO][4073] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" host="localhost" Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.325 [INFO][4073] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
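
The [4073] records above form one complete IPAM transaction: acquire the host-wide lock, confirm this host's affinity for the 192.168.88.128/26 block, load the block, claim the first free address (192.168.88.130 here, for coredns-66bc5c9577-wk98t), write the block back, and release the lock. The sketch below models only the claim step, using a plain map where Calico's real allocator keeps a per-block bitmap and handles persisted in the datastore; it is an illustration of the block-affinity idea, not Calico code.

package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the affine block and returns the first address that is
// not already allocated, the moral equivalent of "Attempting to assign
// 1 addresses from block" in the log above.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	allocated := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.128"): true, // network/tunnel address
		netip.MustParseAddr("192.168.88.129"): true, // previously claimed pod IP
	}
	if a, ok := nextFree(block, allocated); ok {
		fmt.Println("claimed", a, "from", block) // claims 192.168.88.130, matching the log
	}
}
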
Mar 2 13:02:41.388282 containerd[1464]: 2026-03-02 13:02:41.325 [INFO][4073] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" HandleID="k8s-pod-network.694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" Workload="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" Mar 2 13:02:41.390063 containerd[1464]: 2026-03-02 13:02:41.340 [INFO][4035] cni-plugin/k8s.go 418: Populated endpoint ContainerID="694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" Namespace="kube-system" Pod="coredns-66bc5c9577-wk98t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wk98t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fcbd4ee9-3b15-40c9-804b-4103bd692429", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-wk98t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic24078ab4b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:41.390063 containerd[1464]: 2026-03-02 13:02:41.343 [INFO][4035] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" Namespace="kube-system" Pod="coredns-66bc5c9577-wk98t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" Mar 2 13:02:41.390063 containerd[1464]: 2026-03-02 13:02:41.343 [INFO][4035] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic24078ab4b5 ContainerID="694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" Namespace="kube-system" Pod="coredns-66bc5c9577-wk98t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" Mar 2 13:02:41.390063 containerd[1464]: 2026-03-02 13:02:41.359
[INFO][4035] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" Namespace="kube-system" Pod="coredns-66bc5c9577-wk98t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" Mar 2 13:02:41.390063 containerd[1464]: 2026-03-02 13:02:41.362 [INFO][4035] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" Namespace="kube-system" Pod="coredns-66bc5c9577-wk98t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wk98t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fcbd4ee9-3b15-40c9-804b-4103bd692429", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8", Pod:"coredns-66bc5c9577-wk98t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic24078ab4b5", MAC:"46:e2:ef:b6:76:54", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:41.390063 containerd[1464]: 2026-03-02 13:02:41.381 [INFO][4035] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8" Namespace="kube-system" Pod="coredns-66bc5c9577-wk98t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" Mar 2 13:02:41.567532 containerd[1464]: time="2026-03-02T13:02:41.567293255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:02:41.567532 containerd[1464]: time="2026-03-02T13:02:41.567396317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:02:41.567532 containerd[1464]: time="2026-03-02T13:02:41.567408580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:41.569399 containerd[1464]: time="2026-03-02T13:02:41.569072821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:41.628064 systemd-networkd[1389]: cali96a03c6b168: Link UP Mar 2 13:02:41.630186 systemd-networkd[1389]: cali96a03c6b168: Gained carrier Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.250 [ERROR][4060] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.268 [INFO][4060] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0 goldmane-54d7f6b6d6- calico-system 62dbc2a5-2b7d-4fc7-af17-38dcb97940da 962 0 2026-03-02 13:02:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d7f6b6d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d7f6b6d6-5zs8p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali96a03c6b168 [] [] }} ContainerID="06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" Namespace="calico-system" Pod="goldmane-54d7f6b6d6-5zs8p" WorkloadEndpoint="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-" Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.270 [INFO][4060] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" Namespace="calico-system" Pod="goldmane-54d7f6b6d6-5zs8p" WorkloadEndpoint="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.371 [INFO][4099] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" HandleID="k8s-pod-network.06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" Workload="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.401 [INFO][4099] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" HandleID="k8s-pod-network.06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" Workload="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d6130), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d7f6b6d6-5zs8p", "timestamp":"2026-03-02 13:02:41.371772932 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000746000)} Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.402 [INFO][4099] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.402 [INFO][4099] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.402 [INFO][4099] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.418 [INFO][4099] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" host="localhost" Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.441 [INFO][4099] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.452 [INFO][4099] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.459 [INFO][4099] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.482 [INFO][4099] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.482 [INFO][4099] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" host="localhost" Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.507 [INFO][4099] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6 Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.548 [INFO][4099] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" host="localhost" Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.563 [INFO][4099] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" host="localhost" Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.564 [INFO][4099] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" host="localhost" Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.566 [INFO][4099] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
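
Because each CNI invocation runs concurrently, setup and teardown records for coredns, goldmane, calico-kube-controllers, and the two apiserver pods interleave freely in this journal. Grouping records by ContainerID recovers each pod's timeline. The helper below is a small stand-alone reader for doing that over the journal text on stdin; the 12-character key is just an abbreviation of the 64-hex-digit IDs above.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the ContainerID="..." field in the Calico CNI records above,
// keeping the first 12 hex digits as a short key.
var idRe = regexp.MustCompile(`ContainerID="([0-9a-f]{12})[0-9a-f]*"`)

func main() {
	byID := make(map[string][]string)
	sc := bufio.NewScanner(os.Stdin)
	// Individual journal records here run to several KB, well past the
	// scanner's default token size, so grow the buffer.
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		if m := idRe.FindStringSubmatch(line); m != nil {
			byID[m[1]] = append(byID[m[1]], line)
		}
	}
	for id, events := range byID {
		fmt.Printf("%s: %d events\n", id, len(events))
	}
}
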
Mar 2 13:02:41.695642 containerd[1464]: 2026-03-02 13:02:41.572 [INFO][4099] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" HandleID="k8s-pod-network.06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" Workload="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" Mar 2 13:02:41.697107 containerd[1464]: 2026-03-02 13:02:41.607 [INFO][4060] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" Namespace="calico-system" Pod="goldmane-54d7f6b6d6-5zs8p" WorkloadEndpoint="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0", GenerateName:"goldmane-54d7f6b6d6-", Namespace:"calico-system", SelfLink:"", UID:"62dbc2a5-2b7d-4fc7-af17-38dcb97940da", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d7f6b6d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d7f6b6d6-5zs8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96a03c6b168", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:41.697107 containerd[1464]: 2026-03-02 13:02:41.613 [INFO][4060] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" Namespace="calico-system" Pod="goldmane-54d7f6b6d6-5zs8p" WorkloadEndpoint="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" Mar 2 13:02:41.697107 containerd[1464]: 2026-03-02 13:02:41.614 [INFO][4060] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96a03c6b168 ContainerID="06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" Namespace="calico-system" Pod="goldmane-54d7f6b6d6-5zs8p" WorkloadEndpoint="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" Mar 2 13:02:41.697107 containerd[1464]: 2026-03-02 13:02:41.628 [INFO][4060] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" Namespace="calico-system" Pod="goldmane-54d7f6b6d6-5zs8p" WorkloadEndpoint="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" Mar 2 13:02:41.697107 containerd[1464]: 2026-03-02 13:02:41.629 [INFO][4060] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" Namespace="calico-system" Pod="goldmane-54d7f6b6d6-5zs8p" WorkloadEndpoint="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0", GenerateName:"goldmane-54d7f6b6d6-", Namespace:"calico-system", SelfLink:"", UID:"62dbc2a5-2b7d-4fc7-af17-38dcb97940da", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d7f6b6d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6", Pod:"goldmane-54d7f6b6d6-5zs8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96a03c6b168", MAC:"de:ef:22:43:65:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:41.697107 containerd[1464]: 2026-03-02 13:02:41.658 [INFO][4060] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6" Namespace="calico-system" Pod="goldmane-54d7f6b6d6-5zs8p" WorkloadEndpoint="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" Mar 2 13:02:41.709752 systemd[1]: Started cri-containerd-694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8.scope - libcontainer container 694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8. Mar 2 13:02:41.729184 kubelet[2583]: E0302 13:02:41.729120 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:02:41.762410 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:02:41.807358 systemd-networkd[1389]: cali3d5c75bbfed: Link UP Mar 2 13:02:41.807638 systemd-networkd[1389]: cali3d5c75bbfed: Gained carrier Mar 2 13:02:41.881506 systemd[1]: Created slice kubepods-besteffort-pod8775e8a4_cdc7_4c32_8a26_49f4713db763.slice - libcontainer container kubepods-besteffort-pod8775e8a4_cdc7_4c32_8a26_49f4713db763.slice.
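
The "Removed slice"/"Created slice" pair (at 13:02:41.354 above and 13:02:41.881 here) is the whisker deployment being replaced: the old pod with UID 3499dad4-7c22-491a-9867-c5cb91b67e07 is torn down and a new whisker-65f85fcb9c-p677g pod with UID 8775e8a4-cdc7-4c32-8a26-49f4713db763 takes its place, which is why the same four volumes detached earlier are re-attached just below. The slice names encode the pod's QoS class and its UID with dashes escaped for systemd; a small sketch of that mapping, assumed from the names in these records rather than taken from kubelet source:

package main

import (
	"fmt"
	"strings"
)

// podSlice derives the systemd slice name for a pod cgroup. Systemd
// unit names use "-" as a hierarchy separator, so the dashes inside
// the pod UID are escaped to underscores.
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("besteffort", "8775e8a4-cdc7-4c32-8a26-49f4713db763"))
	// kubepods-besteffort-pod8775e8a4_cdc7_4c32_8a26_49f4713db763.slice,
	// matching the "Created slice" record above.
}
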
Mar 2 13:02:41.945022 kubelet[2583]: I0302 13:02:41.944910 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8775e8a4-cdc7-4c32-8a26-49f4713db763-whisker-ca-bundle\") pod \"whisker-65f85fcb9c-p677g\" (UID: \"8775e8a4-cdc7-4c32-8a26-49f4713db763\") " pod="calico-system/whisker-65f85fcb9c-p677g" Mar 2 13:02:41.945022 kubelet[2583]: I0302 13:02:41.945002 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8775e8a4-cdc7-4c32-8a26-49f4713db763-whisker-backend-key-pair\") pod \"whisker-65f85fcb9c-p677g\" (UID: \"8775e8a4-cdc7-4c32-8a26-49f4713db763\") " pod="calico-system/whisker-65f85fcb9c-p677g" Mar 2 13:02:41.945022 kubelet[2583]: I0302 13:02:41.945023 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8775e8a4-cdc7-4c32-8a26-49f4713db763-nginx-config\") pod \"whisker-65f85fcb9c-p677g\" (UID: \"8775e8a4-cdc7-4c32-8a26-49f4713db763\") " pod="calico-system/whisker-65f85fcb9c-p677g" Mar 2 13:02:41.945250 kubelet[2583]: I0302 13:02:41.945044 2583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwx2s\" (UniqueName: \"kubernetes.io/projected/8775e8a4-cdc7-4c32-8a26-49f4713db763-kube-api-access-jwx2s\") pod \"whisker-65f85fcb9c-p677g\" (UID: \"8775e8a4-cdc7-4c32-8a26-49f4713db763\") " pod="calico-system/whisker-65f85fcb9c-p677g" Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.264 [ERROR][4078] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.278 [INFO][4078] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0 calico-kube-controllers-7f6b574974- calico-system 10ddd28d-60cc-4ef0-9da9-c597c406cf38 963 0 2026-03-02 13:02:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f6b574974 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7f6b574974-gkgnz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3d5c75bbfed [] [] }} ContainerID="4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" Namespace="calico-system" Pod="calico-kube-controllers-7f6b574974-gkgnz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-" Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.279 [INFO][4078] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" Namespace="calico-system" Pod="calico-kube-controllers-7f6b574974-gkgnz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.426 [INFO][4106] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" 
HandleID="k8s-pod-network.4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" Workload="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.446 [INFO][4106] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" HandleID="k8s-pod-network.4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" Workload="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef670), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7f6b574974-gkgnz", "timestamp":"2026-03-02 13:02:41.426207418 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000ff4a0)} Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.446 [INFO][4106] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.571 [INFO][4106] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.572 [INFO][4106] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.597 [INFO][4106] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" host="localhost" Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.635 [INFO][4106] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.655 [INFO][4106] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.669 [INFO][4106] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.682 [INFO][4106] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.682 [INFO][4106] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" host="localhost" Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.690 [INFO][4106] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245 Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.748 [INFO][4106] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" host="localhost" Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.765 [INFO][4106] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" host="localhost" Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.767 [INFO][4106] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] 
handle="k8s-pod-network.4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" host="localhost" Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.768 [INFO][4106] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:41.957163 containerd[1464]: 2026-03-02 13:02:41.769 [INFO][4106] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" HandleID="k8s-pod-network.4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" Workload="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" Mar 2 13:02:41.958005 containerd[1464]: 2026-03-02 13:02:41.796 [INFO][4078] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" Namespace="calico-system" Pod="calico-kube-controllers-7f6b574974-gkgnz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0", GenerateName:"calico-kube-controllers-7f6b574974-", Namespace:"calico-system", SelfLink:"", UID:"10ddd28d-60cc-4ef0-9da9-c597c406cf38", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f6b574974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7f6b574974-gkgnz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3d5c75bbfed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:41.958005 containerd[1464]: 2026-03-02 13:02:41.801 [INFO][4078] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" Namespace="calico-system" Pod="calico-kube-controllers-7f6b574974-gkgnz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" Mar 2 13:02:41.958005 containerd[1464]: 2026-03-02 13:02:41.801 [INFO][4078] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d5c75bbfed ContainerID="4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" Namespace="calico-system" Pod="calico-kube-controllers-7f6b574974-gkgnz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" Mar 2 13:02:41.958005 containerd[1464]: 2026-03-02 13:02:41.809 [INFO][4078] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" Namespace="calico-system"
Pod="calico-kube-controllers-7f6b574974-gkgnz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" Mar 2 13:02:41.958005 containerd[1464]: 2026-03-02 13:02:41.831 [INFO][4078] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" Namespace="calico-system" Pod="calico-kube-controllers-7f6b574974-gkgnz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0", GenerateName:"calico-kube-controllers-7f6b574974-", Namespace:"calico-system", SelfLink:"", UID:"10ddd28d-60cc-4ef0-9da9-c597c406cf38", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f6b574974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245", Pod:"calico-kube-controllers-7f6b574974-gkgnz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3d5c75bbfed", MAC:"4e:55:a6:fd:21:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:41.958005 containerd[1464]: 2026-03-02 13:02:41.894 [INFO][4078] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245" Namespace="calico-system" Pod="calico-kube-controllers-7f6b574974-gkgnz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" Mar 2 13:02:41.964923 containerd[1464]: time="2026-03-02T13:02:41.964553708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wk98t,Uid:fcbd4ee9-3b15-40c9-804b-4103bd692429,Namespace:kube-system,Attempt:1,} returns sandbox id \"694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8\"" Mar 2 13:02:41.967471 kubelet[2583]: E0302 13:02:41.966850 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:02:41.976449 containerd[1464]: time="2026-03-02T13:02:41.972497905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:02:41.976449 containerd[1464]: time="2026-03-02T13:02:41.972619161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:02:41.976449 containerd[1464]: time="2026-03-02T13:02:41.972635572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:41.976449 containerd[1464]: time="2026-03-02T13:02:41.973789732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:41.981276 systemd-networkd[1389]: calicd024e21758: Link UP Mar 2 13:02:41.982654 systemd-networkd[1389]: calicd024e21758: Gained carrier Mar 2 13:02:41.985927 containerd[1464]: time="2026-03-02T13:02:41.985895829Z" level=info msg="CreateContainer within sandbox \"694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:02:42.056373 systemd[1]: run-netns-cni\x2dd4c2b87b\x2d6a7e\x2d1b3b\x2d736d\x2d5dbc6a51c4d1.mount: Deactivated successfully. Mar 2 13:02:42.057012 systemd[1]: run-netns-cni\x2d7044ecd2\x2da26a\x2ddaea\x2d2571\x2d432dab43d462.mount: Deactivated successfully. Mar 2 13:02:42.057211 systemd[1]: run-netns-cni\x2d03688457\x2d8673\x2dcd81\x2d1cb3\x2d330d1a8535a8.mount: Deactivated successfully. Mar 2 13:02:42.057364 systemd[1]: run-netns-cni\x2d7a1ad8ba\x2da548\x2d72b9\x2d90c3\x2df1a0f0332230.mount: Deactivated successfully. Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.359 [ERROR][4107] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.386 [INFO][4107] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0 calico-apiserver-866b8f6974- calico-system ec9dc82b-9518-4a49-8110-199c802cf620 961 0 2026-03-02 13:02:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:866b8f6974 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-866b8f6974-dbn4b eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calicd024e21758 [] [] }} ContainerID="3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" Namespace="calico-system" Pod="calico-apiserver-866b8f6974-dbn4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-" Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.386 [INFO][4107] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" Namespace="calico-system" Pod="calico-apiserver-866b8f6974-dbn4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.523 [INFO][4166] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" HandleID="k8s-pod-network.3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" Workload="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.541 [INFO][4166] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" HandleID="k8s-pod-network.3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" Workload="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037d7a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-866b8f6974-dbn4b", "timestamp":"2026-03-02 13:02:41.523368852 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000313080)} Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.542 [INFO][4166] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.769 [INFO][4166] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.771 [INFO][4166] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.782 [INFO][4166] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" host="localhost" Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.794 [INFO][4166] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.856 [INFO][4166] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.872 [INFO][4166] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.880 [INFO][4166] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.880 [INFO][4166] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" host="localhost" Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.894 [INFO][4166] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82 Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.936 [INFO][4166] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" host="localhost" Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.949 [INFO][4166] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" host="localhost" Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.950 [INFO][4166] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" host="localhost" Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.950 [INFO][4166] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 13:02:42.073633 containerd[1464]: 2026-03-02 13:02:41.951 [INFO][4166] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" HandleID="k8s-pod-network.3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" Workload="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" Mar 2 13:02:42.074884 containerd[1464]: 2026-03-02 13:02:41.971 [INFO][4107] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" Namespace="calico-system" Pod="calico-apiserver-866b8f6974-dbn4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0", GenerateName:"calico-apiserver-866b8f6974-", Namespace:"calico-system", SelfLink:"", UID:"ec9dc82b-9518-4a49-8110-199c802cf620", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866b8f6974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-866b8f6974-dbn4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicd024e21758", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:42.074884 containerd[1464]: 2026-03-02 13:02:41.971 [INFO][4107] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" Namespace="calico-system" Pod="calico-apiserver-866b8f6974-dbn4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" Mar 2 13:02:42.074884 containerd[1464]: 2026-03-02 13:02:41.971 [INFO][4107] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd024e21758 ContainerID="3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" Namespace="calico-system" Pod="calico-apiserver-866b8f6974-dbn4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" Mar 2 13:02:42.074884 containerd[1464]: 2026-03-02 13:02:41.984 [INFO][4107] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" Namespace="calico-system" Pod="calico-apiserver-866b8f6974-dbn4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" Mar 2 13:02:42.074884 containerd[1464]: 2026-03-02 13:02:41.985 [INFO][4107] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" Namespace="calico-system" Pod="calico-apiserver-866b8f6974-dbn4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0", GenerateName:"calico-apiserver-866b8f6974-", Namespace:"calico-system", SelfLink:"", UID:"ec9dc82b-9518-4a49-8110-199c802cf620", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866b8f6974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82", Pod:"calico-apiserver-866b8f6974-dbn4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicd024e21758", MAC:"d2:ef:7d:1d:6e:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:42.074884 containerd[1464]: 2026-03-02 13:02:42.044 [INFO][4107] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82" Namespace="calico-system" Pod="calico-apiserver-866b8f6974-dbn4b" WorkloadEndpoint="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" Mar 2 13:02:42.089487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1967161291.mount: Deactivated successfully. Mar 2 13:02:42.106807 systemd[1]: Started cri-containerd-06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6.scope - libcontainer container 06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6. Mar 2 13:02:42.110083 containerd[1464]: time="2026-03-02T13:02:42.104537894Z" level=info msg="CreateContainer within sandbox \"694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"016a110ca099762e55203bb86c9aba6eea8dd09a9315ce4a8ed9c8fdfea70cc5\"" Mar 2 13:02:42.114870 containerd[1464]: time="2026-03-02T13:02:42.112209018Z" level=info msg="StartContainer for \"016a110ca099762e55203bb86c9aba6eea8dd09a9315ce4a8ed9c8fdfea70cc5\"" Mar 2 13:02:42.140920 containerd[1464]: time="2026-03-02T13:02:42.140535268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:02:42.140920 containerd[1464]: time="2026-03-02T13:02:42.140683383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:02:42.140920 containerd[1464]: time="2026-03-02T13:02:42.140699574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:42.141786 containerd[1464]: time="2026-03-02T13:02:42.141635438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:42.155543 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:02:42.185022 containerd[1464]: time="2026-03-02T13:02:42.184464909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:02:42.185022 containerd[1464]: time="2026-03-02T13:02:42.184526113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:02:42.185022 containerd[1464]: time="2026-03-02T13:02:42.184545739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:42.185022 containerd[1464]: time="2026-03-02T13:02:42.184735533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:42.187289 systemd-networkd[1389]: calieb737911770: Gained IPv6LL Mar 2 13:02:42.202237 containerd[1464]: time="2026-03-02T13:02:42.200760746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65f85fcb9c-p677g,Uid:8775e8a4-cdc7-4c32-8a26-49f4713db763,Namespace:calico-system,Attempt:0,}" Mar 2 13:02:42.212834 systemd[1]: Started cri-containerd-4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245.scope - libcontainer container 4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245. Mar 2 13:02:42.263504 containerd[1464]: time="2026-03-02T13:02:42.263458259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d7f6b6d6-5zs8p,Uid:62dbc2a5-2b7d-4fc7-af17-38dcb97940da,Namespace:calico-system,Attempt:1,} returns sandbox id \"06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6\"" Mar 2 13:02:42.277822 systemd[1]: Started cri-containerd-016a110ca099762e55203bb86c9aba6eea8dd09a9315ce4a8ed9c8fdfea70cc5.scope - libcontainer container 016a110ca099762e55203bb86c9aba6eea8dd09a9315ce4a8ed9c8fdfea70cc5. Mar 2 13:02:42.279737 systemd[1]: Started cri-containerd-3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82.scope - libcontainer container 3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82. 
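A few lines up, systemd deactivates mount units named run-netns-cni\x2d....mount. Those names use systemd's unit escaping: "/" in the mounted path becomes "-", and a literal "-" (among other bytes) is written as \xNN. A minimal decoder as a sketch of that convention (systemd-escape --unescape --path is the real tool; this handles only the escapes seen above):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath turns a systemd mount-unit name back into a filesystem
// path: strip the ".mount" suffix, expand \xNN escapes, map "-" to "/".
func unescapeUnitPath(unit string) (string, error) {
	unit = strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	for i := 0; i < len(unit); i++ {
		switch {
		case strings.HasPrefix(unit[i:], `\x`) && i+4 <= len(unit):
			n, err := strconv.ParseUint(unit[i+2:i+4], 16, 8)
			if err != nil {
				return "", err
			}
			b.WriteByte(byte(n))
			i += 3 // loop increment consumes the fourth character
		case unit[i] == '-':
			b.WriteByte('/') // "-" encodes a path separator
		default:
			b.WriteByte(unit[i])
		}
	}
	return "/" + b.String(), nil
}

func main() {
	p, err := unescapeUnitPath(`run-netns-cni\x2dd4c2b87b\x2d6a7e\x2d1b3b\x2d736d\x2d5dbc6a51c4d1.mount`)
	fmt.Println(p, err) // /run/netns/cni-d4c2b87b-6a7e-1b3b-736d-5dbc6a51c4d1
}
```

So the four deactivated units correspond to the per-pod network namespaces under /run/netns that the CNI teardown of the old sandboxes left behind.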
Mar 2 13:02:42.296162 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:02:42.342509 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:02:42.363378 systemd-networkd[1389]: calib58ac6f9865: Link UP Mar 2 13:02:42.369374 systemd-networkd[1389]: calib58ac6f9865: Gained carrier Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:41.429 [ERROR][4124] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:41.461 [INFO][4124] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--wtgch-eth0 coredns-66bc5c9577- kube-system d871cd79-c660-4da5-b55c-eb8af06ee43e 965 0 2026-03-02 13:02:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-wtgch eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib58ac6f9865 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" Namespace="kube-system" Pod="coredns-66bc5c9577-wtgch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wtgch-" Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:41.461 [INFO][4124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" Namespace="kube-system" Pod="coredns-66bc5c9577-wtgch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:41.669 [INFO][4213] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" HandleID="k8s-pod-network.4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" Workload="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:41.685 [INFO][4213] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" HandleID="k8s-pod-network.4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" Workload="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000befe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-wtgch", "timestamp":"2026-03-02 13:02:41.669204036 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000191600)} Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:41.686 [INFO][4213] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:41.958 [INFO][4213] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
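Each endpoint record in this stretch is finalized with a MAC such as 4e:55:a6:fd:21:57 or d2:ef:7d:1d:6e:73. All of them have the locally-administered bit set and the multicast bit clear in the first octet, the usual pattern for randomly generated veth addresses. A sketch of generating one with crypto/rand (not necessarily how Calico's dataplane does it):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// randomUnicastMAC returns a random locally-administered unicast MAC:
// bit 1 of the first octet set (locally administered), bit 0 cleared
// (unicast) -- the pattern behind 4e:..., d2:..., d6:... in the log.
func randomUnicastMAC() (net.HardwareAddr, error) {
	mac := make(net.HardwareAddr, 6)
	if _, err := rand.Read(mac); err != nil {
		return nil, err
	}
	mac[0] = (mac[0] | 0x02) &^ 0x01
	return mac, nil
}

func main() {
	mac, err := randomUnicastMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac) // e.g. 2a:64:db:c9:05:3c
}
```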
Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:41.958 [INFO][4213] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:41.971 [INFO][4213] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" host="localhost" Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:42.092 [INFO][4213] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:42.155 [INFO][4213] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:42.187 [INFO][4213] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:42.238 [INFO][4213] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:42.240 [INFO][4213] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" host="localhost" Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:42.253 [INFO][4213] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289 Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:42.276 [INFO][4213] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" host="localhost" Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:42.328 [INFO][4213] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" host="localhost" Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:42.329 [INFO][4213] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" host="localhost" Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:42.329 [INFO][4213] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
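When auditing a burst like this, the useful signal is one line per flow: "Successfully claimed IPs: [a.b.c.d/26] ... handle=...". A sketch that extracts those pairs from journal text on stdin (the regexp is ad hoc, keyed to the exact wording above, not a stable interface):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches e.g.:
//   Successfully claimed IPs: [192.168.88.134/26] block=... handle="k8s-pod-network...." host="localhost"
var claimRE = regexp.MustCompile(
	`Successfully claimed IPs: \[([0-9./]+)\].*?handle="([^"]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := claimRE.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%-20s %s\n", m[1], m[2])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

Piping this journal through it (e.g. journalctl -u containerd | go run extract.go, filename hypothetical) would list the five claims in order: 192.168.88.132 through .136, each with the k8s-pod-network handle of the sandbox that owns it.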
Mar 2 13:02:42.409860 containerd[1464]: 2026-03-02 13:02:42.329 [INFO][4213] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" HandleID="k8s-pod-network.4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" Workload="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" Mar 2 13:02:42.429356 containerd[1464]: 2026-03-02 13:02:42.347 [INFO][4124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" Namespace="kube-system" Pod="coredns-66bc5c9577-wtgch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wtgch-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d871cd79-c660-4da5-b55c-eb8af06ee43e", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-wtgch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib58ac6f9865", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:42.429356 containerd[1464]: 2026-03-02 13:02:42.347 [INFO][4124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" Namespace="kube-system" Pod="coredns-66bc5c9577-wtgch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" Mar 2 13:02:42.429356 containerd[1464]: 2026-03-02 13:02:42.347 [INFO][4124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib58ac6f9865 ContainerID="4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" Namespace="kube-system" Pod="coredns-66bc5c9577-wtgch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" Mar 2 13:02:42.429356 containerd[1464]: 2026-03-02 13:02:42.371 
[INFO][4124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" Namespace="kube-system" Pod="coredns-66bc5c9577-wtgch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" Mar 2 13:02:42.429356 containerd[1464]: 2026-03-02 13:02:42.383 [INFO][4124] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" Namespace="kube-system" Pod="coredns-66bc5c9577-wtgch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wtgch-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d871cd79-c660-4da5-b55c-eb8af06ee43e", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289", Pod:"coredns-66bc5c9577-wtgch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib58ac6f9865", MAC:"d6:6b:e2:1f:7a:db", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:42.429356 containerd[1464]: 2026-03-02 13:02:42.399 [INFO][4124] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289" Namespace="kube-system" Pod="coredns-66bc5c9577-wtgch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" Mar 2 13:02:42.430166 containerd[1464]: time="2026-03-02T13:02:42.429155946Z" level=info msg="StartContainer for \"016a110ca099762e55203bb86c9aba6eea8dd09a9315ce4a8ed9c8fdfea70cc5\" returns successfully" Mar 2 13:02:42.468617 containerd[1464]: time="2026-03-02T13:02:42.468364196Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7f6b574974-gkgnz,Uid:10ddd28d-60cc-4ef0-9da9-c597c406cf38,Namespace:calico-system,Attempt:1,} returns sandbox id \"4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245\"" Mar 2 13:02:42.487026 systemd-networkd[1389]: cali3317be63288: Link UP Mar 2 13:02:42.490474 systemd-networkd[1389]: cali3317be63288: Gained carrier Mar 2 13:02:42.510169 containerd[1464]: time="2026-03-02T13:02:42.509807907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:02:42.512462 containerd[1464]: time="2026-03-02T13:02:42.510061240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:02:42.512462 containerd[1464]: time="2026-03-02T13:02:42.510084833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:42.512462 containerd[1464]: time="2026-03-02T13:02:42.511228695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:41.404 [ERROR][4133] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:41.447 [INFO][4133] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0 calico-apiserver-866b8f6974- calico-system 949cbf83-187d-4070-b9b2-980988cef53d 967 0 2026-03-02 13:02:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:866b8f6974 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-866b8f6974-h9zgx eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali3317be63288 [] [] }} ContainerID="8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" Namespace="calico-system" Pod="calico-apiserver-866b8f6974-h9zgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-" Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:41.448 [INFO][4133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" Namespace="calico-system" Pod="calico-apiserver-866b8f6974-h9zgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:41.837 [INFO][4215] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" HandleID="k8s-pod-network.8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" Workload="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:41.860 [INFO][4215] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" HandleID="k8s-pod-network.8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" Workload="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000388b50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-866b8f6974-h9zgx", "timestamp":"2026-03-02 13:02:41.837254981 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000381ce0)} Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:41.860 [INFO][4215] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:42.329 [INFO][4215] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:42.330 [INFO][4215] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:42.337 [INFO][4215] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" host="localhost" Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:42.359 [INFO][4215] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:42.378 [INFO][4215] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:42.385 [INFO][4215] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:42.394 [INFO][4215] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:42.394 [INFO][4215] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" host="localhost" Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:42.403 [INFO][4215] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:42.442 [INFO][4215] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" host="localhost" Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:42.457 [INFO][4215] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" host="localhost" Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:42.459 [INFO][4215] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" host="localhost" Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:42.459 [INFO][4215] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
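Every flow here lands in the same affine block, 192.168.88.128/26, and the claims march upward (.132 through .136). As a quick check of what that block actually spans, a stdlib-only sketch using net/netip:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.88.128/26")
	size := 1 << (32 - p.Bits()) // 64 addresses in a /26

	first := p.Addr()
	last := first
	for i := 0; i < size-1; i++ {
		last = last.Next()
	}
	fmt.Println(first, "-", last, "(", size, "addresses )")
	// 192.168.88.128 - 192.168.88.191 ( 64 addresses )

	for _, s := range []string{"192.168.88.132", "192.168.88.136", "192.168.88.192"} {
		a := netip.MustParseAddr(s)
		fmt.Println(s, "in block:", p.Contains(a))
	}
}
```

So this single block comfortably covers the node's handful of workloads; only once it is exhausted would the allocator need a new block affinity for the host.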
Mar 2 13:02:42.567334 containerd[1464]: 2026-03-02 13:02:42.459 [INFO][4215] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" HandleID="k8s-pod-network.8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" Workload="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" Mar 2 13:02:42.568027 containerd[1464]: 2026-03-02 13:02:42.470 [INFO][4133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" Namespace="calico-system" Pod="calico-apiserver-866b8f6974-h9zgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0", GenerateName:"calico-apiserver-866b8f6974-", Namespace:"calico-system", SelfLink:"", UID:"949cbf83-187d-4070-b9b2-980988cef53d", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866b8f6974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-866b8f6974-h9zgx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3317be63288", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:42.568027 containerd[1464]: 2026-03-02 13:02:42.471 [INFO][4133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" Namespace="calico-system" Pod="calico-apiserver-866b8f6974-h9zgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" Mar 2 13:02:42.568027 containerd[1464]: 2026-03-02 13:02:42.472 [INFO][4133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3317be63288 ContainerID="8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" Namespace="calico-system" Pod="calico-apiserver-866b8f6974-h9zgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" Mar 2 13:02:42.568027 containerd[1464]: 2026-03-02 13:02:42.505 [INFO][4133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" Namespace="calico-system" Pod="calico-apiserver-866b8f6974-h9zgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" Mar 2 13:02:42.568027 containerd[1464]: 2026-03-02 13:02:42.506 [INFO][4133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" Namespace="calico-system" Pod="calico-apiserver-866b8f6974-h9zgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0", GenerateName:"calico-apiserver-866b8f6974-", Namespace:"calico-system", SelfLink:"", UID:"949cbf83-187d-4070-b9b2-980988cef53d", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866b8f6974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac", Pod:"calico-apiserver-866b8f6974-h9zgx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3317be63288", MAC:"3a:a5:16:79:e8:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:42.568027 containerd[1464]: 2026-03-02 13:02:42.546 [INFO][4133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac" Namespace="calico-system" Pod="calico-apiserver-866b8f6974-h9zgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" Mar 2 13:02:42.570710 kernel: calico-node[4271]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 2 13:02:42.579837 systemd[1]: Started cri-containerd-4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289.scope - libcontainer container 4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289. Mar 2 13:02:42.594297 containerd[1464]: time="2026-03-02T13:02:42.594250986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866b8f6974-dbn4b,Uid:ec9dc82b-9518-4a49-8110-199c802cf620,Namespace:calico-system,Attempt:1,} returns sandbox id \"3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82\"" Mar 2 13:02:42.651079 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:02:42.652616 containerd[1464]: time="2026-03-02T13:02:42.650415310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:02:42.662614 containerd[1464]: time="2026-03-02T13:02:42.650616505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:02:42.662614 containerd[1464]: time="2026-03-02T13:02:42.650644477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:42.662614 containerd[1464]: time="2026-03-02T13:02:42.650782954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:42.731833 systemd[1]: Started cri-containerd-8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac.scope - libcontainer container 8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac. Mar 2 13:02:42.762214 kubelet[2583]: E0302 13:02:42.762178 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:02:42.794966 containerd[1464]: time="2026-03-02T13:02:42.794870781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:42.795904 containerd[1464]: time="2026-03-02T13:02:42.795827014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.3: active requests=0, bytes read=8793087" Mar 2 13:02:42.803146 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:02:42.816653 containerd[1464]: time="2026-03-02T13:02:42.816387588Z" level=info msg="ImageCreate event name:\"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:42.839206 kubelet[2583]: I0302 13:02:42.838385 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wk98t" podStartSLOduration=41.838361573 podStartE2EDuration="41.838361573s" podCreationTimestamp="2026-03-02 13:02:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:02:42.791295233 +0000 UTC m=+47.671476761" watchObservedRunningTime="2026-03-02 13:02:42.838361573 +0000 UTC m=+47.718543100" Mar 2 13:02:42.842325 containerd[1464]: time="2026-03-02T13:02:42.841799209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:42.845204 containerd[1464]: time="2026-03-02T13:02:42.845144162Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.3\" with image id \"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\", size \"10349132\" in 2.193735647s" Mar 2 13:02:42.845375 containerd[1464]: time="2026-03-02T13:02:42.845319780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\" returns image reference \"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\"" Mar 2 13:02:42.846846 containerd[1464]: time="2026-03-02T13:02:42.846822199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wtgch,Uid:d871cd79-c660-4da5-b55c-eb8af06ee43e,Namespace:kube-system,Attempt:1,} returns sandbox id \"4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289\"" Mar 2 13:02:42.849111 kubelet[2583]: E0302 13:02:42.848966 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:02:42.851124 containerd[1464]: time="2026-03-02T13:02:42.851099901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\"" Mar 2 13:02:42.866508 containerd[1464]: time="2026-03-02T13:02:42.865197392Z" level=info msg="CreateContainer within sandbox \"759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 2 13:02:42.870864 containerd[1464]: time="2026-03-02T13:02:42.870818407Z" level=info msg="CreateContainer within sandbox \"4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:02:42.893041 systemd-networkd[1389]: calibf8801f5f01: Link UP Mar 2 13:02:42.894862 systemd-networkd[1389]: calibf8801f5f01: Gained carrier Mar 2 13:02:42.927328 containerd[1464]: time="2026-03-02T13:02:42.927188204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866b8f6974-h9zgx,Uid:949cbf83-187d-4070-b9b2-980988cef53d,Namespace:calico-system,Attempt:1,} returns sandbox id \"8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac\"" Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.480 [INFO][4514] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--65f85fcb9c--p677g-eth0 whisker-65f85fcb9c- calico-system 8775e8a4-cdc7-4c32-8a26-49f4713db763 998 0 2026-03-02 13:02:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65f85fcb9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-65f85fcb9c-p677g eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibf8801f5f01 [] [] }} ContainerID="232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" Namespace="calico-system" Pod="whisker-65f85fcb9c-p677g" WorkloadEndpoint="localhost-k8s-whisker--65f85fcb9c--p677g-" Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.481 [INFO][4514] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" Namespace="calico-system" Pod="whisker-65f85fcb9c-p677g" WorkloadEndpoint="localhost-k8s-whisker--65f85fcb9c--p677g-eth0" Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.647 [INFO][4609] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" HandleID="k8s-pod-network.232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" Workload="localhost-k8s-whisker--65f85fcb9c--p677g-eth0" Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.680 [INFO][4609] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" HandleID="k8s-pod-network.232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" Workload="localhost-k8s-whisker--65f85fcb9c--p677g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050beb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-65f85fcb9c-p677g", "timestamp":"2026-03-02 13:02:42.647031167 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005cb1e0)} Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.680 [INFO][4609] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.680 [INFO][4609] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.680 [INFO][4609] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.686 [INFO][4609] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" host="localhost" Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.726 [INFO][4609] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.777 [INFO][4609] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.781 [INFO][4609] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.794 [INFO][4609] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.800 [INFO][4609] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" host="localhost" Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.808 [INFO][4609] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.834 [INFO][4609] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" host="localhost" Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.873 [INFO][4609] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" host="localhost" Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.874 [INFO][4609] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" host="localhost" Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.874 [INFO][4609] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
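kubelet's recurring "Nameserver limits exceeded" warning (dns.go) fires because the resolver config offered more nameservers than kubelet will write into a pod's resolv.conf; the applied line keeps exactly three (1.1.1.1 1.0.0.1 8.8.8.8) and drops the rest. A sketch of that truncation, assuming the limit of three that the applied line implies; the fourth server below is invented for illustration, since the log does not say which entries were omitted:

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers is the cap the kubelet log implies: the applied
// resolv.conf line keeps three servers and omits the rest.
const maxNameservers = 3

// applyNameserverLimit truncates the list and reports whether it did.
func applyNameserverLimit(servers []string) ([]string, bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"} // last entry hypothetical
	applied, truncated := applyNameserverLimit(host)
	if truncated {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}
```

Because the warning is emitted on every pod DNS setup, it repeats throughout this section without indicating any new failure.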
Mar 2 13:02:42.940073 containerd[1464]: 2026-03-02 13:02:42.874 [INFO][4609] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" HandleID="k8s-pod-network.232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" Workload="localhost-k8s-whisker--65f85fcb9c--p677g-eth0" Mar 2 13:02:42.940719 containerd[1464]: 2026-03-02 13:02:42.888 [INFO][4514] cni-plugin/k8s.go 418: Populated endpoint ContainerID="232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" Namespace="calico-system" Pod="whisker-65f85fcb9c-p677g" WorkloadEndpoint="localhost-k8s-whisker--65f85fcb9c--p677g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--65f85fcb9c--p677g-eth0", GenerateName:"whisker-65f85fcb9c-", Namespace:"calico-system", SelfLink:"", UID:"8775e8a4-cdc7-4c32-8a26-49f4713db763", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65f85fcb9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-65f85fcb9c-p677g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibf8801f5f01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:42.940719 containerd[1464]: 2026-03-02 13:02:42.888 [INFO][4514] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" Namespace="calico-system" Pod="whisker-65f85fcb9c-p677g" WorkloadEndpoint="localhost-k8s-whisker--65f85fcb9c--p677g-eth0" Mar 2 13:02:42.940719 containerd[1464]: 2026-03-02 13:02:42.888 [INFO][4514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf8801f5f01 ContainerID="232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" Namespace="calico-system" Pod="whisker-65f85fcb9c-p677g" WorkloadEndpoint="localhost-k8s-whisker--65f85fcb9c--p677g-eth0" Mar 2 13:02:42.940719 containerd[1464]: 2026-03-02 13:02:42.894 [INFO][4514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" Namespace="calico-system" Pod="whisker-65f85fcb9c-p677g" WorkloadEndpoint="localhost-k8s-whisker--65f85fcb9c--p677g-eth0" Mar 2 13:02:42.940719 containerd[1464]: 2026-03-02 13:02:42.895 [INFO][4514] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" Namespace="calico-system" Pod="whisker-65f85fcb9c-p677g" WorkloadEndpoint="localhost-k8s-whisker--65f85fcb9c--p677g-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--65f85fcb9c--p677g-eth0", GenerateName:"whisker-65f85fcb9c-", Namespace:"calico-system", SelfLink:"", UID:"8775e8a4-cdc7-4c32-8a26-49f4713db763", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65f85fcb9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb", Pod:"whisker-65f85fcb9c-p677g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibf8801f5f01", MAC:"2a:64:db:c9:05:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:42.940719 containerd[1464]: 2026-03-02 13:02:42.930 [INFO][4514] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb" Namespace="calico-system" Pod="whisker-65f85fcb9c-p677g" WorkloadEndpoint="localhost-k8s-whisker--65f85fcb9c--p677g-eth0" Mar 2 13:02:42.943638 containerd[1464]: time="2026-03-02T13:02:42.943068464Z" level=info msg="CreateContainer within sandbox \"759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"01e3415b4970e8d09b7386142eac07957b2aac5d93251666e7a138580ca65ef6\"" Mar 2 13:02:42.946628 containerd[1464]: time="2026-03-02T13:02:42.944909211Z" level=info msg="CreateContainer within sandbox \"4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eaee7f31cce1ebe8288d720a8636db146f6084198ee8be53500a096aa54b7053\"" Mar 2 13:02:42.946628 containerd[1464]: time="2026-03-02T13:02:42.945712679Z" level=info msg="StartContainer for \"01e3415b4970e8d09b7386142eac07957b2aac5d93251666e7a138580ca65ef6\"" Mar 2 13:02:42.947058 containerd[1464]: time="2026-03-02T13:02:42.946964812Z" level=info msg="StartContainer for \"eaee7f31cce1ebe8288d720a8636db146f6084198ee8be53500a096aa54b7053\"" Mar 2 13:02:43.009793 containerd[1464]: time="2026-03-02T13:02:43.009395679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:02:43.009793 containerd[1464]: time="2026-03-02T13:02:43.009497810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:02:43.009793 containerd[1464]: time="2026-03-02T13:02:43.009552081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:43.013078 containerd[1464]: time="2026-03-02T13:02:43.009824599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:02:43.018909 systemd-networkd[1389]: calic24078ab4b5: Gained IPv6LL Mar 2 13:02:43.019494 systemd-networkd[1389]: cali96a03c6b168: Gained IPv6LL Mar 2 13:02:43.095259 systemd[1]: Started cri-containerd-01e3415b4970e8d09b7386142eac07957b2aac5d93251666e7a138580ca65ef6.scope - libcontainer container 01e3415b4970e8d09b7386142eac07957b2aac5d93251666e7a138580ca65ef6. Mar 2 13:02:43.104057 systemd[1]: Started cri-containerd-232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb.scope - libcontainer container 232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb. Mar 2 13:02:43.119222 systemd[1]: Started cri-containerd-eaee7f31cce1ebe8288d720a8636db146f6084198ee8be53500a096aa54b7053.scope - libcontainer container eaee7f31cce1ebe8288d720a8636db146f6084198ee8be53500a096aa54b7053. Mar 2 13:02:43.176238 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:02:43.182039 containerd[1464]: time="2026-03-02T13:02:43.181807473Z" level=info msg="StartContainer for \"eaee7f31cce1ebe8288d720a8636db146f6084198ee8be53500a096aa54b7053\" returns successfully" Mar 2 13:02:43.182417 containerd[1464]: time="2026-03-02T13:02:43.182139230Z" level=info msg="StartContainer for \"01e3415b4970e8d09b7386142eac07957b2aac5d93251666e7a138580ca65ef6\" returns successfully" Mar 2 13:02:43.295488 containerd[1464]: time="2026-03-02T13:02:43.295327461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65f85fcb9c-p677g,Uid:8775e8a4-cdc7-4c32-8a26-49f4713db763,Namespace:calico-system,Attempt:0,} returns sandbox id \"232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb\"" Mar 2 13:02:43.343832 kubelet[2583]: I0302 13:02:43.343712 2583 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3499dad4-7c22-491a-9867-c5cb91b67e07" path="/var/lib/kubelet/pods/3499dad4-7c22-491a-9867-c5cb91b67e07/volumes" Mar 2 13:02:43.402898 systemd-networkd[1389]: cali3d5c75bbfed: Gained IPv6LL Mar 2 13:02:43.620211 systemd-networkd[1389]: vxlan.calico: Link UP Mar 2 13:02:43.620222 systemd-networkd[1389]: vxlan.calico: Gained carrier Mar 2 13:02:43.787537 systemd-networkd[1389]: cali3317be63288: Gained IPv6LL Mar 2 13:02:43.795812 kubelet[2583]: E0302 13:02:43.795718 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:02:43.847671 kubelet[2583]: E0302 13:02:43.846472 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:02:43.869054 kubelet[2583]: I0302 13:02:43.868547 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wtgch" podStartSLOduration=42.868521376 podStartE2EDuration="42.868521376s" podCreationTimestamp="2026-03-02 13:02:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:02:43.837775638 +0000 UTC m=+48.717957185" watchObservedRunningTime="2026-03-02 13:02:43.868521376 +0000 UTC m=+48.748702903" Mar 2 13:02:43.919163 
systemd-networkd[1389]: calicd024e21758: Gained IPv6LL Mar 2 13:02:43.978871 systemd-networkd[1389]: calib58ac6f9865: Gained IPv6LL Mar 2 13:02:44.457870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2460240664.mount: Deactivated successfully. Mar 2 13:02:44.803719 containerd[1464]: time="2026-03-02T13:02:44.803461647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:44.804614 containerd[1464]: time="2026-03-02T13:02:44.804520053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.3: active requests=0, bytes read=55607954" Mar 2 13:02:44.806495 containerd[1464]: time="2026-03-02T13:02:44.806411699Z" level=info msg="ImageCreate event name:\"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:44.809014 containerd[1464]: time="2026-03-02T13:02:44.808963755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:44.810343 containerd[1464]: time="2026-03-02T13:02:44.810234906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" with image id \"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\", size \"55607800\" in 1.959043524s" Mar 2 13:02:44.810343 containerd[1464]: time="2026-03-02T13:02:44.810331847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" returns image reference \"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\"" Mar 2 13:02:44.813526 containerd[1464]: time="2026-03-02T13:02:44.813213677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\"" Mar 2 13:02:44.819682 containerd[1464]: time="2026-03-02T13:02:44.819622339Z" level=info msg="CreateContainer within sandbox \"06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 2 13:02:44.840664 containerd[1464]: time="2026-03-02T13:02:44.840538652Z" level=info msg="CreateContainer within sandbox \"06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"eeb40cb1df16e9265a5f54518863efda1cea60c691a9de7c8cec7be8d062dd3e\"" Mar 2 13:02:44.841506 containerd[1464]: time="2026-03-02T13:02:44.841458185Z" level=info msg="StartContainer for \"eeb40cb1df16e9265a5f54518863efda1cea60c691a9de7c8cec7be8d062dd3e\"" Mar 2 13:02:44.853246 kubelet[2583]: E0302 13:02:44.852615 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:02:44.853791 kubelet[2583]: E0302 13:02:44.852918 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:02:44.874867 systemd-networkd[1389]: calibf8801f5f01: Gained IPv6LL Mar 2 13:02:44.898903 systemd[1]: Started cri-containerd-eeb40cb1df16e9265a5f54518863efda1cea60c691a9de7c8cec7be8d062dd3e.scope - libcontainer container 
eeb40cb1df16e9265a5f54518863efda1cea60c691a9de7c8cec7be8d062dd3e. Mar 2 13:02:44.976210 containerd[1464]: time="2026-03-02T13:02:44.976100919Z" level=info msg="StartContainer for \"eeb40cb1df16e9265a5f54518863efda1cea60c691a9de7c8cec7be8d062dd3e\" returns successfully" Mar 2 13:02:45.195827 systemd-networkd[1389]: vxlan.calico: Gained IPv6LL Mar 2 13:02:45.858693 kubelet[2583]: E0302 13:02:45.858221 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:02:45.874610 kubelet[2583]: I0302 13:02:45.874477 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d7f6b6d6-5zs8p" podStartSLOduration=22.327010971 podStartE2EDuration="24.874459951s" podCreationTimestamp="2026-03-02 13:02:21 +0000 UTC" firstStartedPulling="2026-03-02 13:02:42.26561116 +0000 UTC m=+47.145792687" lastFinishedPulling="2026-03-02 13:02:44.81306014 +0000 UTC m=+49.693241667" observedRunningTime="2026-03-02 13:02:45.872557005 +0000 UTC m=+50.752738533" watchObservedRunningTime="2026-03-02 13:02:45.874459951 +0000 UTC m=+50.754641488" Mar 2 13:02:46.279683 containerd[1464]: time="2026-03-02T13:02:46.279497137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:46.280547 containerd[1464]: time="2026-03-02T13:02:46.280498524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.3: active requests=0, bytes read=52396348" Mar 2 13:02:46.282027 containerd[1464]: time="2026-03-02T13:02:46.281980595Z" level=info msg="ImageCreate event name:\"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:46.285303 containerd[1464]: time="2026-03-02T13:02:46.285227627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:46.286024 containerd[1464]: time="2026-03-02T13:02:46.285959973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" with image id \"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\", size \"53952361\" in 1.472705119s" Mar 2 13:02:46.286024 containerd[1464]: time="2026-03-02T13:02:46.286009084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" returns image reference \"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\"" Mar 2 13:02:46.287613 containerd[1464]: time="2026-03-02T13:02:46.287499452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 13:02:46.304546 containerd[1464]: time="2026-03-02T13:02:46.304480291Z" level=info msg="CreateContainer within sandbox \"4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 2 13:02:46.326669 containerd[1464]: time="2026-03-02T13:02:46.326615451Z" level=info msg="CreateContainer within sandbox \"4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"05fdbd6fac82c5e6855a7d3903ab65bc95915c9ca5f3075b2157f548b582d73d\"" Mar 2 13:02:46.328230 containerd[1464]: time="2026-03-02T13:02:46.328127804Z" level=info msg="StartContainer for \"05fdbd6fac82c5e6855a7d3903ab65bc95915c9ca5f3075b2157f548b582d73d\"" Mar 2 13:02:46.377895 systemd[1]: Started cri-containerd-05fdbd6fac82c5e6855a7d3903ab65bc95915c9ca5f3075b2157f548b582d73d.scope - libcontainer container 05fdbd6fac82c5e6855a7d3903ab65bc95915c9ca5f3075b2157f548b582d73d. Mar 2 13:02:46.435878 containerd[1464]: time="2026-03-02T13:02:46.435823863Z" level=info msg="StartContainer for \"05fdbd6fac82c5e6855a7d3903ab65bc95915c9ca5f3075b2157f548b582d73d\" returns successfully" Mar 2 13:02:46.923665 kubelet[2583]: I0302 13:02:46.921097 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f6b574974-gkgnz" podStartSLOduration=21.10580759 podStartE2EDuration="24.920998792s" podCreationTimestamp="2026-03-02 13:02:22 +0000 UTC" firstStartedPulling="2026-03-02 13:02:42.472207518 +0000 UTC m=+47.352389045" lastFinishedPulling="2026-03-02 13:02:46.287398719 +0000 UTC m=+51.167580247" observedRunningTime="2026-03-02 13:02:46.919093471 +0000 UTC m=+51.799275018" watchObservedRunningTime="2026-03-02 13:02:46.920998792 +0000 UTC m=+51.801180319" Mar 2 13:02:48.029700 containerd[1464]: time="2026-03-02T13:02:48.029550363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:48.031096 containerd[1464]: time="2026-03-02T13:02:48.031037507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=48403149" Mar 2 13:02:48.032781 containerd[1464]: time="2026-03-02T13:02:48.032714831Z" level=info msg="ImageCreate event name:\"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:48.035676 containerd[1464]: time="2026-03-02T13:02:48.035613918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:48.037867 containerd[1464]: time="2026-03-02T13:02:48.036908907Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"49959210\" in 1.749368338s" Mar 2 13:02:48.037867 containerd[1464]: time="2026-03-02T13:02:48.037002362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\"" Mar 2 13:02:48.039161 containerd[1464]: time="2026-03-02T13:02:48.039091045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 13:02:48.045462 containerd[1464]: time="2026-03-02T13:02:48.045403849Z" level=info msg="CreateContainer within sandbox \"3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 13:02:48.069202 containerd[1464]: time="2026-03-02T13:02:48.069061431Z" level=info 
msg="CreateContainer within sandbox \"3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2796333ac4729a4a09ba7d8598d340886fe83b57561aac7cdb48a20f1aaeef57\"" Mar 2 13:02:48.072072 containerd[1464]: time="2026-03-02T13:02:48.070524277Z" level=info msg="StartContainer for \"2796333ac4729a4a09ba7d8598d340886fe83b57561aac7cdb48a20f1aaeef57\"" Mar 2 13:02:48.237314 systemd[1]: Started cri-containerd-2796333ac4729a4a09ba7d8598d340886fe83b57561aac7cdb48a20f1aaeef57.scope - libcontainer container 2796333ac4729a4a09ba7d8598d340886fe83b57561aac7cdb48a20f1aaeef57. Mar 2 13:02:48.307093 containerd[1464]: time="2026-03-02T13:02:48.306883943Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:48.308985 containerd[1464]: time="2026-03-02T13:02:48.308900924Z" level=info msg="StartContainer for \"2796333ac4729a4a09ba7d8598d340886fe83b57561aac7cdb48a20f1aaeef57\" returns successfully" Mar 2 13:02:48.310712 containerd[1464]: time="2026-03-02T13:02:48.309366381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=77" Mar 2 13:02:48.312373 containerd[1464]: time="2026-03-02T13:02:48.312060257Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"49959210\" in 272.906776ms" Mar 2 13:02:48.312373 containerd[1464]: time="2026-03-02T13:02:48.312092488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\"" Mar 2 13:02:48.315213 containerd[1464]: time="2026-03-02T13:02:48.315107858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\"" Mar 2 13:02:48.322295 containerd[1464]: time="2026-03-02T13:02:48.322242247Z" level=info msg="CreateContainer within sandbox \"8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 13:02:48.340484 containerd[1464]: time="2026-03-02T13:02:48.340320415Z" level=info msg="CreateContainer within sandbox \"8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"79dbd96c64153f323512be4821cc06d52f564328f8c779ce44694f2e2069e22d\"" Mar 2 13:02:48.341112 containerd[1464]: time="2026-03-02T13:02:48.341063335Z" level=info msg="StartContainer for \"79dbd96c64153f323512be4821cc06d52f564328f8c779ce44694f2e2069e22d\"" Mar 2 13:02:48.389737 systemd[1]: Started cri-containerd-79dbd96c64153f323512be4821cc06d52f564328f8c779ce44694f2e2069e22d.scope - libcontainer container 79dbd96c64153f323512be4821cc06d52f564328f8c779ce44694f2e2069e22d. 
Mar 2 13:02:48.463246 containerd[1464]: time="2026-03-02T13:02:48.463170765Z" level=info msg="StartContainer for \"79dbd96c64153f323512be4821cc06d52f564328f8c779ce44694f2e2069e22d\" returns successfully" Mar 2 13:02:48.975769 kubelet[2583]: I0302 13:02:48.975690 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-866b8f6974-h9zgx" podStartSLOduration=22.596918409 podStartE2EDuration="27.975671734s" podCreationTimestamp="2026-03-02 13:02:21 +0000 UTC" firstStartedPulling="2026-03-02 13:02:42.935726314 +0000 UTC m=+47.815907851" lastFinishedPulling="2026-03-02 13:02:48.31447965 +0000 UTC m=+53.194661176" observedRunningTime="2026-03-02 13:02:48.975305212 +0000 UTC m=+53.855486749" watchObservedRunningTime="2026-03-02 13:02:48.975671734 +0000 UTC m=+53.855853260" Mar 2 13:02:48.976605 kubelet[2583]: I0302 13:02:48.976081 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-866b8f6974-dbn4b" podStartSLOduration=22.533973195 podStartE2EDuration="27.976074344s" podCreationTimestamp="2026-03-02 13:02:21 +0000 UTC" firstStartedPulling="2026-03-02 13:02:42.596660195 +0000 UTC m=+47.476841722" lastFinishedPulling="2026-03-02 13:02:48.038761344 +0000 UTC m=+52.918942871" observedRunningTime="2026-03-02 13:02:48.956175833 +0000 UTC m=+53.836357360" watchObservedRunningTime="2026-03-02 13:02:48.976074344 +0000 UTC m=+53.856255872" Mar 2 13:02:49.951285 kubelet[2583]: I0302 13:02:49.951240 2583 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:02:50.984155 containerd[1464]: time="2026-03-02T13:02:50.984077860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:50.985067 containerd[1464]: time="2026-03-02T13:02:50.984995964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3: active requests=0, bytes read=14702266" Mar 2 13:02:50.986298 containerd[1464]: time="2026-03-02T13:02:50.986241966Z" level=info msg="ImageCreate event name:\"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:50.989832 containerd[1464]: time="2026-03-02T13:02:50.989791401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:50.990657 containerd[1464]: time="2026-03-02T13:02:50.990522557Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" with image id \"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\", size \"16258263\" in 2.675347803s" Mar 2 13:02:50.990657 containerd[1464]: time="2026-03-02T13:02:50.990600943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" returns image reference \"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\"" Mar 2 13:02:50.991646 containerd[1464]: time="2026-03-02T13:02:50.991610449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\"" Mar 2 13:02:50.996474 containerd[1464]: 
time="2026-03-02T13:02:50.996358150Z" level=info msg="CreateContainer within sandbox \"759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 2 13:02:51.093411 containerd[1464]: time="2026-03-02T13:02:51.093316216Z" level=info msg="CreateContainer within sandbox \"759f9eff6ca4083e7c5cb84001eb5bb19f02c28d0994f132b8edd62b86f04a71\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5172db4d00704bf69cf315a9921d5b3417a8cd3c6c92e4d878be97a807cbaba9\"" Mar 2 13:02:51.094266 containerd[1464]: time="2026-03-02T13:02:51.094227125Z" level=info msg="StartContainer for \"5172db4d00704bf69cf315a9921d5b3417a8cd3c6c92e4d878be97a807cbaba9\"" Mar 2 13:02:51.182859 systemd[1]: Started cri-containerd-5172db4d00704bf69cf315a9921d5b3417a8cd3c6c92e4d878be97a807cbaba9.scope - libcontainer container 5172db4d00704bf69cf315a9921d5b3417a8cd3c6c92e4d878be97a807cbaba9. Mar 2 13:02:51.229542 containerd[1464]: time="2026-03-02T13:02:51.228813964Z" level=info msg="StartContainer for \"5172db4d00704bf69cf315a9921d5b3417a8cd3c6c92e4d878be97a807cbaba9\" returns successfully" Mar 2 13:02:51.673389 containerd[1464]: time="2026-03-02T13:02:51.673287586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:51.674234 containerd[1464]: time="2026-03-02T13:02:51.674165153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.3: active requests=0, bytes read=6036825" Mar 2 13:02:51.677645 containerd[1464]: time="2026-03-02T13:02:51.677542227Z" level=info msg="ImageCreate event name:\"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:51.680885 containerd[1464]: time="2026-03-02T13:02:51.680796198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:51.681863 containerd[1464]: time="2026-03-02T13:02:51.681803731Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.3\" with image id \"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\", size \"7592862\" in 690.141495ms" Mar 2 13:02:51.681863 containerd[1464]: time="2026-03-02T13:02:51.681845169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\" returns image reference \"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\"" Mar 2 13:02:51.690168 containerd[1464]: time="2026-03-02T13:02:51.690006777Z" level=info msg="CreateContainer within sandbox \"232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 2 13:02:51.734603 containerd[1464]: time="2026-03-02T13:02:51.734503080Z" level=info msg="CreateContainer within sandbox \"232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"37db9cef6690a295f8d946e8790fed2bd4136aa4895405ca624d057db0f3a1a2\"" Mar 2 13:02:51.735706 containerd[1464]: time="2026-03-02T13:02:51.735655988Z" level=info msg="StartContainer for 
\"37db9cef6690a295f8d946e8790fed2bd4136aa4895405ca624d057db0f3a1a2\"" Mar 2 13:02:51.777836 systemd[1]: Started cri-containerd-37db9cef6690a295f8d946e8790fed2bd4136aa4895405ca624d057db0f3a1a2.scope - libcontainer container 37db9cef6690a295f8d946e8790fed2bd4136aa4895405ca624d057db0f3a1a2. Mar 2 13:02:51.869453 containerd[1464]: time="2026-03-02T13:02:51.869372873Z" level=info msg="StartContainer for \"37db9cef6690a295f8d946e8790fed2bd4136aa4895405ca624d057db0f3a1a2\" returns successfully" Mar 2 13:02:51.871745 containerd[1464]: time="2026-03-02T13:02:51.871644347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\"" Mar 2 13:02:51.981312 kubelet[2583]: I0302 13:02:51.981118 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-j7c7n" podStartSLOduration=19.640812982 podStartE2EDuration="29.98109609s" podCreationTimestamp="2026-03-02 13:02:22 +0000 UTC" firstStartedPulling="2026-03-02 13:02:40.651110721 +0000 UTC m=+45.531292248" lastFinishedPulling="2026-03-02 13:02:50.991393829 +0000 UTC m=+55.871575356" observedRunningTime="2026-03-02 13:02:51.980165094 +0000 UTC m=+56.860346622" watchObservedRunningTime="2026-03-02 13:02:51.98109609 +0000 UTC m=+56.861277617" Mar 2 13:02:52.199818 kubelet[2583]: I0302 13:02:52.199521 2583 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 2 13:02:52.202058 kubelet[2583]: I0302 13:02:52.201760 2583 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 2 13:02:52.813370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3328124627.mount: Deactivated successfully. 
Mar 2 13:02:52.840214 containerd[1464]: time="2026-03-02T13:02:52.840110439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:52.841505 containerd[1464]: time="2026-03-02T13:02:52.841192370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.3: active requests=0, bytes read=17599119" Mar 2 13:02:52.843486 containerd[1464]: time="2026-03-02T13:02:52.842853673Z" level=info msg="ImageCreate event name:\"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:52.847492 containerd[1464]: time="2026-03-02T13:02:52.846319549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:02:52.847959 containerd[1464]: time="2026-03-02T13:02:52.847831127Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" with image id \"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\", size \"17598949\" in 976.125244ms" Mar 2 13:02:52.848089 containerd[1464]: time="2026-03-02T13:02:52.847968372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" returns image reference \"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\"" Mar 2 13:02:52.856236 containerd[1464]: time="2026-03-02T13:02:52.856158667Z" level=info msg="CreateContainer within sandbox \"232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 2 13:02:52.874032 containerd[1464]: time="2026-03-02T13:02:52.873962822Z" level=info msg="CreateContainer within sandbox \"232a1382eb826100acbd5bf24e6bcddf97b75442aa031e971ea54fcf33d5befb\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2179139236ceb13b7235fe5f53ca0474c863139f7128b6178d55f6347aa6d602\"" Mar 2 13:02:52.875022 containerd[1464]: time="2026-03-02T13:02:52.874959508Z" level=info msg="StartContainer for \"2179139236ceb13b7235fe5f53ca0474c863139f7128b6178d55f6347aa6d602\"" Mar 2 13:02:52.943987 systemd[1]: Started cri-containerd-2179139236ceb13b7235fe5f53ca0474c863139f7128b6178d55f6347aa6d602.scope - libcontainer container 2179139236ceb13b7235fe5f53ca0474c863139f7128b6178d55f6347aa6d602. 
Mar 2 13:02:53.001370 containerd[1464]: time="2026-03-02T13:02:53.001247693Z" level=info msg="StartContainer for \"2179139236ceb13b7235fe5f53ca0474c863139f7128b6178d55f6347aa6d602\" returns successfully" Mar 2 13:02:53.996241 kubelet[2583]: I0302 13:02:53.996170 2583 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-65f85fcb9c-p677g" podStartSLOduration=3.445482927 podStartE2EDuration="12.99615183s" podCreationTimestamp="2026-03-02 13:02:41 +0000 UTC" firstStartedPulling="2026-03-02 13:02:43.298222324 +0000 UTC m=+48.178403850" lastFinishedPulling="2026-03-02 13:02:52.848891226 +0000 UTC m=+57.729072753" observedRunningTime="2026-03-02 13:02:53.995764308 +0000 UTC m=+58.875945835" watchObservedRunningTime="2026-03-02 13:02:53.99615183 +0000 UTC m=+58.876333357" Mar 2 13:02:54.504864 kubelet[2583]: I0302 13:02:54.504751 2583 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:02:55.296494 containerd[1464]: time="2026-03-02T13:02:55.296443861Z" level=info msg="StopPodSandbox for \"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\"" Mar 2 13:02:55.496768 containerd[1464]: 2026-03-02 13:02:55.403 [WARNING][5354] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" WorkloadEndpoint="localhost-k8s-whisker--7b789df455--9t8cg-eth0" Mar 2 13:02:55.496768 containerd[1464]: 2026-03-02 13:02:55.403 [INFO][5354] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Mar 2 13:02:55.496768 containerd[1464]: 2026-03-02 13:02:55.403 [INFO][5354] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" iface="eth0" netns="" Mar 2 13:02:55.496768 containerd[1464]: 2026-03-02 13:02:55.403 [INFO][5354] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Mar 2 13:02:55.496768 containerd[1464]: 2026-03-02 13:02:55.403 [INFO][5354] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Mar 2 13:02:55.496768 containerd[1464]: 2026-03-02 13:02:55.479 [INFO][5362] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" HandleID="k8s-pod-network.f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Workload="localhost-k8s-whisker--7b789df455--9t8cg-eth0" Mar 2 13:02:55.496768 containerd[1464]: 2026-03-02 13:02:55.479 [INFO][5362] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:55.496768 containerd[1464]: 2026-03-02 13:02:55.479 [INFO][5362] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:55.496768 containerd[1464]: 2026-03-02 13:02:55.487 [WARNING][5362] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" HandleID="k8s-pod-network.f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Workload="localhost-k8s-whisker--7b789df455--9t8cg-eth0" Mar 2 13:02:55.496768 containerd[1464]: 2026-03-02 13:02:55.487 [INFO][5362] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" HandleID="k8s-pod-network.f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Workload="localhost-k8s-whisker--7b789df455--9t8cg-eth0" Mar 2 13:02:55.496768 containerd[1464]: 2026-03-02 13:02:55.490 [INFO][5362] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:55.496768 containerd[1464]: 2026-03-02 13:02:55.493 [INFO][5354] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Mar 2 13:02:55.497167 containerd[1464]: time="2026-03-02T13:02:55.496795955Z" level=info msg="TearDown network for sandbox \"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\" successfully" Mar 2 13:02:55.497167 containerd[1464]: time="2026-03-02T13:02:55.496821533Z" level=info msg="StopPodSandbox for \"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\" returns successfully" Mar 2 13:02:55.552329 containerd[1464]: time="2026-03-02T13:02:55.552130143Z" level=info msg="RemovePodSandbox for \"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\"" Mar 2 13:02:55.556344 containerd[1464]: time="2026-03-02T13:02:55.556261667Z" level=info msg="Forcibly stopping sandbox \"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\"" Mar 2 13:02:55.670139 containerd[1464]: 2026-03-02 13:02:55.609 [WARNING][5380] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" WorkloadEndpoint="localhost-k8s-whisker--7b789df455--9t8cg-eth0" Mar 2 13:02:55.670139 containerd[1464]: 2026-03-02 13:02:55.609 [INFO][5380] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Mar 2 13:02:55.670139 containerd[1464]: 2026-03-02 13:02:55.609 [INFO][5380] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" iface="eth0" netns="" Mar 2 13:02:55.670139 containerd[1464]: 2026-03-02 13:02:55.609 [INFO][5380] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Mar 2 13:02:55.670139 containerd[1464]: 2026-03-02 13:02:55.609 [INFO][5380] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Mar 2 13:02:55.670139 containerd[1464]: 2026-03-02 13:02:55.650 [INFO][5388] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" HandleID="k8s-pod-network.f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Workload="localhost-k8s-whisker--7b789df455--9t8cg-eth0" Mar 2 13:02:55.670139 containerd[1464]: 2026-03-02 13:02:55.650 [INFO][5388] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 2 13:02:55.670139 containerd[1464]: 2026-03-02 13:02:55.650 [INFO][5388] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:55.670139 containerd[1464]: 2026-03-02 13:02:55.659 [WARNING][5388] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" HandleID="k8s-pod-network.f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Workload="localhost-k8s-whisker--7b789df455--9t8cg-eth0" Mar 2 13:02:55.670139 containerd[1464]: 2026-03-02 13:02:55.660 [INFO][5388] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" HandleID="k8s-pod-network.f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Workload="localhost-k8s-whisker--7b789df455--9t8cg-eth0" Mar 2 13:02:55.670139 containerd[1464]: 2026-03-02 13:02:55.662 [INFO][5388] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:55.670139 containerd[1464]: 2026-03-02 13:02:55.666 [INFO][5380] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7" Mar 2 13:02:55.670709 containerd[1464]: time="2026-03-02T13:02:55.670188544Z" level=info msg="TearDown network for sandbox \"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\" successfully" Mar 2 13:02:55.693632 containerd[1464]: time="2026-03-02T13:02:55.691653927Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 13:02:55.696235 containerd[1464]: time="2026-03-02T13:02:55.695440216Z" level=info msg="RemovePodSandbox \"f94758a89f7d335dfdd9dbe9a0d6964572f77463f88c9aa2e97c5aa9fea560f7\" returns successfully" Mar 2 13:02:55.708554 containerd[1464]: time="2026-03-02T13:02:55.708469981Z" level=info msg="StopPodSandbox for \"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\"" Mar 2 13:02:55.861504 containerd[1464]: 2026-03-02 13:02:55.798 [WARNING][5406] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0", GenerateName:"goldmane-54d7f6b6d6-", Namespace:"calico-system", SelfLink:"", UID:"62dbc2a5-2b7d-4fc7-af17-38dcb97940da", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d7f6b6d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6", Pod:"goldmane-54d7f6b6d6-5zs8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96a03c6b168", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:55.861504 containerd[1464]: 2026-03-02 13:02:55.799 [INFO][5406] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Mar 2 13:02:55.861504 containerd[1464]: 2026-03-02 13:02:55.799 [INFO][5406] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" iface="eth0" netns="" Mar 2 13:02:55.861504 containerd[1464]: 2026-03-02 13:02:55.799 [INFO][5406] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Mar 2 13:02:55.861504 containerd[1464]: 2026-03-02 13:02:55.799 [INFO][5406] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Mar 2 13:02:55.861504 containerd[1464]: 2026-03-02 13:02:55.845 [INFO][5414] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" HandleID="k8s-pod-network.8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Workload="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" Mar 2 13:02:55.861504 containerd[1464]: 2026-03-02 13:02:55.845 [INFO][5414] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:55.861504 containerd[1464]: 2026-03-02 13:02:55.845 [INFO][5414] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:55.861504 containerd[1464]: 2026-03-02 13:02:55.852 [WARNING][5414] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" HandleID="k8s-pod-network.8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Workload="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" Mar 2 13:02:55.861504 containerd[1464]: 2026-03-02 13:02:55.852 [INFO][5414] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" HandleID="k8s-pod-network.8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Workload="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" Mar 2 13:02:55.861504 containerd[1464]: 2026-03-02 13:02:55.855 [INFO][5414] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:55.861504 containerd[1464]: 2026-03-02 13:02:55.858 [INFO][5406] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Mar 2 13:02:55.862492 containerd[1464]: time="2026-03-02T13:02:55.861554025Z" level=info msg="TearDown network for sandbox \"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\" successfully" Mar 2 13:02:55.862492 containerd[1464]: time="2026-03-02T13:02:55.861640838Z" level=info msg="StopPodSandbox for \"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\" returns successfully" Mar 2 13:02:55.862707 containerd[1464]: time="2026-03-02T13:02:55.862556205Z" level=info msg="RemovePodSandbox for \"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\"" Mar 2 13:02:55.862707 containerd[1464]: time="2026-03-02T13:02:55.862680476Z" level=info msg="Forcibly stopping sandbox \"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\"" Mar 2 13:02:56.032004 containerd[1464]: 2026-03-02 13:02:55.954 [WARNING][5431] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0", GenerateName:"goldmane-54d7f6b6d6-", Namespace:"calico-system", SelfLink:"", UID:"62dbc2a5-2b7d-4fc7-af17-38dcb97940da", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d7f6b6d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06d4ff8082d1fa4cc13a541ae2e681458c0f91370cbb0794da1800945beac2e6", Pod:"goldmane-54d7f6b6d6-5zs8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96a03c6b168", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:56.032004 containerd[1464]: 2026-03-02 13:02:55.954 [INFO][5431] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Mar 2 13:02:56.032004 containerd[1464]: 2026-03-02 13:02:55.954 [INFO][5431] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" iface="eth0" netns="" Mar 2 13:02:56.032004 containerd[1464]: 2026-03-02 13:02:55.954 [INFO][5431] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Mar 2 13:02:56.032004 containerd[1464]: 2026-03-02 13:02:55.955 [INFO][5431] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Mar 2 13:02:56.032004 containerd[1464]: 2026-03-02 13:02:56.002 [INFO][5440] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" HandleID="k8s-pod-network.8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Workload="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" Mar 2 13:02:56.032004 containerd[1464]: 2026-03-02 13:02:56.002 [INFO][5440] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:56.032004 containerd[1464]: 2026-03-02 13:02:56.002 [INFO][5440] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:56.032004 containerd[1464]: 2026-03-02 13:02:56.011 [WARNING][5440] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" HandleID="k8s-pod-network.8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Workload="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" Mar 2 13:02:56.032004 containerd[1464]: 2026-03-02 13:02:56.011 [INFO][5440] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" HandleID="k8s-pod-network.8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Workload="localhost-k8s-goldmane--54d7f6b6d6--5zs8p-eth0" Mar 2 13:02:56.032004 containerd[1464]: 2026-03-02 13:02:56.017 [INFO][5440] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:56.032004 containerd[1464]: 2026-03-02 13:02:56.029 [INFO][5431] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c" Mar 2 13:02:56.032844 containerd[1464]: time="2026-03-02T13:02:56.032263507Z" level=info msg="TearDown network for sandbox \"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\" successfully" Mar 2 13:02:56.037217 containerd[1464]: time="2026-03-02T13:02:56.037133261Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 13:02:56.037291 containerd[1464]: time="2026-03-02T13:02:56.037244729Z" level=info msg="RemovePodSandbox \"8baa96666c5b3894b8ae11968156ba70ffe5edbab54e1cfedb64c4e2ce934a2c\" returns successfully" Mar 2 13:02:56.038051 containerd[1464]: time="2026-03-02T13:02:56.038016466Z" level=info msg="StopPodSandbox for \"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\"" Mar 2 13:02:56.143189 containerd[1464]: 2026-03-02 13:02:56.087 [WARNING][5457] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0", GenerateName:"calico-kube-controllers-7f6b574974-", Namespace:"calico-system", SelfLink:"", UID:"10ddd28d-60cc-4ef0-9da9-c597c406cf38", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f6b574974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245", Pod:"calico-kube-controllers-7f6b574974-gkgnz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3d5c75bbfed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:56.143189 containerd[1464]: 2026-03-02 13:02:56.088 [INFO][5457] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Mar 2 13:02:56.143189 containerd[1464]: 2026-03-02 13:02:56.088 [INFO][5457] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" iface="eth0" netns="" Mar 2 13:02:56.143189 containerd[1464]: 2026-03-02 13:02:56.088 [INFO][5457] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Mar 2 13:02:56.143189 containerd[1464]: 2026-03-02 13:02:56.088 [INFO][5457] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Mar 2 13:02:56.143189 containerd[1464]: 2026-03-02 13:02:56.125 [INFO][5465] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" HandleID="k8s-pod-network.928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Workload="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" Mar 2 13:02:56.143189 containerd[1464]: 2026-03-02 13:02:56.126 [INFO][5465] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:56.143189 containerd[1464]: 2026-03-02 13:02:56.126 [INFO][5465] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:56.143189 containerd[1464]: 2026-03-02 13:02:56.134 [WARNING][5465] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" HandleID="k8s-pod-network.928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Workload="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" Mar 2 13:02:56.143189 containerd[1464]: 2026-03-02 13:02:56.134 [INFO][5465] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" HandleID="k8s-pod-network.928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Workload="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" Mar 2 13:02:56.143189 containerd[1464]: 2026-03-02 13:02:56.137 [INFO][5465] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:56.143189 containerd[1464]: 2026-03-02 13:02:56.140 [INFO][5457] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Mar 2 13:02:56.143189 containerd[1464]: time="2026-03-02T13:02:56.143142462Z" level=info msg="TearDown network for sandbox \"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\" successfully" Mar 2 13:02:56.143189 containerd[1464]: time="2026-03-02T13:02:56.143178560Z" level=info msg="StopPodSandbox for \"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\" returns successfully" Mar 2 13:02:56.144107 containerd[1464]: time="2026-03-02T13:02:56.144045634Z" level=info msg="RemovePodSandbox for \"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\"" Mar 2 13:02:56.144143 containerd[1464]: time="2026-03-02T13:02:56.144109322Z" level=info msg="Forcibly stopping sandbox \"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\"" Mar 2 13:02:56.282904 containerd[1464]: 2026-03-02 13:02:56.201 [WARNING][5484] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0", GenerateName:"calico-kube-controllers-7f6b574974-", Namespace:"calico-system", SelfLink:"", UID:"10ddd28d-60cc-4ef0-9da9-c597c406cf38", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f6b574974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4311ba9198957a354fce6c3fdc428afecc95673d86a2d9e743b802113b59e245", Pod:"calico-kube-controllers-7f6b574974-gkgnz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3d5c75bbfed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:56.282904 containerd[1464]: 2026-03-02 13:02:56.202 [INFO][5484] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Mar 2 13:02:56.282904 containerd[1464]: 2026-03-02 13:02:56.202 [INFO][5484] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" iface="eth0" netns="" Mar 2 13:02:56.282904 containerd[1464]: 2026-03-02 13:02:56.202 [INFO][5484] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Mar 2 13:02:56.282904 containerd[1464]: 2026-03-02 13:02:56.202 [INFO][5484] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Mar 2 13:02:56.282904 containerd[1464]: 2026-03-02 13:02:56.263 [INFO][5492] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" HandleID="k8s-pod-network.928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Workload="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" Mar 2 13:02:56.282904 containerd[1464]: 2026-03-02 13:02:56.263 [INFO][5492] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:56.282904 containerd[1464]: 2026-03-02 13:02:56.263 [INFO][5492] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:56.282904 containerd[1464]: 2026-03-02 13:02:56.274 [WARNING][5492] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" HandleID="k8s-pod-network.928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Workload="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" Mar 2 13:02:56.282904 containerd[1464]: 2026-03-02 13:02:56.274 [INFO][5492] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" HandleID="k8s-pod-network.928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Workload="localhost-k8s-calico--kube--controllers--7f6b574974--gkgnz-eth0" Mar 2 13:02:56.282904 containerd[1464]: 2026-03-02 13:02:56.277 [INFO][5492] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:56.282904 containerd[1464]: 2026-03-02 13:02:56.280 [INFO][5484] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08" Mar 2 13:02:56.282904 containerd[1464]: time="2026-03-02T13:02:56.282901121Z" level=info msg="TearDown network for sandbox \"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\" successfully" Mar 2 13:02:56.297453 containerd[1464]: time="2026-03-02T13:02:56.297332809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 13:02:56.297453 containerd[1464]: time="2026-03-02T13:02:56.297438097Z" level=info msg="RemovePodSandbox \"928ba544e32ed0425f307da0692c9e035a667e2aaa54d61256ebc94c31cc2c08\" returns successfully" Mar 2 13:02:56.298231 containerd[1464]: time="2026-03-02T13:02:56.298211658Z" level=info msg="StopPodSandbox for \"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\"" Mar 2 13:02:56.406632 containerd[1464]: 2026-03-02 13:02:56.356 [WARNING][5510] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wk98t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fcbd4ee9-3b15-40c9-804b-4103bd692429", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8", Pod:"coredns-66bc5c9577-wk98t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic24078ab4b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:56.406632 containerd[1464]: 2026-03-02 13:02:56.356 [INFO][5510] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Mar 2 13:02:56.406632 containerd[1464]: 2026-03-02 13:02:56.356 [INFO][5510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" iface="eth0" netns="" Mar 2 13:02:56.406632 containerd[1464]: 2026-03-02 13:02:56.356 [INFO][5510] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Mar 2 13:02:56.406632 containerd[1464]: 2026-03-02 13:02:56.356 [INFO][5510] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Mar 2 13:02:56.406632 containerd[1464]: 2026-03-02 13:02:56.388 [INFO][5519] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" HandleID="k8s-pod-network.b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Workload="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" Mar 2 13:02:56.406632 containerd[1464]: 2026-03-02 13:02:56.388 [INFO][5519] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:56.406632 containerd[1464]: 2026-03-02 13:02:56.388 [INFO][5519] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:56.406632 containerd[1464]: 2026-03-02 13:02:56.398 [WARNING][5519] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" HandleID="k8s-pod-network.b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Workload="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" Mar 2 13:02:56.406632 containerd[1464]: 2026-03-02 13:02:56.399 [INFO][5519] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" HandleID="k8s-pod-network.b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Workload="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" Mar 2 13:02:56.406632 containerd[1464]: 2026-03-02 13:02:56.401 [INFO][5519] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:56.406632 containerd[1464]: 2026-03-02 13:02:56.403 [INFO][5510] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Mar 2 13:02:56.407120 containerd[1464]: time="2026-03-02T13:02:56.406632071Z" level=info msg="TearDown network for sandbox \"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\" successfully" Mar 2 13:02:56.407120 containerd[1464]: time="2026-03-02T13:02:56.406665984Z" level=info msg="StopPodSandbox for \"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\" returns successfully" Mar 2 13:02:56.407550 containerd[1464]: time="2026-03-02T13:02:56.407426657Z" level=info msg="RemovePodSandbox for \"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\"" Mar 2 13:02:56.407550 containerd[1464]: time="2026-03-02T13:02:56.407542433Z" level=info msg="Forcibly stopping sandbox \"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\"" Mar 2 13:02:56.522292 containerd[1464]: 2026-03-02 13:02:56.464 [WARNING][5538] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wk98t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fcbd4ee9-3b15-40c9-804b-4103bd692429", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"694045cce4511a09b3ea90b9633e2451f6a4d8d6c098ea901d15fd3c9ac3b5a8", Pod:"coredns-66bc5c9577-wk98t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic24078ab4b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:56.522292 containerd[1464]: 2026-03-02 13:02:56.465 [INFO][5538] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Mar 2 13:02:56.522292 containerd[1464]: 2026-03-02 13:02:56.465 [INFO][5538] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" iface="eth0" netns="" Mar 2 13:02:56.522292 containerd[1464]: 2026-03-02 13:02:56.465 [INFO][5538] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Mar 2 13:02:56.522292 containerd[1464]: 2026-03-02 13:02:56.465 [INFO][5538] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Mar 2 13:02:56.522292 containerd[1464]: 2026-03-02 13:02:56.499 [INFO][5546] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" HandleID="k8s-pod-network.b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Workload="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" Mar 2 13:02:56.522292 containerd[1464]: 2026-03-02 13:02:56.499 [INFO][5546] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:56.522292 containerd[1464]: 2026-03-02 13:02:56.499 [INFO][5546] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:56.522292 containerd[1464]: 2026-03-02 13:02:56.508 [WARNING][5546] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" HandleID="k8s-pod-network.b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Workload="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" Mar 2 13:02:56.522292 containerd[1464]: 2026-03-02 13:02:56.508 [INFO][5546] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" HandleID="k8s-pod-network.b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Workload="localhost-k8s-coredns--66bc5c9577--wk98t-eth0" Mar 2 13:02:56.522292 containerd[1464]: 2026-03-02 13:02:56.510 [INFO][5546] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:56.522292 containerd[1464]: 2026-03-02 13:02:56.514 [INFO][5538] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634" Mar 2 13:02:56.522292 containerd[1464]: time="2026-03-02T13:02:56.522290724Z" level=info msg="TearDown network for sandbox \"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\" successfully" Mar 2 13:02:56.528950 containerd[1464]: time="2026-03-02T13:02:56.528872726Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 13:02:56.529046 containerd[1464]: time="2026-03-02T13:02:56.529002719Z" level=info msg="RemovePodSandbox \"b2f2cf0109ec663595bcdb979ba34ac9ccdc4bb0244853a1f14df6e962464634\" returns successfully" Mar 2 13:02:56.529840 containerd[1464]: time="2026-03-02T13:02:56.529801618Z" level=info msg="StopPodSandbox for \"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\"" Mar 2 13:02:56.641647 containerd[1464]: 2026-03-02 13:02:56.579 [WARNING][5565] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0", GenerateName:"calico-apiserver-866b8f6974-", Namespace:"calico-system", SelfLink:"", UID:"949cbf83-187d-4070-b9b2-980988cef53d", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866b8f6974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac", Pod:"calico-apiserver-866b8f6974-h9zgx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3317be63288", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:56.641647 containerd[1464]: 2026-03-02 13:02:56.580 [INFO][5565] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Mar 2 13:02:56.641647 containerd[1464]: 2026-03-02 13:02:56.580 [INFO][5565] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" iface="eth0" netns="" Mar 2 13:02:56.641647 containerd[1464]: 2026-03-02 13:02:56.580 [INFO][5565] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Mar 2 13:02:56.641647 containerd[1464]: 2026-03-02 13:02:56.580 [INFO][5565] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Mar 2 13:02:56.641647 containerd[1464]: 2026-03-02 13:02:56.623 [INFO][5573] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" HandleID="k8s-pod-network.ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Workload="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" Mar 2 13:02:56.641647 containerd[1464]: 2026-03-02 13:02:56.623 [INFO][5573] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:56.641647 containerd[1464]: 2026-03-02 13:02:56.623 [INFO][5573] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:56.641647 containerd[1464]: 2026-03-02 13:02:56.631 [WARNING][5573] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" HandleID="k8s-pod-network.ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Workload="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" Mar 2 13:02:56.641647 containerd[1464]: 2026-03-02 13:02:56.631 [INFO][5573] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" HandleID="k8s-pod-network.ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Workload="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" Mar 2 13:02:56.641647 containerd[1464]: 2026-03-02 13:02:56.635 [INFO][5573] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:56.641647 containerd[1464]: 2026-03-02 13:02:56.638 [INFO][5565] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Mar 2 13:02:56.642241 containerd[1464]: time="2026-03-02T13:02:56.641690322Z" level=info msg="TearDown network for sandbox \"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\" successfully" Mar 2 13:02:56.642241 containerd[1464]: time="2026-03-02T13:02:56.641717413Z" level=info msg="StopPodSandbox for \"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\" returns successfully" Mar 2 13:02:56.642523 containerd[1464]: time="2026-03-02T13:02:56.642442870Z" level=info msg="RemovePodSandbox for \"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\"" Mar 2 13:02:56.642523 containerd[1464]: time="2026-03-02T13:02:56.642490639Z" level=info msg="Forcibly stopping sandbox \"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\"" Mar 2 13:02:56.778208 containerd[1464]: 2026-03-02 13:02:56.705 [WARNING][5592] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0", GenerateName:"calico-apiserver-866b8f6974-", Namespace:"calico-system", SelfLink:"", UID:"949cbf83-187d-4070-b9b2-980988cef53d", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866b8f6974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f22de85836669254c5baa968f667bcbf6b1761ad9ea353c5867fefc4787e6ac", Pod:"calico-apiserver-866b8f6974-h9zgx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3317be63288", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:56.778208 containerd[1464]: 2026-03-02 13:02:56.705 [INFO][5592] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Mar 2 13:02:56.778208 containerd[1464]: 2026-03-02 13:02:56.705 [INFO][5592] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" iface="eth0" netns="" Mar 2 13:02:56.778208 containerd[1464]: 2026-03-02 13:02:56.705 [INFO][5592] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Mar 2 13:02:56.778208 containerd[1464]: 2026-03-02 13:02:56.705 [INFO][5592] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Mar 2 13:02:56.778208 containerd[1464]: 2026-03-02 13:02:56.755 [INFO][5601] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" HandleID="k8s-pod-network.ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Workload="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" Mar 2 13:02:56.778208 containerd[1464]: 2026-03-02 13:02:56.755 [INFO][5601] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:56.778208 containerd[1464]: 2026-03-02 13:02:56.755 [INFO][5601] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:56.778208 containerd[1464]: 2026-03-02 13:02:56.767 [WARNING][5601] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" HandleID="k8s-pod-network.ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Workload="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" Mar 2 13:02:56.778208 containerd[1464]: 2026-03-02 13:02:56.768 [INFO][5601] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" HandleID="k8s-pod-network.ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Workload="localhost-k8s-calico--apiserver--866b8f6974--h9zgx-eth0" Mar 2 13:02:56.778208 containerd[1464]: 2026-03-02 13:02:56.770 [INFO][5601] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:56.778208 containerd[1464]: 2026-03-02 13:02:56.774 [INFO][5592] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189" Mar 2 13:02:56.778208 containerd[1464]: time="2026-03-02T13:02:56.778169110Z" level=info msg="TearDown network for sandbox \"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\" successfully" Mar 2 13:02:56.785088 containerd[1464]: time="2026-03-02T13:02:56.784999696Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 13:02:56.785170 containerd[1464]: time="2026-03-02T13:02:56.785126453Z" level=info msg="RemovePodSandbox \"ac64ccc2757b235b7600e6c742dd0a7b0a05d6961eb4d086a46ade2fbb11d189\" returns successfully" Mar 2 13:02:56.786054 containerd[1464]: time="2026-03-02T13:02:56.785958962Z" level=info msg="StopPodSandbox for \"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\"" Mar 2 13:02:56.932641 containerd[1464]: 2026-03-02 13:02:56.865 [WARNING][5619] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wtgch-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d871cd79-c660-4da5-b55c-eb8af06ee43e", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289", Pod:"coredns-66bc5c9577-wtgch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib58ac6f9865", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:56.932641 containerd[1464]: 2026-03-02 13:02:56.865 [INFO][5619] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Mar 2 13:02:56.932641 containerd[1464]: 2026-03-02 13:02:56.866 [INFO][5619] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" iface="eth0" netns="" Mar 2 13:02:56.932641 containerd[1464]: 2026-03-02 13:02:56.866 [INFO][5619] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Mar 2 13:02:56.932641 containerd[1464]: 2026-03-02 13:02:56.866 [INFO][5619] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Mar 2 13:02:56.932641 containerd[1464]: 2026-03-02 13:02:56.909 [INFO][5628] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" HandleID="k8s-pod-network.5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Workload="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" Mar 2 13:02:56.932641 containerd[1464]: 2026-03-02 13:02:56.909 [INFO][5628] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:56.932641 containerd[1464]: 2026-03-02 13:02:56.910 [INFO][5628] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:56.932641 containerd[1464]: 2026-03-02 13:02:56.922 [WARNING][5628] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" HandleID="k8s-pod-network.5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Workload="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" Mar 2 13:02:56.932641 containerd[1464]: 2026-03-02 13:02:56.923 [INFO][5628] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" HandleID="k8s-pod-network.5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Workload="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" Mar 2 13:02:56.932641 containerd[1464]: 2026-03-02 13:02:56.927 [INFO][5628] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:56.932641 containerd[1464]: 2026-03-02 13:02:56.929 [INFO][5619] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Mar 2 13:02:56.933135 containerd[1464]: time="2026-03-02T13:02:56.932688719Z" level=info msg="TearDown network for sandbox \"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\" successfully" Mar 2 13:02:56.933135 containerd[1464]: time="2026-03-02T13:02:56.932717523Z" level=info msg="StopPodSandbox for \"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\" returns successfully" Mar 2 13:02:56.933475 containerd[1464]: time="2026-03-02T13:02:56.933442958Z" level=info msg="RemovePodSandbox for \"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\"" Mar 2 13:02:56.933816 containerd[1464]: time="2026-03-02T13:02:56.933481029Z" level=info msg="Forcibly stopping sandbox \"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\"" Mar 2 13:02:57.063415 containerd[1464]: 2026-03-02 13:02:56.989 [WARNING][5646] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wtgch-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d871cd79-c660-4da5-b55c-eb8af06ee43e", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4920ba51c176cc3111a602a5248b92386749be3a0ef55d1f73c0c837d1e47289", Pod:"coredns-66bc5c9577-wtgch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib58ac6f9865", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:57.063415 containerd[1464]: 2026-03-02 13:02:56.989 [INFO][5646] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Mar 2 13:02:57.063415 containerd[1464]: 2026-03-02 13:02:56.989 [INFO][5646] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" iface="eth0" netns="" Mar 2 13:02:57.063415 containerd[1464]: 2026-03-02 13:02:56.989 [INFO][5646] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Mar 2 13:02:57.063415 containerd[1464]: 2026-03-02 13:02:56.989 [INFO][5646] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Mar 2 13:02:57.063415 containerd[1464]: 2026-03-02 13:02:57.045 [INFO][5655] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" HandleID="k8s-pod-network.5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Workload="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" Mar 2 13:02:57.063415 containerd[1464]: 2026-03-02 13:02:57.045 [INFO][5655] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:57.063415 containerd[1464]: 2026-03-02 13:02:57.045 [INFO][5655] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:57.063415 containerd[1464]: 2026-03-02 13:02:57.053 [WARNING][5655] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" HandleID="k8s-pod-network.5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Workload="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" Mar 2 13:02:57.063415 containerd[1464]: 2026-03-02 13:02:57.053 [INFO][5655] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" HandleID="k8s-pod-network.5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Workload="localhost-k8s-coredns--66bc5c9577--wtgch-eth0" Mar 2 13:02:57.063415 containerd[1464]: 2026-03-02 13:02:57.055 [INFO][5655] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:57.063415 containerd[1464]: 2026-03-02 13:02:57.059 [INFO][5646] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645" Mar 2 13:02:57.063415 containerd[1464]: time="2026-03-02T13:02:57.063319996Z" level=info msg="TearDown network for sandbox \"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\" successfully" Mar 2 13:02:57.070062 containerd[1464]: time="2026-03-02T13:02:57.069988990Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 13:02:57.070231 containerd[1464]: time="2026-03-02T13:02:57.070110898Z" level=info msg="RemovePodSandbox \"5bc0de8090339a36564db8a16ef2ab1c56aaf94d54c55af5d24299ff1682c645\" returns successfully" Mar 2 13:02:57.070996 containerd[1464]: time="2026-03-02T13:02:57.070959167Z" level=info msg="StopPodSandbox for \"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\"" Mar 2 13:02:57.209787 containerd[1464]: 2026-03-02 13:02:57.140 [WARNING][5674] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0", GenerateName:"calico-apiserver-866b8f6974-", Namespace:"calico-system", SelfLink:"", UID:"ec9dc82b-9518-4a49-8110-199c802cf620", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866b8f6974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82", Pod:"calico-apiserver-866b8f6974-dbn4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicd024e21758", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:57.209787 containerd[1464]: 2026-03-02 13:02:57.140 [INFO][5674] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Mar 2 13:02:57.209787 containerd[1464]: 2026-03-02 13:02:57.140 [INFO][5674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" iface="eth0" netns="" Mar 2 13:02:57.209787 containerd[1464]: 2026-03-02 13:02:57.140 [INFO][5674] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Mar 2 13:02:57.209787 containerd[1464]: 2026-03-02 13:02:57.141 [INFO][5674] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Mar 2 13:02:57.209787 containerd[1464]: 2026-03-02 13:02:57.180 [INFO][5683] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" HandleID="k8s-pod-network.c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Workload="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" Mar 2 13:02:57.209787 containerd[1464]: 2026-03-02 13:02:57.180 [INFO][5683] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:57.209787 containerd[1464]: 2026-03-02 13:02:57.180 [INFO][5683] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:57.209787 containerd[1464]: 2026-03-02 13:02:57.195 [WARNING][5683] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" HandleID="k8s-pod-network.c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Workload="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" Mar 2 13:02:57.209787 containerd[1464]: 2026-03-02 13:02:57.196 [INFO][5683] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" HandleID="k8s-pod-network.c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Workload="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" Mar 2 13:02:57.209787 containerd[1464]: 2026-03-02 13:02:57.200 [INFO][5683] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:57.209787 containerd[1464]: 2026-03-02 13:02:57.205 [INFO][5674] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Mar 2 13:02:57.210299 containerd[1464]: time="2026-03-02T13:02:57.210023189Z" level=info msg="TearDown network for sandbox \"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\" successfully" Mar 2 13:02:57.210299 containerd[1464]: time="2026-03-02T13:02:57.210181574Z" level=info msg="StopPodSandbox for \"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\" returns successfully" Mar 2 13:02:57.213050 containerd[1464]: time="2026-03-02T13:02:57.211533776Z" level=info msg="RemovePodSandbox for \"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\"" Mar 2 13:02:57.213050 containerd[1464]: time="2026-03-02T13:02:57.212402355Z" level=info msg="Forcibly stopping sandbox \"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\"" Mar 2 13:02:57.373493 containerd[1464]: 2026-03-02 13:02:57.291 [WARNING][5700] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0", GenerateName:"calico-apiserver-866b8f6974-", Namespace:"calico-system", SelfLink:"", UID:"ec9dc82b-9518-4a49-8110-199c802cf620", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 2, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866b8f6974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f768a155cd877cedb038662fac705d950050d0ae7bb5c2defc6d6dc6f6e2a82", Pod:"calico-apiserver-866b8f6974-dbn4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calicd024e21758", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:02:57.373493 containerd[1464]: 2026-03-02 13:02:57.292 [INFO][5700] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Mar 2 13:02:57.373493 containerd[1464]: 2026-03-02 13:02:57.292 [INFO][5700] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" iface="eth0" netns="" Mar 2 13:02:57.373493 containerd[1464]: 2026-03-02 13:02:57.292 [INFO][5700] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Mar 2 13:02:57.373493 containerd[1464]: 2026-03-02 13:02:57.293 [INFO][5700] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Mar 2 13:02:57.373493 containerd[1464]: 2026-03-02 13:02:57.352 [INFO][5708] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" HandleID="k8s-pod-network.c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Workload="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" Mar 2 13:02:57.373493 containerd[1464]: 2026-03-02 13:02:57.353 [INFO][5708] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:02:57.373493 containerd[1464]: 2026-03-02 13:02:57.354 [INFO][5708] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:02:57.373493 containerd[1464]: 2026-03-02 13:02:57.364 [WARNING][5708] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" HandleID="k8s-pod-network.c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Workload="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" Mar 2 13:02:57.373493 containerd[1464]: 2026-03-02 13:02:57.365 [INFO][5708] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" HandleID="k8s-pod-network.c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Workload="localhost-k8s-calico--apiserver--866b8f6974--dbn4b-eth0" Mar 2 13:02:57.373493 containerd[1464]: 2026-03-02 13:02:57.367 [INFO][5708] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:02:57.373493 containerd[1464]: 2026-03-02 13:02:57.370 [INFO][5700] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7" Mar 2 13:02:57.374372 containerd[1464]: time="2026-03-02T13:02:57.373500898Z" level=info msg="TearDown network for sandbox \"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\" successfully" Mar 2 13:02:57.379439 containerd[1464]: time="2026-03-02T13:02:57.379363803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 13:02:57.379884 containerd[1464]: time="2026-03-02T13:02:57.379844488Z" level=info msg="RemovePodSandbox \"c98c480fe7b2b8a5172bc3cff37aad92395d9abb706e3d45ee0de266b08317c7\" returns successfully" Mar 2 13:03:01.666143 systemd[1]: Started sshd@7-10.0.0.51:22-10.0.0.1:49638.service - OpenSSH per-connection server daemon (10.0.0.1:49638). Mar 2 13:03:01.760740 sshd[5719]: Accepted publickey for core from 10.0.0.1 port 49638 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:03:01.766553 sshd[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:03:01.782621 systemd-logind[1447]: New session 8 of user core. Mar 2 13:03:01.789788 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 2 13:03:02.284805 sshd[5719]: pam_unix(sshd:session): session closed for user core Mar 2 13:03:02.290037 systemd[1]: sshd@7-10.0.0.51:22-10.0.0.1:49638.service: Deactivated successfully. Mar 2 13:03:02.292745 systemd[1]: session-8.scope: Deactivated successfully. Mar 2 13:03:02.293973 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Mar 2 13:03:02.295966 systemd-logind[1447]: Removed session 8. Mar 2 13:03:07.313804 systemd[1]: Started sshd@8-10.0.0.51:22-10.0.0.1:46170.service - OpenSSH per-connection server daemon (10.0.0.1:46170). Mar 2 13:03:07.363139 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 46170 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:03:07.365508 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:03:07.371999 systemd-logind[1447]: New session 9 of user core. Mar 2 13:03:07.381866 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 2 13:03:07.548962 sshd[5765]: pam_unix(sshd:session): session closed for user core Mar 2 13:03:07.554383 systemd[1]: sshd@8-10.0.0.51:22-10.0.0.1:46170.service: Deactivated successfully. Mar 2 13:03:07.557458 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 2 13:03:07.558653 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Mar 2 13:03:07.560504 systemd-logind[1447]: Removed session 9. Mar 2 13:03:10.342712 kubelet[2583]: E0302 13:03:10.342605 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:03:12.566542 systemd[1]: Started sshd@9-10.0.0.51:22-10.0.0.1:54646.service - OpenSSH per-connection server daemon (10.0.0.1:54646). Mar 2 13:03:12.664284 sshd[5826]: Accepted publickey for core from 10.0.0.1 port 54646 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:03:12.667151 sshd[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:03:12.673528 systemd-logind[1447]: New session 10 of user core. Mar 2 13:03:12.690996 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 2 13:03:12.861643 sshd[5826]: pam_unix(sshd:session): session closed for user core Mar 2 13:03:12.867389 systemd[1]: sshd@9-10.0.0.51:22-10.0.0.1:54646.service: Deactivated successfully. Mar 2 13:03:12.869614 systemd[1]: session-10.scope: Deactivated successfully. Mar 2 13:03:12.870758 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Mar 2 13:03:12.873142 systemd-logind[1447]: Removed session 10. Mar 2 13:03:16.908141 systemd[1]: run-containerd-runc-k8s.io-05fdbd6fac82c5e6855a7d3903ab65bc95915c9ca5f3075b2157f548b582d73d-runc.4hAZo5.mount: Deactivated successfully. Mar 2 13:03:17.340664 kubelet[2583]: E0302 13:03:17.340487 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:03:17.880381 systemd[1]: Started sshd@10-10.0.0.51:22-10.0.0.1:54652.service - OpenSSH per-connection server daemon (10.0.0.1:54652). Mar 2 13:03:17.948989 sshd[5885]: Accepted publickey for core from 10.0.0.1 port 54652 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:03:17.951437 sshd[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:03:17.959660 systemd-logind[1447]: New session 11 of user core. Mar 2 13:03:17.965891 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 2 13:03:18.162919 sshd[5885]: pam_unix(sshd:session): session closed for user core Mar 2 13:03:18.167075 systemd[1]: sshd@10-10.0.0.51:22-10.0.0.1:54652.service: Deactivated successfully. Mar 2 13:03:18.169695 systemd[1]: session-11.scope: Deactivated successfully. Mar 2 13:03:18.172233 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit. Mar 2 13:03:18.173713 systemd-logind[1447]: Removed session 11. Mar 2 13:03:18.339850 kubelet[2583]: E0302 13:03:18.339777 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:03:23.180174 systemd[1]: Started sshd@11-10.0.0.51:22-10.0.0.1:42072.service - OpenSSH per-connection server daemon (10.0.0.1:42072). Mar 2 13:03:23.255245 sshd[5925]: Accepted publickey for core from 10.0.0.1 port 42072 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:03:23.257617 sshd[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:03:23.264125 systemd-logind[1447]: New session 12 of user core. 
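The recurring kubelet dns.go:154 "Nameserver limits exceeded" events (first seen above at 13:03:10) point at host DNS configuration rather than a pod failure: the node's resolver configuration lists more nameservers than kubelet allows when building a pod's resolv.conf (three, matching glibc's MAXNS), so the extras are dropped and the line actually applied is logged (here 1.1.1.1 1.0.0.1 8.8.8.8). A small sketch of that capping rule, assuming a resolv.conf-style input; this illustrates the behavior and is not kubelet's code:

package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic resolver limit (glibc MAXNS = 3)
// that kubelet enforces for pod DNS configuration.
const maxNameservers = 3

// capNameservers keeps the first three nameserver entries and reports
// whether any were dropped, which is the condition that produces the
// "Nameserver limits exceeded" event in the log.
func capNameservers(resolvConf string) (kept []string, truncated bool) {
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" {
			kept = append(kept, fields[1])
		}
	}
	if len(kept) > maxNameservers {
		return kept[:maxNameservers], true
	}
	return kept, false
}

func main() {
	// Hypothetical host resolv.conf with one nameserver too many.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	if kept, truncated := capNameservers(conf); truncated {
		fmt.Println("Nameserver limits were exceeded, applied nameserver line is:", strings.Join(kept, " "))
	}
}

The host-side fix is simply to keep the nameserver list at three entries or fewer; the warning itself is harmless as long as the surviving resolvers are the intended ones.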
Mar 2 13:03:23.271825 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 2 13:03:23.437818 sshd[5925]: pam_unix(sshd:session): session closed for user core Mar 2 13:03:23.442798 systemd[1]: sshd@11-10.0.0.51:22-10.0.0.1:42072.service: Deactivated successfully. Mar 2 13:03:23.445923 systemd[1]: session-12.scope: Deactivated successfully. Mar 2 13:03:23.447062 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit. Mar 2 13:03:23.448784 systemd-logind[1447]: Removed session 12. Mar 2 13:03:25.341063 kubelet[2583]: E0302 13:03:25.340975 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:03:28.452762 systemd[1]: Started sshd@12-10.0.0.51:22-10.0.0.1:42074.service - OpenSSH per-connection server daemon (10.0.0.1:42074). Mar 2 13:03:28.538850 sshd[5957]: Accepted publickey for core from 10.0.0.1 port 42074 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:03:28.541164 sshd[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:03:28.547857 systemd-logind[1447]: New session 13 of user core. Mar 2 13:03:28.557813 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 2 13:03:28.713787 sshd[5957]: pam_unix(sshd:session): session closed for user core Mar 2 13:03:28.718407 systemd[1]: sshd@12-10.0.0.51:22-10.0.0.1:42074.service: Deactivated successfully. Mar 2 13:03:28.720924 systemd[1]: session-13.scope: Deactivated successfully. Mar 2 13:03:28.722140 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit. Mar 2 13:03:28.724152 systemd-logind[1447]: Removed session 13. Mar 2 13:03:33.728613 systemd[1]: Started sshd@13-10.0.0.51:22-10.0.0.1:35170.service - OpenSSH per-connection server daemon (10.0.0.1:35170). Mar 2 13:03:33.844397 sshd[5975]: Accepted publickey for core from 10.0.0.1 port 35170 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:03:33.846737 sshd[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:03:33.853252 systemd-logind[1447]: New session 14 of user core. Mar 2 13:03:33.860874 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 2 13:03:34.001515 sshd[5975]: pam_unix(sshd:session): session closed for user core Mar 2 13:03:34.014791 systemd[1]: sshd@13-10.0.0.51:22-10.0.0.1:35170.service: Deactivated successfully. Mar 2 13:03:34.018993 systemd[1]: session-14.scope: Deactivated successfully. Mar 2 13:03:34.022345 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit. Mar 2 13:03:34.030830 systemd[1]: Started sshd@14-10.0.0.51:22-10.0.0.1:35182.service - OpenSSH per-connection server daemon (10.0.0.1:35182). Mar 2 13:03:34.032682 systemd-logind[1447]: Removed session 14. Mar 2 13:03:34.072054 sshd[5990]: Accepted publickey for core from 10.0.0.1 port 35182 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60 Mar 2 13:03:34.074241 sshd[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:03:34.080510 systemd-logind[1447]: New session 15 of user core. Mar 2 13:03:34.090803 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 2 13:03:34.453135 sshd[5990]: pam_unix(sshd:session): session closed for user core Mar 2 13:03:34.466481 systemd[1]: sshd@14-10.0.0.51:22-10.0.0.1:35182.service: Deactivated successfully. 
Mar 2 13:03:34.469869 systemd[1]: session-15.scope: Deactivated successfully.
Mar 2 13:03:34.474174 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit.
Mar 2 13:03:34.486320 systemd[1]: Started sshd@15-10.0.0.51:22-10.0.0.1:35194.service - OpenSSH per-connection server daemon (10.0.0.1:35194).
Mar 2 13:03:34.489113 systemd-logind[1447]: Removed session 15.
Mar 2 13:03:34.547682 sshd[6010]: Accepted publickey for core from 10.0.0.1 port 35194 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:03:34.550485 sshd[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:03:34.557430 systemd-logind[1447]: New session 16 of user core.
Mar 2 13:03:34.565910 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 2 13:03:34.735308 sshd[6010]: pam_unix(sshd:session): session closed for user core
Mar 2 13:03:34.740673 systemd[1]: sshd@15-10.0.0.51:22-10.0.0.1:35194.service: Deactivated successfully.
Mar 2 13:03:34.742856 systemd[1]: session-16.scope: Deactivated successfully.
Mar 2 13:03:34.743878 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit.
Mar 2 13:03:34.745513 systemd-logind[1447]: Removed session 16.
Mar 2 13:03:39.762787 systemd[1]: Started sshd@16-10.0.0.51:22-10.0.0.1:35210.service - OpenSSH per-connection server daemon (10.0.0.1:35210).
Mar 2 13:03:39.804421 sshd[6033]: Accepted publickey for core from 10.0.0.1 port 35210 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:03:39.806865 sshd[6033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:03:39.825227 systemd-logind[1447]: New session 17 of user core.
Mar 2 13:03:39.834806 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 2 13:03:40.000487 sshd[6033]: pam_unix(sshd:session): session closed for user core
Mar 2 13:03:40.006188 systemd[1]: sshd@16-10.0.0.51:22-10.0.0.1:35210.service: Deactivated successfully.
Mar 2 13:03:40.009086 systemd[1]: session-17.scope: Deactivated successfully.
Mar 2 13:03:40.011555 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit.
Mar 2 13:03:40.015226 systemd-logind[1447]: Removed session 17.
Mar 2 13:03:41.787083 systemd[1]: run-containerd-runc-k8s.io-91ef8776e339a50fd85421bfc119e1948446873b9b60295a117ca31c5084c652-runc.1D0skh.mount: Deactivated successfully.
Mar 2 13:03:45.015630 systemd[1]: Started sshd@17-10.0.0.51:22-10.0.0.1:42522.service - OpenSSH per-connection server daemon (10.0.0.1:42522).
Mar 2 13:03:45.062209 sshd[6069]: Accepted publickey for core from 10.0.0.1 port 42522 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:03:45.064208 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:03:45.070337 systemd-logind[1447]: New session 18 of user core.
Mar 2 13:03:45.075281 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 2 13:03:45.223798 sshd[6069]: pam_unix(sshd:session): session closed for user core
Mar 2 13:03:45.245125 systemd[1]: sshd@17-10.0.0.51:22-10.0.0.1:42522.service: Deactivated successfully.
Mar 2 13:03:45.247981 systemd[1]: session-18.scope: Deactivated successfully.
Mar 2 13:03:45.250291 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit.
Mar 2 13:03:45.258164 systemd[1]: Started sshd@18-10.0.0.51:22-10.0.0.1:42534.service - OpenSSH per-connection server daemon (10.0.0.1:42534).
Mar 2 13:03:45.259494 systemd-logind[1447]: Removed session 18.
Mar 2 13:03:45.297299 sshd[6083]: Accepted publickey for core from 10.0.0.1 port 42534 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:03:45.299715 sshd[6083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:03:45.305286 systemd-logind[1447]: New session 19 of user core.
Mar 2 13:03:45.319876 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 2 13:03:45.705495 sshd[6083]: pam_unix(sshd:session): session closed for user core
Mar 2 13:03:45.714537 systemd[1]: sshd@18-10.0.0.51:22-10.0.0.1:42534.service: Deactivated successfully.
Mar 2 13:03:45.721725 systemd[1]: session-19.scope: Deactivated successfully.
Mar 2 13:03:45.723768 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit.
Mar 2 13:03:45.735009 systemd[1]: Started sshd@19-10.0.0.51:22-10.0.0.1:42538.service - OpenSSH per-connection server daemon (10.0.0.1:42538).
Mar 2 13:03:45.736455 systemd-logind[1447]: Removed session 19.
Mar 2 13:03:45.788367 sshd[6096]: Accepted publickey for core from 10.0.0.1 port 42538 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:03:45.790710 sshd[6096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:03:45.797059 systemd-logind[1447]: New session 20 of user core.
Mar 2 13:03:45.808845 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 2 13:03:46.516753 sshd[6096]: pam_unix(sshd:session): session closed for user core
Mar 2 13:03:46.526938 systemd[1]: sshd@19-10.0.0.51:22-10.0.0.1:42538.service: Deactivated successfully.
Mar 2 13:03:46.530559 systemd[1]: session-20.scope: Deactivated successfully.
Mar 2 13:03:46.535648 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit.
Mar 2 13:03:46.543917 systemd[1]: Started sshd@20-10.0.0.51:22-10.0.0.1:42554.service - OpenSSH per-connection server daemon (10.0.0.1:42554).
Mar 2 13:03:46.549250 systemd-logind[1447]: Removed session 20.
Mar 2 13:03:46.597748 sshd[6123]: Accepted publickey for core from 10.0.0.1 port 42554 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:03:46.599802 sshd[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:03:46.605420 systemd-logind[1447]: New session 21 of user core.
Mar 2 13:03:46.616012 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 2 13:03:46.931464 systemd[1]: run-containerd-runc-k8s.io-eeb40cb1df16e9265a5f54518863efda1cea60c691a9de7c8cec7be8d062dd3e-runc.1XRvnp.mount: Deactivated successfully.
Mar 2 13:03:46.974683 sshd[6123]: pam_unix(sshd:session): session closed for user core
Mar 2 13:03:46.981271 systemd[1]: sshd@20-10.0.0.51:22-10.0.0.1:42554.service: Deactivated successfully.
Mar 2 13:03:46.987134 systemd[1]: session-21.scope: Deactivated successfully.
Mar 2 13:03:46.990687 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit.
Mar 2 13:03:47.001508 systemd[1]: Started sshd@21-10.0.0.51:22-10.0.0.1:42556.service - OpenSSH per-connection server daemon (10.0.0.1:42556).
Mar 2 13:03:47.003403 systemd-logind[1447]: Removed session 21.
Mar 2 13:03:47.079599 sshd[6173]: Accepted publickey for core from 10.0.0.1 port 42556 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:03:47.083236 sshd[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:03:47.089745 systemd-logind[1447]: New session 22 of user core.
Mar 2 13:03:47.101939 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 2 13:03:47.243861 sshd[6173]: pam_unix(sshd:session): session closed for user core
Mar 2 13:03:47.247699 systemd[1]: sshd@21-10.0.0.51:22-10.0.0.1:42556.service: Deactivated successfully.
Mar 2 13:03:47.251189 systemd[1]: session-22.scope: Deactivated successfully.
Mar 2 13:03:47.253639 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit.
Mar 2 13:03:47.255024 systemd-logind[1447]: Removed session 22.
Mar 2 13:03:52.258213 systemd[1]: Started sshd@22-10.0.0.51:22-10.0.0.1:48292.service - OpenSSH per-connection server daemon (10.0.0.1:48292).
Mar 2 13:03:52.298099 sshd[6194]: Accepted publickey for core from 10.0.0.1 port 48292 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:03:52.299948 sshd[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:03:52.305506 systemd-logind[1447]: New session 23 of user core.
Mar 2 13:03:52.321784 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 2 13:03:52.484665 sshd[6194]: pam_unix(sshd:session): session closed for user core
Mar 2 13:03:52.488959 systemd[1]: sshd@22-10.0.0.51:22-10.0.0.1:48292.service: Deactivated successfully.
Mar 2 13:03:52.491824 systemd[1]: session-23.scope: Deactivated successfully.
Mar 2 13:03:52.492898 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit.
Mar 2 13:03:52.494295 systemd-logind[1447]: Removed session 23.
Mar 2 13:03:57.517247 systemd[1]: Started sshd@23-10.0.0.51:22-10.0.0.1:48304.service - OpenSSH per-connection server daemon (10.0.0.1:48304).
Mar 2 13:03:57.562798 sshd[6212]: Accepted publickey for core from 10.0.0.1 port 48304 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:03:57.565325 sshd[6212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:03:57.573732 systemd-logind[1447]: New session 24 of user core.
Mar 2 13:03:57.583895 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 2 13:03:57.754134 sshd[6212]: pam_unix(sshd:session): session closed for user core
Mar 2 13:03:57.759089 systemd[1]: sshd@23-10.0.0.51:22-10.0.0.1:48304.service: Deactivated successfully.
Mar 2 13:03:57.761758 systemd[1]: session-24.scope: Deactivated successfully.
Mar 2 13:03:57.762828 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit.
Mar 2 13:03:57.764547 systemd-logind[1447]: Removed session 24.
Mar 2 13:04:02.776715 systemd[1]: Started sshd@24-10.0.0.51:22-10.0.0.1:57200.service - OpenSSH per-connection server daemon (10.0.0.1:57200).
Mar 2 13:04:02.840511 sshd[6229]: Accepted publickey for core from 10.0.0.1 port 57200 ssh2: RSA SHA256:I7frh5Ho+GNZYlhwMF3Kg7xi/C+xdSmVTMEFrO7Zj60
Mar 2 13:04:02.841536 sshd[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:04:02.848453 systemd-logind[1447]: New session 25 of user core.
Mar 2 13:04:02.852877 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 2 13:04:03.008960 sshd[6229]: pam_unix(sshd:session): session closed for user core
Mar 2 13:04:03.017231 systemd[1]: sshd@24-10.0.0.51:22-10.0.0.1:57200.service: Deactivated successfully.
Mar 2 13:04:03.022363 systemd[1]: session-25.scope: Deactivated successfully.
Mar 2 13:04:03.024302 systemd-logind[1447]: Session 25 logged out. Waiting for processes to exit.
Mar 2 13:04:03.026690 systemd-logind[1447]: Removed session 25.
Mar 2 13:04:03.341197 kubelet[2583]: E0302 13:04:03.340963 2583 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"