Mar 14 00:39:19.280467 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026 Mar 14 00:39:19.280490 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:39:19.280502 kernel: BIOS-provided physical RAM map: Mar 14 00:39:19.280508 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 14 00:39:19.280514 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 14 00:39:19.280519 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 14 00:39:19.280526 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 14 00:39:19.280532 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 14 00:39:19.280537 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Mar 14 00:39:19.280543 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Mar 14 00:39:19.280552 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Mar 14 00:39:19.280558 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Mar 14 00:39:19.280564 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Mar 14 00:39:19.280569 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Mar 14 00:39:19.281176 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Mar 14 00:39:19.281185 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 14 00:39:19.281196 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Mar 
14 00:39:19.281202 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Mar 14 00:39:19.281208 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 14 00:39:19.281214 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 14 00:39:19.281220 kernel: NX (Execute Disable) protection: active Mar 14 00:39:19.281226 kernel: APIC: Static calls initialized Mar 14 00:39:19.281232 kernel: efi: EFI v2.7 by EDK II Mar 14 00:39:19.281238 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Mar 14 00:39:19.281244 kernel: SMBIOS 2.8 present. Mar 14 00:39:19.281250 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Mar 14 00:39:19.281256 kernel: Hypervisor detected: KVM Mar 14 00:39:19.281265 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 14 00:39:19.281271 kernel: kvm-clock: using sched offset of 19113359106 cycles Mar 14 00:39:19.281278 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 14 00:39:19.281285 kernel: tsc: Detected 2445.426 MHz processor Mar 14 00:39:19.281291 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 14 00:39:19.281298 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 14 00:39:19.281304 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Mar 14 00:39:19.281310 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 14 00:39:19.281317 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 14 00:39:19.281326 kernel: Using GB pages for direct mapping Mar 14 00:39:19.281332 kernel: Secure boot disabled Mar 14 00:39:19.281338 kernel: ACPI: Early table checksum verification disabled Mar 14 00:39:19.281345 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 14 00:39:19.281355 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 14 00:39:19.281368 kernel: 
ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:39:19.281380 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:39:19.281396 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 14 00:39:19.281406 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:39:19.281415 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:39:19.281424 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:39:19.281433 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:39:19.281442 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 14 00:39:19.281451 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 14 00:39:19.281467 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Mar 14 00:39:19.281476 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 14 00:39:19.281485 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 14 00:39:19.281495 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 14 00:39:19.281507 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 14 00:39:19.281516 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 14 00:39:19.281525 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 14 00:39:19.281534 kernel: No NUMA configuration found Mar 14 00:39:19.281543 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Mar 14 00:39:19.281555 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Mar 14 00:39:19.281565 kernel: Zone ranges: Mar 14 00:39:19.281575 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 14 00:39:19.281584 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Mar 14 
00:39:19.281593 kernel: Normal empty Mar 14 00:39:19.281602 kernel: Movable zone start for each node Mar 14 00:39:19.281611 kernel: Early memory node ranges Mar 14 00:39:19.281620 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 14 00:39:19.281629 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 14 00:39:19.281642 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 14 00:39:19.281651 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Mar 14 00:39:19.281663 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Mar 14 00:39:19.281674 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Mar 14 00:39:19.282270 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Mar 14 00:39:19.282284 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 14 00:39:19.282295 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 14 00:39:19.282305 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 14 00:39:19.282316 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 14 00:39:19.282326 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Mar 14 00:39:19.282342 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 14 00:39:19.282350 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Mar 14 00:39:19.282360 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 14 00:39:19.282369 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 14 00:39:19.282378 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 14 00:39:19.282387 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 14 00:39:19.282396 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 14 00:39:19.282407 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 14 00:39:19.282417 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 14 00:39:19.282429 kernel: ACPI: INT_SRC_OVR (bus 0 
bus_irq 11 global_irq 11 high level) Mar 14 00:39:19.282438 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 14 00:39:19.282447 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 14 00:39:19.282456 kernel: TSC deadline timer available Mar 14 00:39:19.282465 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 14 00:39:19.282474 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 14 00:39:19.282483 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 14 00:39:19.282493 kernel: kvm-guest: setup PV sched yield Mar 14 00:39:19.282502 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 14 00:39:19.282515 kernel: Booting paravirtualized kernel on KVM Mar 14 00:39:19.282524 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 14 00:39:19.282533 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 14 00:39:19.282543 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 14 00:39:19.282555 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 14 00:39:19.282566 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 14 00:39:19.282575 kernel: kvm-guest: PV spinlocks enabled Mar 14 00:39:19.282584 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 14 00:39:19.282594 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:39:19.282607 kernel: random: crng init done Mar 14 00:39:19.282616 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 14 00:39:19.282625 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) 
Mar 14 00:39:19.282638 kernel: Fallback order for Node 0: 0 Mar 14 00:39:19.282650 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Mar 14 00:39:19.282660 kernel: Policy zone: DMA32 Mar 14 00:39:19.282671 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 14 00:39:19.282684 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved) Mar 14 00:39:19.282700 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 14 00:39:19.282813 kernel: ftrace: allocating 37996 entries in 149 pages Mar 14 00:39:19.282829 kernel: ftrace: allocated 149 pages with 4 groups Mar 14 00:39:19.282894 kernel: Dynamic Preempt: voluntary Mar 14 00:39:19.282907 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 14 00:39:19.282939 kernel: rcu: RCU event tracing is enabled. Mar 14 00:39:19.282955 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 14 00:39:19.282969 kernel: Trampoline variant of Tasks RCU enabled. Mar 14 00:39:19.282981 kernel: Rude variant of Tasks RCU enabled. Mar 14 00:39:19.282993 kernel: Tracing variant of Tasks RCU enabled. Mar 14 00:39:19.283006 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 14 00:39:19.283017 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 14 00:39:19.283035 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 14 00:39:19.283049 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 14 00:39:19.283060 kernel: Console: colour dummy device 80x25 Mar 14 00:39:19.283072 kernel: printk: console [ttyS0] enabled Mar 14 00:39:19.283123 kernel: ACPI: Core revision 20230628 Mar 14 00:39:19.283136 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 14 00:39:19.283143 kernel: APIC: Switch to symmetric I/O mode setup Mar 14 00:39:19.283150 kernel: x2apic enabled Mar 14 00:39:19.283157 kernel: APIC: Switched APIC routing to: physical x2apic Mar 14 00:39:19.283164 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 14 00:39:19.283171 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 14 00:39:19.283178 kernel: kvm-guest: setup PV IPIs Mar 14 00:39:19.283185 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 14 00:39:19.283192 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 14 00:39:19.283202 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Mar 14 00:39:19.283209 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 14 00:39:19.283215 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 14 00:39:19.283222 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 14 00:39:19.283229 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 14 00:39:19.283236 kernel: Spectre V2 : Mitigation: Retpolines Mar 14 00:39:19.283243 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 14 00:39:19.283250 kernel: Speculative Store Bypass: Vulnerable Mar 14 00:39:19.283257 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 14 00:39:19.283268 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 14 00:39:19.283275 kernel: active return thunk: srso_alias_return_thunk Mar 14 00:39:19.283282 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 14 00:39:19.283289 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 14 00:39:19.283296 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 14 00:39:19.283303 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 14 00:39:19.283310 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 14 00:39:19.283317 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 14 00:39:19.283327 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 14 00:39:19.283333 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 14 00:39:19.283340 kernel: Freeing SMP alternatives memory: 32K Mar 14 00:39:19.283347 kernel: pid_max: default: 32768 minimum: 301 Mar 14 00:39:19.283355 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 14 00:39:19.283362 kernel: landlock: Up and running. Mar 14 00:39:19.283368 kernel: SELinux: Initializing. Mar 14 00:39:19.283375 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 14 00:39:19.283382 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 14 00:39:19.283392 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 14 00:39:19.283399 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 14 00:39:19.283406 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 14 00:39:19.283413 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Mar 14 00:39:19.283420 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 14 00:39:19.283427 kernel: signal: max sigframe size: 1776 Mar 14 00:39:19.283434 kernel: rcu: Hierarchical SRCU implementation. Mar 14 00:39:19.283441 kernel: rcu: Max phase no-delay instances is 400. Mar 14 00:39:19.283448 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 14 00:39:19.283458 kernel: smp: Bringing up secondary CPUs ... Mar 14 00:39:19.283465 kernel: smpboot: x86: Booting SMP configuration: Mar 14 00:39:19.283472 kernel: .... node #0, CPUs: #1 #2 #3 Mar 14 00:39:19.283478 kernel: smp: Brought up 1 node, 4 CPUs Mar 14 00:39:19.283485 kernel: smpboot: Max logical packages: 1 Mar 14 00:39:19.283492 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Mar 14 00:39:19.283499 kernel: devtmpfs: initialized Mar 14 00:39:19.283506 kernel: x86/mm: Memory block size: 128MB Mar 14 00:39:19.283513 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 14 00:39:19.283523 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 14 00:39:19.283529 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Mar 14 00:39:19.283536 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 14 00:39:19.283543 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 14 00:39:19.283550 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 14 00:39:19.283557 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 14 00:39:19.283564 kernel: pinctrl core: initialized pinctrl subsystem Mar 14 00:39:19.283572 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 14 00:39:19.283578 kernel: audit: initializing netlink subsys (disabled) Mar 14 00:39:19.283588 kernel: 
thermal_sys: Registered thermal governor 'step_wise' Mar 14 00:39:19.283595 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 14 00:39:19.283602 kernel: audit: type=2000 audit(1773448750.964:1): state=initialized audit_enabled=0 res=1 Mar 14 00:39:19.283609 kernel: cpuidle: using governor menu Mar 14 00:39:19.283616 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 14 00:39:19.283622 kernel: dca service started, version 1.12.1 Mar 14 00:39:19.283630 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 14 00:39:19.283637 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 14 00:39:19.283644 kernel: PCI: Using configuration type 1 for base access Mar 14 00:39:19.283654 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 14 00:39:19.283661 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 14 00:39:19.283668 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 14 00:39:19.283675 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 14 00:39:19.283682 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 14 00:39:19.283688 kernel: ACPI: Added _OSI(Module Device) Mar 14 00:39:19.283695 kernel: ACPI: Added _OSI(Processor Device) Mar 14 00:39:19.283702 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 14 00:39:19.283792 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 14 00:39:19.283804 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 14 00:39:19.283812 kernel: ACPI: Interpreter enabled Mar 14 00:39:19.283818 kernel: ACPI: PM: (supports S0 S3 S5) Mar 14 00:39:19.283825 kernel: ACPI: Using IOAPIC for interrupt routing Mar 14 00:39:19.283875 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 14 00:39:19.283884 kernel: PCI: Using E820 reservations for 
host bridge windows Mar 14 00:39:19.283892 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 14 00:39:19.283898 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 14 00:39:19.284295 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 14 00:39:19.284579 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 14 00:39:19.284823 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 14 00:39:19.284882 kernel: PCI host bridge to bus 0000:00 Mar 14 00:39:19.285082 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 14 00:39:19.285223 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 14 00:39:19.285358 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 14 00:39:19.285527 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 14 00:39:19.285666 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 14 00:39:19.285910 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Mar 14 00:39:19.286051 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 14 00:39:19.286306 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 14 00:39:19.286684 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 14 00:39:19.287170 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 14 00:39:19.287382 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 14 00:39:19.287583 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 14 00:39:19.287901 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 14 00:39:19.288101 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 14 00:39:19.288554 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 14 00:39:19.289095 kernel: pci 0000:00:02.0: reg 0x10: [io 
0x6100-0x611f] Mar 14 00:39:19.289268 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 14 00:39:19.289416 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Mar 14 00:39:19.289663 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 14 00:39:19.289934 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 14 00:39:19.290086 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Mar 14 00:39:19.290232 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Mar 14 00:39:19.290454 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 14 00:39:19.290614 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 14 00:39:19.290916 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Mar 14 00:39:19.291191 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Mar 14 00:39:19.291346 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 14 00:39:19.291568 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 14 00:39:19.292008 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 14 00:39:19.292161 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 10742 usecs Mar 14 00:39:19.292622 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 14 00:39:19.293020 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 14 00:39:19.293249 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 14 00:39:19.293908 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 14 00:39:19.294130 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 14 00:39:19.294151 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 14 00:39:19.294166 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 14 00:39:19.294187 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 14 00:39:19.294197 kernel: ACPI: PCI: Interrupt link LNKD 
configured for IRQ 11 Mar 14 00:39:19.294211 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 14 00:39:19.294221 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 14 00:39:19.294234 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 14 00:39:19.294245 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 14 00:39:19.294258 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 14 00:39:19.294270 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 14 00:39:19.294282 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 14 00:39:19.294301 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 14 00:39:19.294313 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 14 00:39:19.294326 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 14 00:39:19.294337 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 14 00:39:19.294350 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 14 00:39:19.294363 kernel: iommu: Default domain type: Translated Mar 14 00:39:19.294511 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 14 00:39:19.294519 kernel: efivars: Registered efivars operations Mar 14 00:39:19.294526 kernel: PCI: Using ACPI for IRQ routing Mar 14 00:39:19.294539 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 14 00:39:19.294546 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 14 00:39:19.294553 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Mar 14 00:39:19.294560 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Mar 14 00:39:19.294566 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Mar 14 00:39:19.294808 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 14 00:39:19.295017 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 14 00:39:19.295170 kernel: pci 0000:00:01.0: vgaarb: VGA device added: 
decodes=io+mem,owns=io+mem,locks=none Mar 14 00:39:19.295180 kernel: vgaarb: loaded Mar 14 00:39:19.295193 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 14 00:39:19.295200 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 14 00:39:19.295207 kernel: clocksource: Switched to clocksource kvm-clock Mar 14 00:39:19.295215 kernel: VFS: Disk quotas dquot_6.6.0 Mar 14 00:39:19.295229 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 14 00:39:19.295241 kernel: pnp: PnP ACPI init Mar 14 00:39:19.295870 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 14 00:39:19.295896 kernel: pnp: PnP ACPI: found 6 devices Mar 14 00:39:19.295914 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 14 00:39:19.295928 kernel: NET: Registered PF_INET protocol family Mar 14 00:39:19.295940 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 14 00:39:19.295953 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 14 00:39:19.295965 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 14 00:39:19.295977 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 14 00:39:19.295988 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 14 00:39:19.296000 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 14 00:39:19.296011 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 14 00:39:19.296029 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 14 00:39:19.296042 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 14 00:39:19.296056 kernel: NET: Registered PF_XDP protocol family Mar 14 00:39:19.296289 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Mar 14 00:39:19.296531 kernel: pci 0000:00:04.0: 
BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Mar 14 00:39:19.297160 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 14 00:39:19.297359 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 14 00:39:19.297552 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 14 00:39:19.298222 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 14 00:39:19.298437 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 14 00:39:19.298680 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Mar 14 00:39:19.298701 kernel: PCI: CLS 0 bytes, default 64 Mar 14 00:39:19.298794 kernel: Initialise system trusted keyrings Mar 14 00:39:19.298809 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 14 00:39:19.298820 kernel: Key type asymmetric registered Mar 14 00:39:19.298884 kernel: Asymmetric key parser 'x509' registered Mar 14 00:39:19.298898 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 14 00:39:19.298919 kernel: io scheduler mq-deadline registered Mar 14 00:39:19.298931 kernel: io scheduler kyber registered Mar 14 00:39:19.298943 kernel: io scheduler bfq registered Mar 14 00:39:19.298955 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 14 00:39:19.298969 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 14 00:39:19.298981 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 14 00:39:19.298993 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 14 00:39:19.299005 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 14 00:39:19.299018 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 14 00:39:19.299037 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 14 00:39:19.299048 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 14 00:39:19.299059 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 14 00:39:19.299362 kernel: rtc_cmos 00:04: 
RTC can wake from S4 Mar 14 00:39:19.299377 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 14 00:39:19.299522 kernel: rtc_cmos 00:04: registered as rtc0 Mar 14 00:39:19.299662 kernel: rtc_cmos 00:04: setting system clock to 2026-03-14T00:39:17 UTC (1773448757) Mar 14 00:39:19.299993 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 14 00:39:19.300014 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 14 00:39:19.300021 kernel: efifb: probing for efifb Mar 14 00:39:19.300029 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Mar 14 00:39:19.300036 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Mar 14 00:39:19.300043 kernel: efifb: scrolling: redraw Mar 14 00:39:19.300050 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Mar 14 00:39:19.300057 kernel: Console: switching to colour frame buffer device 100x37 Mar 14 00:39:19.300064 kernel: fb0: EFI VGA frame buffer device Mar 14 00:39:19.300071 kernel: pstore: Using crash dump compression: deflate Mar 14 00:39:19.300081 kernel: pstore: Registered efi_pstore as persistent store backend Mar 14 00:39:19.300088 kernel: NET: Registered PF_INET6 protocol family Mar 14 00:39:19.300095 kernel: Segment Routing with IPv6 Mar 14 00:39:19.300102 kernel: In-situ OAM (IOAM) with IPv6 Mar 14 00:39:19.300109 kernel: NET: Registered PF_PACKET protocol family Mar 14 00:39:19.300116 kernel: Key type dns_resolver registered Mar 14 00:39:19.300143 kernel: IPI shorthand broadcast: enabled Mar 14 00:39:19.300153 kernel: sched_clock: Marking stable (4744051858, 1539026633)->(8150827142, -1867748651) Mar 14 00:39:19.300161 kernel: registered taskstats version 1 Mar 14 00:39:19.300171 kernel: Loading compiled-in X.509 certificates Mar 14 00:39:19.300179 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec' Mar 14 00:39:19.300186 kernel: 
Key type .fscrypt registered Mar 14 00:39:19.300235 kernel: Key type fscrypt-provisioning registered Mar 14 00:39:19.300243 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 14 00:39:19.300251 kernel: ima: Allocated hash algorithm: sha1 Mar 14 00:39:19.300258 kernel: ima: No architecture policies found Mar 14 00:39:19.300265 kernel: clk: Disabling unused clocks Mar 14 00:39:19.300276 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 14 00:39:19.300283 kernel: Write protecting the kernel read-only data: 36864k Mar 14 00:39:19.300291 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 14 00:39:19.300298 kernel: Run /init as init process Mar 14 00:39:19.300305 kernel: with arguments: Mar 14 00:39:19.300313 kernel: /init Mar 14 00:39:19.300320 kernel: with environment: Mar 14 00:39:19.300327 kernel: HOME=/ Mar 14 00:39:19.300334 kernel: TERM=linux Mar 14 00:39:19.300344 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:39:19.300356 systemd[1]: Detected virtualization kvm. Mar 14 00:39:19.300364 systemd[1]: Detected architecture x86-64. Mar 14 00:39:19.300372 systemd[1]: Running in initrd. Mar 14 00:39:19.300379 systemd[1]: No hostname configured, using default hostname. Mar 14 00:39:19.300386 systemd[1]: Hostname set to . Mar 14 00:39:19.300394 systemd[1]: Initializing machine ID from VM UUID. Mar 14 00:39:19.300404 systemd[1]: Queued start job for default target initrd.target. Mar 14 00:39:19.300412 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 14 00:39:19.300420 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:39:19.300428 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:39:19.300436 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:39:19.300450 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:39:19.300458 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:39:19.300467 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:39:19.300475 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:39:19.300483 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:39:19.300490 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:39:19.300498 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:39:19.300509 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:39:19.300517 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:39:19.300524 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:39:19.300532 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:39:19.300540 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:39:19.300548 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:39:19.300555 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:39:19.300563 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:39:19.300571 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:39:19.300581 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:39:19.300589 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:39:19.300597 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:39:19.300605 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:39:19.300616 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:39:19.300629 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:39:19.300642 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:39:19.300656 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:39:19.300795 systemd-journald[194]: Collecting audit messages is disabled.
Mar 14 00:39:19.300881 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:39:19.300900 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:39:19.300914 systemd-journald[194]: Journal started
Mar 14 00:39:19.300946 systemd-journald[194]: Runtime Journal (/run/log/journal/0830d2662fb74b94b3bbb5bffd8f7b06) is 6.0M, max 48.3M, 42.2M free.
Mar 14 00:39:19.318488 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:39:19.319355 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:39:19.322948 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:39:19.344525 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:39:19.376397 systemd-modules-load[195]: Inserted module 'overlay'
Mar 14 00:39:19.383468 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:39:19.392357 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:39:19.411521 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:39:19.466898 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:39:19.489565 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:39:19.502990 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:39:19.518072 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:39:19.533209 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:39:19.551458 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:39:19.576182 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:39:19.586229 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 14 00:39:19.591080 kernel: Bridge firewalling registered
Mar 14 00:39:19.588562 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:39:19.609237 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:39:19.627100 dracut-cmdline[226]: dracut-dracut-053
Mar 14 00:39:19.627100 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:39:19.670416 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:39:19.704146 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:39:19.805158 systemd-resolved[260]: Positive Trust Anchors:
Mar 14 00:39:19.805221 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:39:19.805264 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:39:19.877630 systemd-resolved[260]: Defaulting to hostname 'linux'.
Mar 14 00:39:19.891690 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:39:19.913450 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:39:19.964011 kernel: SCSI subsystem initialized
Mar 14 00:39:19.993943 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:39:20.032394 kernel: iscsi: registered transport (tcp)
Mar 14 00:39:20.073530 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:39:20.073706 kernel: QLogic iSCSI HBA Driver
Mar 14 00:39:20.269051 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:39:20.327309 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:39:20.423091 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:39:20.423154 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:39:20.423167 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:39:20.540993 kernel: raid6: avx2x4 gen() 22864 MB/s
Mar 14 00:39:20.561975 kernel: raid6: avx2x2 gen() 18788 MB/s
Mar 14 00:39:20.583575 kernel: raid6: avx2x1 gen() 12587 MB/s
Mar 14 00:39:20.583655 kernel: raid6: using algorithm avx2x4 gen() 22864 MB/s
Mar 14 00:39:20.607109 kernel: raid6: .... xor() 2492 MB/s, rmw enabled
Mar 14 00:39:20.607202 kernel: raid6: using avx2x2 recovery algorithm
Mar 14 00:39:20.655284 kernel: xor: automatically using best checksumming function avx
Mar 14 00:39:21.401925 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:39:21.503509 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:39:21.524249 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:39:21.595653 systemd-udevd[418]: Using default interface naming scheme 'v255'.
Mar 14 00:39:21.609692 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:39:21.635671 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:39:21.694114 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Mar 14 00:39:21.789656 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:39:21.829361 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:39:22.036274 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:39:22.086072 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:39:22.142009 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:39:22.157174 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:39:22.174623 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:39:22.203090 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:39:22.233812 kernel: cryptd: max_cpu_qlen set to 1000
Mar 14 00:39:22.260021 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 14 00:39:22.261144 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:39:22.290689 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 14 00:39:22.289626 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:39:22.318506 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:39:22.365498 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:39:22.365540 kernel: GPT:9289727 != 19775487
Mar 14 00:39:22.365664 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:39:22.365689 kernel: GPT:9289727 != 19775487
Mar 14 00:39:22.365931 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:39:22.365963 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:39:22.318947 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:39:22.387929 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:39:22.408420 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:39:22.413424 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:39:22.454175 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:39:22.560996 kernel: libata version 3.00 loaded.
Mar 14 00:39:22.571262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:39:22.649915 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 14 00:39:22.684621 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:39:22.713042 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466)
Mar 14 00:39:22.713149 kernel: AES CTR mode by8 optimization enabled
Mar 14 00:39:22.755173 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (469)
Mar 14 00:39:22.777479 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 14 00:39:22.843634 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 14 00:39:22.867976 kernel: ahci 0000:00:1f.2: version 3.0
Mar 14 00:39:22.868393 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 14 00:39:22.868431 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 14 00:39:22.868670 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 14 00:39:22.907023 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 14 00:39:22.923669 kernel: scsi host0: ahci
Mar 14 00:39:22.924164 kernel: scsi host1: ahci
Mar 14 00:39:22.931347 kernel: scsi host2: ahci
Mar 14 00:39:22.938181 kernel: scsi host3: ahci
Mar 14 00:39:22.951085 kernel: scsi host4: ahci
Mar 14 00:39:22.964239 kernel: scsi host5: ahci
Mar 14 00:39:22.964611 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 14 00:39:22.964702 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 14 00:39:22.973207 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 14 00:39:22.977827 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 14 00:39:22.978065 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 14 00:39:23.029057 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 14 00:39:23.029094 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 14 00:39:23.030141 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 14 00:39:23.055092 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:39:23.056675 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:39:23.093627 disk-uuid[559]: Primary Header is updated.
Mar 14 00:39:23.093627 disk-uuid[559]: Secondary Entries is updated.
Mar 14 00:39:23.093627 disk-uuid[559]: Secondary Header is updated.
Mar 14 00:39:23.113463 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:39:23.134031 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:39:23.140374 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:39:23.338110 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 14 00:39:23.338175 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 14 00:39:23.352189 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 14 00:39:23.352266 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 14 00:39:23.364151 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 14 00:39:23.387838 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 14 00:39:23.387953 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 14 00:39:23.391959 kernel: ata3.00: applying bridge limits
Mar 14 00:39:23.398216 kernel: ata3.00: configured for UDMA/100
Mar 14 00:39:23.414928 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 14 00:39:23.509792 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 14 00:39:23.510339 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 14 00:39:23.528930 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 14 00:39:24.217640 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:39:24.227042 disk-uuid[560]: The operation has completed successfully.
Mar 14 00:39:24.740673 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:39:24.743603 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:39:24.821580 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:39:24.913790 sh[596]: Success
Mar 14 00:39:25.006994 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 14 00:39:25.281399 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:39:25.385040 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:39:25.441702 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:39:25.508173 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def
Mar 14 00:39:25.508212 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:39:25.508231 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:39:25.508248 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:39:25.508264 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:39:25.552355 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:39:25.563227 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:39:25.620010 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:39:25.684155 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:39:25.785124 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:39:25.785168 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:39:25.785206 kernel: BTRFS info (device vda6): using free space tree
Mar 14 00:39:25.809283 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 00:39:26.010982 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:39:26.015170 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:39:26.348454 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:39:26.519571 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:39:27.819226 ignition[672]: Ignition 2.19.0
Mar 14 00:39:27.819247 ignition[672]: Stage: fetch-offline
Mar 14 00:39:27.819322 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:39:27.819342 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:39:27.819517 ignition[672]: parsed url from cmdline: ""
Mar 14 00:39:27.819525 ignition[672]: no config URL provided
Mar 14 00:39:27.819535 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:39:27.819552 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:39:27.819604 ignition[672]: op(1): [started] loading QEMU firmware config module
Mar 14 00:39:27.819613 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 14 00:39:28.030329 ignition[672]: op(1): [finished] loading QEMU firmware config module
Mar 14 00:39:28.873015 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:39:28.975539 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:39:29.159123 systemd-networkd[786]: lo: Link UP
Mar 14 00:39:29.159404 systemd-networkd[786]: lo: Gained carrier
Mar 14 00:39:29.231653 systemd-networkd[786]: Enumeration completed
Mar 14 00:39:29.260491 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:39:29.305267 systemd[1]: Reached target network.target - Network.
Mar 14 00:39:29.332343 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:39:29.607849 kernel: hrtimer: interrupt took 6876649 ns
Mar 14 00:39:29.332353 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:39:29.634353 systemd-networkd[786]: eth0: Link UP
Mar 14 00:39:29.634364 systemd-networkd[786]: eth0: Gained carrier
Mar 14 00:39:29.634387 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:39:29.745674 ignition[672]: parsing config with SHA512: cb7b0600b5fa595e24d15633d9b8e38b731e53d7390e35dfc9e01881f02b6dd71c052ae381e2049109894a73c0bdd2409d71de1531d51fb60d38c7142cd6f708
Mar 14 00:39:29.773044 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 14 00:39:29.781618 unknown[672]: fetched base config from "system"
Mar 14 00:39:29.788196 ignition[672]: fetch-offline: fetch-offline passed
Mar 14 00:39:29.781631 unknown[672]: fetched user config from "qemu"
Mar 14 00:39:29.802368 ignition[672]: Ignition finished successfully
Mar 14 00:39:29.825533 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:39:29.868322 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 14 00:39:29.943080 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:39:30.137156 ignition[790]: Ignition 2.19.0
Mar 14 00:39:30.137169 ignition[790]: Stage: kargs
Mar 14 00:39:30.137437 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:39:30.137455 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:39:30.144499 ignition[790]: kargs: kargs passed
Mar 14 00:39:30.144592 ignition[790]: Ignition finished successfully
Mar 14 00:39:30.166675 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:39:30.234399 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:39:30.505429 ignition[798]: Ignition 2.19.0
Mar 14 00:39:30.505489 ignition[798]: Stage: disks
Mar 14 00:39:30.519396 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:39:30.532429 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:39:30.534201 ignition[798]: disks: disks passed
Mar 14 00:39:30.535670 ignition[798]: Ignition finished successfully
Mar 14 00:39:30.563963 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:39:30.578327 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:39:30.607050 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:39:30.621221 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:39:30.630855 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:39:30.631117 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:39:30.836686 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:39:30.843839 systemd-networkd[786]: eth0: Gained IPv6LL
Mar 14 00:39:30.953789 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 14 00:39:30.989304 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:39:31.041201 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:39:31.601230 kernel: EXT4-fs (vda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none.
Mar 14 00:39:31.604318 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:39:31.626367 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:39:31.659142 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:39:31.696990 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:39:31.707992 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:39:31.708065 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:39:31.708106 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:39:31.745232 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:39:31.783397 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:39:31.845328 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818)
Mar 14 00:39:31.858297 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:39:31.858372 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:39:31.875322 kernel: BTRFS info (device vda6): using free space tree
Mar 14 00:39:31.938109 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 00:39:31.945108 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:39:32.029205 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:39:32.061823 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:39:32.104622 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:39:32.126227 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:39:32.489308 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:39:32.513590 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:39:32.533285 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:39:32.550806 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:39:32.565172 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:39:32.674812 ignition[931]: INFO : Ignition 2.19.0
Mar 14 00:39:32.674812 ignition[931]: INFO : Stage: mount
Mar 14 00:39:32.674812 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:39:32.674812 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:39:32.725361 ignition[931]: INFO : mount: mount passed
Mar 14 00:39:32.725361 ignition[931]: INFO : Ignition finished successfully
Mar 14 00:39:32.699031 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:39:32.707100 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:39:32.749075 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:39:32.802415 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:39:32.835629 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944)
Mar 14 00:39:32.851769 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:39:32.851844 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:39:32.851861 kernel: BTRFS info (device vda6): using free space tree
Mar 14 00:39:32.899178 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 00:39:32.904213 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:39:32.985090 ignition[961]: INFO : Ignition 2.19.0
Mar 14 00:39:32.985090 ignition[961]: INFO : Stage: files
Mar 14 00:39:33.007200 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:39:33.007200 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:39:33.030694 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:39:33.037485 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:39:33.045982 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:39:33.070167 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:39:33.078867 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:39:33.078867 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:39:33.078867 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:39:33.078867 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 14 00:39:33.072423 unknown[961]: wrote ssh authorized keys file for user: core
Mar 14 00:39:33.179148 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:39:33.933811 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:39:33.933811 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:39:33.955538 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:39:33.955538 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:39:33.955538 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:39:33.955538 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:39:33.955538 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:39:33.955538 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:39:33.955538 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:39:33.955538 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:39:33.955538 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:39:33.955538 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:39:33.955538 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:39:33.955538 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:39:33.955538 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 14 00:39:34.381554 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 14 00:39:41.768391 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:39:41.813511 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 14 00:39:41.826411 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:39:41.861193 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:39:41.861193 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 14 00:39:41.861193 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 14 00:39:41.861193 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 14 00:39:41.861193 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 14 00:39:41.861193 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 14 00:39:41.861193 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 14 00:39:42.243887 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 14 00:39:42.322157 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 14 00:39:42.331654 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 14 00:39:42.331654 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:39:42.331654 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:39:42.331654 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:39:42.331654 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:39:42.331654 ignition[961]: INFO : files: files passed
Mar 14 00:39:42.331654 ignition[961]: INFO : Ignition finished successfully
Mar 14 00:39:42.338160 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:39:42.472437 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:39:42.527691 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:39:42.560650 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:39:42.561016 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:39:42.675188 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 14 00:39:42.708124 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:39:42.708124 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:39:42.743048 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:39:42.773616 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:39:42.805233 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:39:42.872350 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:39:43.045828 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:39:43.047210 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:39:43.059503 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:39:43.096300 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:39:43.108989 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:39:43.144050 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:39:43.360018 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:39:43.424167 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:39:43.548276 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:39:43.575699 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:39:43.598441 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:39:43.609189 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:39:43.609508 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:39:43.610114 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:39:43.610295 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:39:43.610438 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:39:43.610565 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:39:43.610688 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:39:43.611196 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:39:43.690484 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:39:43.724318 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:39:43.811688 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:39:43.843420 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:39:43.859311 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:39:43.859620 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:39:43.867634 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:39:43.878010 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:39:43.910507 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:39:43.911414 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:39:43.926496 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:39:43.933055 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:39:43.994492 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:39:43.994845 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:39:44.014479 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:39:44.025065 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:39:44.038510 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:39:44.044048 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:39:44.044300 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:39:44.044599 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:39:44.044879 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:39:44.113613 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:39:44.114288 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:39:44.126537 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:39:44.127593 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:39:44.152288 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:39:44.154128 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:39:44.199325 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:39:44.205885 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:39:44.206188 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:39:44.495072 ignition[1017]: INFO : Ignition 2.19.0
Mar 14 00:39:44.495072 ignition[1017]: INFO : Stage: umount
Mar 14 00:39:44.219436 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:39:44.513202 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:39:44.513202 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:39:44.513202 ignition[1017]: INFO : umount: umount passed
Mar 14 00:39:44.513202 ignition[1017]: INFO : Ignition finished successfully
Mar 14 00:39:44.234397 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:39:44.234819 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:39:44.255418 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:39:44.255672 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:39:44.276063 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:39:44.276617 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:39:44.438804 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:39:44.452486 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:39:44.453163 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:39:44.505893 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:39:44.506489 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:39:44.514302 systemd[1]: Stopped target network.target - Network.
Mar 14 00:39:44.535563 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:39:44.535700 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:39:44.659585 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:39:44.659697 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:39:44.816909 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:39:44.817206 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:39:44.817508 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:39:44.817585 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:39:44.938182 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:39:44.951591 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:39:45.025813 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:39:45.046917 systemd-networkd[786]: eth0: DHCPv6 lease lost
Mar 14 00:39:45.050546 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:39:45.098805 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:39:45.099238 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:39:45.141152 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:39:45.141266 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:39:45.305314 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:39:45.314010 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:39:45.314154 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:39:45.314563 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:39:45.315537 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:39:45.315987 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:39:45.477532 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:39:45.485305 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:39:45.537563 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:39:45.539580 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:39:45.549604 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:39:45.549806 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:39:45.609483 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:39:45.609691 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:39:45.771389 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:39:45.771541 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:39:45.778561 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:39:45.778677 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:39:45.855142 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:39:45.863294 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:39:45.863701 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:39:45.887075 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:39:45.887269 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:39:45.887397 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:39:45.887464 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:39:45.887543 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:39:45.887599 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:39:45.887677 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:39:45.887827 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:39:45.889047 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:39:45.889283 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:39:45.898444 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:39:45.898642 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:39:45.999399 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:39:46.158621 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:39:46.241185 systemd[1]: Switching root.
Mar 14 00:39:46.353286 systemd-journald[194]: Journal stopped
Mar 14 00:39:51.474680 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:39:51.474906 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:39:51.474942 kernel: SELinux: policy capability open_perms=1
Mar 14 00:39:51.475016 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:39:51.475038 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:39:51.475057 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:39:51.475076 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:39:51.475095 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:39:51.475112 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:39:51.475131 kernel: audit: type=1403 audit(1773448786.977:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:39:51.475156 systemd[1]: Successfully loaded SELinux policy in 138.501ms.
Mar 14 00:39:51.475197 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 47.127ms.
Mar 14 00:39:51.475219 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:39:51.475238 systemd[1]: Detected virtualization kvm.
Mar 14 00:39:51.475257 systemd[1]: Detected architecture x86-64.
Mar 14 00:39:51.475277 systemd[1]: Detected first boot.
Mar 14 00:39:51.475294 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:39:51.475313 zram_generator::config[1062]: No configuration found.
Mar 14 00:39:51.475340 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:39:51.475361 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 00:39:51.475382 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 00:39:51.475413 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:39:51.475435 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:39:51.475455 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:39:51.475473 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:39:51.475493 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:39:51.475518 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:39:51.475546 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:39:51.475566 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:39:51.475586 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:39:51.475605 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:39:51.475625 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:39:51.475644 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:39:51.475663 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:39:51.475682 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:39:51.475706 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:39:51.475817 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 00:39:51.475836 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:39:51.475864 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 00:39:51.475884 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 00:39:51.475903 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:39:51.475921 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:39:51.475947 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:39:51.476017 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:39:51.476041 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:39:51.476059 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:39:51.476078 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:39:51.476099 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:39:51.476119 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:39:51.476136 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:39:51.476154 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:39:51.476172 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:39:51.476192 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:39:51.476205 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:39:51.476225 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:39:51.476244 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:39:51.476265 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:39:51.476284 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:39:51.476303 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:39:51.476324 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:39:51.476349 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:39:51.476367 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:39:51.476385 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:39:51.476405 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:39:51.476426 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:39:51.476446 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:39:51.476464 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:39:51.476482 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:39:51.476501 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:39:51.476523 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:39:51.476541 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:39:51.476560 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 00:39:51.476579 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 00:39:51.476595 kernel: loop: module loaded
Mar 14 00:39:51.476613 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 00:39:51.476630 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 00:39:51.476647 kernel: fuse: init (API version 7.39)
Mar 14 00:39:51.476672 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:39:51.476692 kernel: ACPI: bus type drm_connector registered
Mar 14 00:39:51.476794 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:39:51.476854 systemd-journald[1146]: Collecting audit messages is disabled.
Mar 14 00:39:51.476896 systemd-journald[1146]: Journal started
Mar 14 00:39:51.477037 systemd-journald[1146]: Runtime Journal (/run/log/journal/0830d2662fb74b94b3bbb5bffd8f7b06) is 6.0M, max 48.3M, 42.2M free.
Mar 14 00:39:49.593190 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:39:49.695151 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 14 00:39:49.700422 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 14 00:39:49.702604 systemd[1]: systemd-journald.service: Consumed 2.332s CPU time.
Mar 14 00:39:51.492881 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:39:51.515073 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:39:51.556662 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:39:51.579510 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 14 00:39:51.590219 systemd[1]: Stopped verity-setup.service.
Mar 14 00:39:51.619440 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:39:51.669409 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:39:51.695885 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:39:51.705688 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:39:51.731161 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:39:51.745409 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:39:51.774058 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:39:51.793644 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:39:51.815888 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:39:51.851291 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:39:51.867686 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:39:51.868318 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:39:51.875346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:39:51.875786 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:39:51.896272 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:39:51.896794 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:39:51.905622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:39:51.906590 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:39:51.928387 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:39:51.929893 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:39:51.937604 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:39:51.938238 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:39:51.945503 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:39:51.952342 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:39:51.959627 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:39:52.069158 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:39:52.104039 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:39:52.153259 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:39:52.205659 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:39:52.266941 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:39:52.274311 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:39:52.311519 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:39:52.401250 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:39:52.449158 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:39:52.475465 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:39:52.505572 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:39:52.594854 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:39:52.624196 systemd-journald[1146]: Time spent on flushing to /var/log/journal/0830d2662fb74b94b3bbb5bffd8f7b06 is 69.407ms for 978 entries.
Mar 14 00:39:52.624196 systemd-journald[1146]: System Journal (/var/log/journal/0830d2662fb74b94b3bbb5bffd8f7b06) is 8.0M, max 195.6M, 187.6M free.
Mar 14 00:39:52.847133 systemd-journald[1146]: Received client request to flush runtime journal.
Mar 14 00:39:52.620278 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:39:52.648264 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:39:52.692210 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:39:52.805061 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:39:52.864350 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:39:52.913107 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:39:52.930258 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:39:52.959570 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:39:52.982928 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:39:53.017459 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 00:39:53.035946 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 00:39:53.073824 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 00:39:53.078579 kernel: loop0: detected capacity change from 0 to 142488
Mar 14 00:39:53.231600 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 00:39:53.306133 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 00:39:53.338259 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 14 00:39:53.610628 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 00:39:53.609302 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 00:39:53.617704 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:39:53.708203 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 00:39:53.744895 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 00:39:53.781953 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:39:53.799879 kernel: loop1: detected capacity change from 0 to 140768
Mar 14 00:39:53.963934 kernel: loop2: detected capacity change from 0 to 217752
Mar 14 00:39:54.068041 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Mar 14 00:39:54.068105 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Mar 14 00:39:54.129448 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:39:54.351603 kernel: loop3: detected capacity change from 0 to 142488
Mar 14 00:39:54.534159 kernel: loop4: detected capacity change from 0 to 140768
Mar 14 00:39:54.712873 kernel: loop5: detected capacity change from 0 to 217752
Mar 14 00:39:54.810830 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 14 00:39:54.812483 (sd-merge)[1200]: Merged extensions into '/usr'.
Mar 14 00:39:54.842596 systemd[1]: Reloading requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 00:39:54.842628 systemd[1]: Reloading...
Mar 14 00:39:55.082206 zram_generator::config[1223]: No configuration found.
Mar 14 00:39:56.310175 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:39:56.321101 ldconfig[1172]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 14 00:39:56.470068 systemd[1]: Reloading finished in 1622 ms.
Mar 14 00:39:56.571706 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 14 00:39:56.608342 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 00:39:56.658544 systemd[1]: Starting ensure-sysext.service...
Mar 14 00:39:56.666537 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:39:56.681106 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 00:39:56.722594 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:39:56.744280 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)...
Mar 14 00:39:56.744354 systemd[1]: Reloading...
Mar 14 00:39:56.774633 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 14 00:39:56.775557 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 14 00:39:56.777809 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 14 00:39:56.778515 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Mar 14 00:39:56.778701 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Mar 14 00:39:56.806820 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:39:56.806838 systemd-tmpfiles[1265]: Skipping /boot
Mar 14 00:39:56.825324 systemd-udevd[1267]: Using default interface naming scheme 'v255'.
Mar 14 00:39:56.947308 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:39:56.947335 systemd-tmpfiles[1265]: Skipping /boot
Mar 14 00:39:56.991807 zram_generator::config[1293]: No configuration found.
Mar 14 00:39:57.550642 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:39:57.741847 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1329) Mar 14 00:39:57.770065 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 14 00:39:57.770239 systemd[1]: Reloading finished in 1025 ms. Mar 14 00:39:57.813166 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:39:57.835961 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:39:57.927922 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 14 00:39:57.932639 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 14 00:39:57.949237 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:39:57.964956 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 14 00:39:57.965114 kernel: ACPI: button: Power Button [PWRF] Mar 14 00:39:57.970449 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:39:58.003565 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 14 00:39:58.005461 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 14 00:39:58.018088 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 14 00:39:58.060622 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 14 00:39:58.048142 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 14 00:39:58.059831 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:39:58.066277 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:39:58.098481 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Mar 14 00:39:58.117961 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:39:58.129972 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:39:58.140128 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 14 00:39:58.168417 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 14 00:39:58.845894 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 00:39:58.982966 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:39:59.273570 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 14 00:39:59.427196 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:39:59.466938 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 14 00:39:59.517563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:39:59.517937 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:39:59.528357 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:39:59.529120 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:39:59.537909 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:39:59.538522 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:39:59.629687 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 14 00:39:59.665466 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Mar 14 00:39:59.776846 augenrules[1387]: No rules Mar 14 00:39:59.791211 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:40:00.357420 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 14 00:40:00.450420 systemd[1]: Finished ensure-sysext.service. Mar 14 00:40:00.520382 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:40:00.520935 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:40:00.538267 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:40:00.556092 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:40:00.569131 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:40:00.586304 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:40:00.619085 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:40:00.635950 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 14 00:40:00.649086 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 14 00:40:00.716203 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 14 00:40:00.808602 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:40:00.829685 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 14 00:40:00.830228 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 14 00:40:00.834620 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:40:00.835225 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:40:00.850147 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:40:00.851579 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:40:00.862441 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:40:00.864179 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:40:00.865105 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:40:00.865385 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:40:00.878153 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 14 00:40:00.905304 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:40:00.909501 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:40:01.290842 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:40:01.355572 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 14 00:40:01.507824 kernel: mousedev: PS/2 mouse device common for all mice Mar 14 00:40:01.701808 systemd-networkd[1379]: lo: Link UP Mar 14 00:40:01.702570 systemd-networkd[1379]: lo: Gained carrier Mar 14 00:40:01.713399 systemd-networkd[1379]: Enumeration completed Mar 14 00:40:01.713645 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Mar 14 00:40:01.739391 kernel: kvm_amd: TSC scaling supported Mar 14 00:40:01.739514 kernel: kvm_amd: Nested Virtualization enabled Mar 14 00:40:01.739551 kernel: kvm_amd: Nested Paging enabled Mar 14 00:40:01.724073 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:40:01.724089 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:40:01.729503 systemd-networkd[1379]: eth0: Link UP Mar 14 00:40:01.729511 systemd-networkd[1379]: eth0: Gained carrier Mar 14 00:40:01.729536 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:40:01.754622 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 14 00:40:01.754948 kernel: kvm_amd: PMU virtualization is disabled Mar 14 00:40:01.777656 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 14 00:40:01.808085 systemd-networkd[1379]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 14 00:40:01.832407 systemd-resolved[1382]: Positive Trust Anchors: Mar 14 00:40:01.832491 systemd-resolved[1382]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:40:01.832534 systemd-resolved[1382]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:40:01.869478 systemd-resolved[1382]: Defaulting to hostname 'linux'. Mar 14 00:40:01.894413 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 14 00:40:02.615661 systemd[1]: Reached target time-set.target - System Time Set. Mar 14 00:40:02.659762 systemd-timesyncd[1405]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 14 00:40:02.662232 systemd-timesyncd[1405]: Initial clock synchronization to Sat 2026-03-14 00:40:02.587040 UTC. Mar 14 00:40:02.671721 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:40:02.701360 systemd[1]: Reached target network.target - Network. Mar 14 00:40:02.710263 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:40:03.532145 kernel: EDAC MC: Ver: 3.0.0 Mar 14 00:40:03.615532 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 14 00:40:03.684630 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 14 00:40:03.775295 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:40:03.874464 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Mar 14 00:40:03.899277 systemd-networkd[1379]: eth0: Gained IPv6LL Mar 14 00:40:03.903627 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:40:03.916323 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:40:03.984257 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 14 00:40:03.993004 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 14 00:40:04.002751 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 14 00:40:04.015189 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 14 00:40:04.033525 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 14 00:40:04.045728 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 14 00:40:04.045776 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:40:04.067086 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:40:04.085177 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 14 00:40:04.366793 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 14 00:40:04.487399 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 14 00:40:04.529745 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 14 00:40:04.570161 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 14 00:40:04.591634 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 14 00:40:04.597567 systemd[1]: Reached target network-online.target - Network is Online. Mar 14 00:40:04.607485 systemd[1]: Reached target sockets.target - Socket Units. 
Mar 14 00:40:04.613332 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:40:04.642184 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:40:04.642369 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:40:04.701237 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:40:04.715775 systemd[1]: Starting containerd.service - containerd container runtime... Mar 14 00:40:04.733903 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 14 00:40:04.748207 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 14 00:40:04.793767 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 14 00:40:04.828618 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 14 00:40:04.864695 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 14 00:40:04.871932 jq[1439]: false Mar 14 00:40:04.887264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:40:04.912181 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Mar 14 00:40:04.924431 extend-filesystems[1440]: Found loop3 Mar 14 00:40:04.924431 extend-filesystems[1440]: Found loop4 Mar 14 00:40:04.924431 extend-filesystems[1440]: Found loop5 Mar 14 00:40:04.924431 extend-filesystems[1440]: Found sr0 Mar 14 00:40:04.924431 extend-filesystems[1440]: Found vda Mar 14 00:40:04.924431 extend-filesystems[1440]: Found vda1 Mar 14 00:40:04.924431 extend-filesystems[1440]: Found vda2 Mar 14 00:40:04.924431 extend-filesystems[1440]: Found vda3 Mar 14 00:40:04.924431 extend-filesystems[1440]: Found usr Mar 14 00:40:04.924431 extend-filesystems[1440]: Found vda4 Mar 14 00:40:04.924431 extend-filesystems[1440]: Found vda6 Mar 14 00:40:04.924431 extend-filesystems[1440]: Found vda7 Mar 14 00:40:05.269017 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1345) Mar 14 00:40:05.269164 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 14 00:40:04.975193 dbus-daemon[1438]: [system] SELinux support is enabled Mar 14 00:40:05.045457 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:40:05.273771 extend-filesystems[1440]: Found vda9 Mar 14 00:40:05.273771 extend-filesystems[1440]: Checking size of /dev/vda9 Mar 14 00:40:05.273771 extend-filesystems[1440]: Resized partition /dev/vda9 Mar 14 00:40:05.204971 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 14 00:40:05.303554 extend-filesystems[1449]: resize2fs 1.47.1 (20-May-2024) Mar 14 00:40:05.270410 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 14 00:40:05.283582 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 14 00:40:05.297619 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 14 00:40:05.394568 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Mar 14 00:40:05.404229 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 14 00:40:05.432767 systemd[1]: Starting update-engine.service - Update Engine... Mar 14 00:40:05.503056 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 14 00:40:05.512405 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 14 00:40:05.528721 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 14 00:40:05.533025 jq[1468]: true Mar 14 00:40:05.540224 update_engine[1465]: I20260314 00:40:05.540064 1465 main.cc:92] Flatcar Update Engine starting Mar 14 00:40:05.543268 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 14 00:40:05.611535 update_engine[1465]: I20260314 00:40:05.543041 1465 update_check_scheduler.cc:74] Next update check in 6m8s Mar 14 00:40:05.591094 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 14 00:40:05.595464 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 14 00:40:05.598199 systemd[1]: motdgen.service: Deactivated successfully. Mar 14 00:40:05.603518 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 14 00:40:05.616320 systemd-logind[1459]: Watching system buttons on /dev/input/event1 (Power Button) Mar 14 00:40:05.616419 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 14 00:40:05.619341 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 14 00:40:05.619748 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 14 00:40:05.619748 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 14 00:40:05.619748 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Mar 14 00:40:05.713067 extend-filesystems[1440]: Resized filesystem in /dev/vda9 Mar 14 00:40:05.621973 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 14 00:40:05.622469 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 14 00:40:05.622549 systemd-logind[1459]: New seat seat0. Mar 14 00:40:05.651735 systemd[1]: Started systemd-logind.service - User Login Management. Mar 14 00:40:05.726407 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 14 00:40:05.726944 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 14 00:40:05.834301 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 14 00:40:05.834677 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 14 00:40:05.863657 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 14 00:40:05.915086 jq[1475]: true Mar 14 00:40:05.919474 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:40:06.042277 dbus-daemon[1438]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 14 00:40:06.059622 tar[1474]: linux-amd64/LICENSE Mar 14 00:40:06.061269 tar[1474]: linux-amd64/helm Mar 14 00:40:06.104426 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 14 00:40:06.281681 systemd[1]: Started update-engine.service - Update Engine. Mar 14 00:40:06.337445 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 14 00:40:06.345599 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 14 00:40:06.346390 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Mar 14 00:40:06.346614 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 14 00:40:06.366414 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 14 00:40:06.366640 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 14 00:40:06.427339 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 14 00:40:06.510681 systemd[1]: issuegen.service: Deactivated successfully. Mar 14 00:40:06.511500 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:40:06.531435 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:40:06.794779 bash[1520]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:40:06.797799 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 14 00:40:06.828998 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 14 00:40:07.071012 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:40:07.122348 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 14 00:40:07.135216 locksmithd[1514]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:40:07.228982 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 14 00:40:07.236037 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:40:09.711440 containerd[1476]: time="2026-03-14T00:40:09.709451153Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:40:09.804542 containerd[1476]: time="2026-03-14T00:40:09.804468475Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Mar 14 00:40:09.822355 containerd[1476]: time="2026-03-14T00:40:09.822180539Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:40:09.824996 containerd[1476]: time="2026-03-14T00:40:09.823068547Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:40:09.824996 containerd[1476]: time="2026-03-14T00:40:09.823226342Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 14 00:40:09.824996 containerd[1476]: time="2026-03-14T00:40:09.823787399Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 14 00:40:09.824996 containerd[1476]: time="2026-03-14T00:40:09.823960963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 14 00:40:09.824996 containerd[1476]: time="2026-03-14T00:40:09.824233111Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:40:09.824996 containerd[1476]: time="2026-03-14T00:40:09.824258137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:40:09.824996 containerd[1476]: time="2026-03-14T00:40:09.824542779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:40:09.824996 containerd[1476]: time="2026-03-14T00:40:09.824563117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 14 00:40:09.824996 containerd[1476]: time="2026-03-14T00:40:09.824580810Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:40:09.824996 containerd[1476]: time="2026-03-14T00:40:09.824594987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 14 00:40:09.824996 containerd[1476]: time="2026-03-14T00:40:09.824717836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:40:09.826005 containerd[1476]: time="2026-03-14T00:40:09.825980413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:40:09.826426 containerd[1476]: time="2026-03-14T00:40:09.826400237Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:40:09.826504 containerd[1476]: time="2026-03-14T00:40:09.826481419Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 14 00:40:09.826756 containerd[1476]: time="2026-03-14T00:40:09.826727698Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 14 00:40:09.827015 containerd[1476]: time="2026-03-14T00:40:09.826992533Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:40:09.886004 containerd[1476]: time="2026-03-14T00:40:09.883792856Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 14 00:40:09.896926 containerd[1476]: time="2026-03-14T00:40:09.895232147Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 14 00:40:09.896926 containerd[1476]: time="2026-03-14T00:40:09.896913826Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:40:09.897079 containerd[1476]: time="2026-03-14T00:40:09.896947679Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:40:09.897079 containerd[1476]: time="2026-03-14T00:40:09.897013692Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 14 00:40:09.907344 containerd[1476]: time="2026-03-14T00:40:09.906517486Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:40:09.914406 containerd[1476]: time="2026-03-14T00:40:09.910026126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 14 00:40:09.914406 containerd[1476]: time="2026-03-14T00:40:09.910750228Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 14 00:40:09.914406 containerd[1476]: time="2026-03-14T00:40:09.910776247Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 14 00:40:09.914406 containerd[1476]: time="2026-03-14T00:40:09.910801854Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Mar 14 00:40:09.914406 containerd[1476]: time="2026-03-14T00:40:09.910934071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 14 00:40:09.914406 containerd[1476]: time="2026-03-14T00:40:09.910951614Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 14 00:40:09.914406 containerd[1476]: time="2026-03-14T00:40:09.910967574Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:40:09.914406 containerd[1476]: time="2026-03-14T00:40:09.910985577Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 14 00:40:09.914406 containerd[1476]: time="2026-03-14T00:40:09.911008299Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 14 00:40:09.914406 containerd[1476]: time="2026-03-14T00:40:09.911028658Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:40:09.914406 containerd[1476]: time="2026-03-14T00:40:09.911183046Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 14 00:40:09.914406 containerd[1476]: time="2026-03-14T00:40:09.911212702Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 14 00:40:09.914406 containerd[1476]: time="2026-03-14T00:40:09.911289705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 14 00:40:09.914406 containerd[1476]: time="2026-03-14T00:40:09.911397877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Mar 14 00:40:09.914920 containerd[1476]: time="2026-03-14T00:40:09.911429436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 14 00:40:09.914920 containerd[1476]: time="2026-03-14T00:40:09.911445887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 14 00:40:09.914920 containerd[1476]: time="2026-03-14T00:40:09.911601778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:40:09.914920 containerd[1476]: time="2026-03-14T00:40:09.911661469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:40:09.914920 containerd[1476]: time="2026-03-14T00:40:09.911678571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:40:09.914920 containerd[1476]: time="2026-03-14T00:40:09.911700252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 14 00:40:09.914920 containerd[1476]: time="2026-03-14T00:40:09.912645447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:40:09.914920 containerd[1476]: time="2026-03-14T00:40:09.912697964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 14 00:40:09.914920 containerd[1476]: time="2026-03-14T00:40:09.912721168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 14 00:40:09.914920 containerd[1476]: time="2026-03-14T00:40:09.912748548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 14 00:40:09.914920 containerd[1476]: time="2026-03-14T00:40:09.912935217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 14 00:40:09.914920 containerd[1476]: time="2026-03-14T00:40:09.912963530Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 14 00:40:09.914920 containerd[1476]: time="2026-03-14T00:40:09.913243092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 14 00:40:09.914920 containerd[1476]: time="2026-03-14T00:40:09.913280232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 14 00:40:09.914920 containerd[1476]: time="2026-03-14T00:40:09.913299117Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:40:09.915373 containerd[1476]: time="2026-03-14T00:40:09.913490283Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 14 00:40:09.915373 containerd[1476]: time="2026-03-14T00:40:09.913529837Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 14 00:40:09.915373 containerd[1476]: time="2026-03-14T00:40:09.913545296Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 14 00:40:09.915373 containerd[1476]: time="2026-03-14T00:40:09.913560344Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 14 00:40:09.915373 containerd[1476]: time="2026-03-14T00:40:09.913573980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 14 00:40:09.915373 containerd[1476]: time="2026-03-14T00:40:09.913655943Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Mar 14 00:40:09.915373 containerd[1476]: time="2026-03-14T00:40:09.913676532Z" level=info msg="NRI interface is disabled by configuration." Mar 14 00:40:09.915373 containerd[1476]: time="2026-03-14T00:40:09.913690016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 14 00:40:09.915615 containerd[1476]: time="2026-03-14T00:40:09.914579146Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 14 00:40:09.915615 containerd[1476]: time="2026-03-14T00:40:09.914919252Z" level=info msg="Connect containerd service" Mar 14 00:40:09.915615 containerd[1476]: time="2026-03-14T00:40:09.914974725Z" level=info msg="using legacy CRI server" Mar 14 00:40:09.915615 containerd[1476]: time="2026-03-14T00:40:09.914985154Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 14 00:40:09.915615 containerd[1476]: time="2026-03-14T00:40:09.915315291Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 14 00:40:09.918016 containerd[1476]: time="2026-03-14T00:40:09.917460015Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:40:09.919374 containerd[1476]: time="2026-03-14T00:40:09.918074961Z" level=info msg="Start subscribing containerd event" Mar 14 
00:40:09.919374 containerd[1476]: time="2026-03-14T00:40:09.918772816Z" level=info msg="Start recovering state" Mar 14 00:40:09.919374 containerd[1476]: time="2026-03-14T00:40:09.918958643Z" level=info msg="Start event monitor" Mar 14 00:40:09.919374 containerd[1476]: time="2026-03-14T00:40:09.918975644Z" level=info msg="Start snapshots syncer" Mar 14 00:40:09.919374 containerd[1476]: time="2026-03-14T00:40:09.918991785Z" level=info msg="Start cni network conf syncer for default" Mar 14 00:40:09.919374 containerd[1476]: time="2026-03-14T00:40:09.919004599Z" level=info msg="Start streaming server" Mar 14 00:40:09.920452 containerd[1476]: time="2026-03-14T00:40:09.918422482Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 14 00:40:09.920508 containerd[1476]: time="2026-03-14T00:40:09.920457070Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 14 00:40:09.920913 systemd[1]: Started containerd.service - containerd container runtime. Mar 14 00:40:09.934000 containerd[1476]: time="2026-03-14T00:40:09.921395973Z" level=info msg="containerd successfully booted in 0.215912s" Mar 14 00:40:10.300373 tar[1474]: linux-amd64/README.md Mar 14 00:40:10.438519 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 14 00:40:12.192933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:40:12.209585 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:40:12.213511 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 14 00:40:12.221547 systemd[1]: Startup finished in 5.167s (kernel) + 28.625s (initrd) + 24.703s (userspace) = 58.496s. Mar 14 00:40:12.937365 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 14 00:40:12.948969 systemd[1]: Started sshd@0-10.0.0.138:22-10.0.0.1:53348.service - OpenSSH per-connection server daemon (10.0.0.1:53348). 
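[editor's note] The `failed to load cni during init` error in the containerd startup above is benign at this stage: the CRI plugin keeps retrying and recovers as soon as a network config appears in `/etc/cni/net.d` (here the cni network conf syncer is already started). A minimal sketch of what such a conflist could look like, assuming the stock bridge/portmap plugins — the network name and subnet below are illustrative, not taken from this log:

```json
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Dropped into `/etc/cni/net.d/` (e.g. as `10-containerd-net.conflist`), the syncer picks it up without a containerd restart. Note `NetworkPluginConfDir:/etc/cni/net.d` and `NetworkPluginBinDir:/opt/cni/bin` in the CRI config dump above, which is where this file and the plugin binaries are expected.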
Mar 14 00:40:13.092061 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 53348 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:40:13.097080 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:40:13.123706 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:40:13.144904 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 14 00:40:13.152098 systemd-logind[1459]: New session 1 of user core. Mar 14 00:40:13.171557 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 14 00:40:13.194346 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 14 00:40:13.211073 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:40:13.265802 kubelet[1548]: E0314 00:40:13.265683 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:40:13.294539 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:40:13.294752 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:40:13.295432 systemd[1]: kubelet.service: Consumed 4.247s CPU time. Mar 14 00:40:13.425436 systemd[1564]: Queued start job for default target default.target. Mar 14 00:40:13.440090 systemd[1564]: Created slice app.slice - User Application Slice. Mar 14 00:40:13.440212 systemd[1564]: Reached target paths.target - Paths. Mar 14 00:40:13.440228 systemd[1564]: Reached target timers.target - Timers. Mar 14 00:40:13.443092 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Mar 14 00:40:13.468680 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 14 00:40:13.469542 systemd[1564]: Reached target sockets.target - Sockets. Mar 14 00:40:13.469623 systemd[1564]: Reached target basic.target - Basic System. Mar 14 00:40:13.470376 systemd[1564]: Reached target default.target - Main User Target. Mar 14 00:40:13.470452 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 14 00:40:13.470643 systemd[1564]: Startup finished in 244ms. Mar 14 00:40:13.473797 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 14 00:40:13.552374 systemd[1]: Started sshd@1-10.0.0.138:22-10.0.0.1:53362.service - OpenSSH per-connection server daemon (10.0.0.1:53362). Mar 14 00:40:13.616495 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 53362 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:40:13.619382 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:40:13.633933 systemd-logind[1459]: New session 2 of user core. Mar 14 00:40:13.652411 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 14 00:40:13.727074 sshd[1577]: pam_unix(sshd:session): session closed for user core Mar 14 00:40:13.742625 systemd[1]: sshd@1-10.0.0.138:22-10.0.0.1:53362.service: Deactivated successfully. Mar 14 00:40:13.745625 systemd[1]: session-2.scope: Deactivated successfully. Mar 14 00:40:13.753218 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit. Mar 14 00:40:13.762674 systemd[1]: Started sshd@2-10.0.0.138:22-10.0.0.1:53378.service - OpenSSH per-connection server daemon (10.0.0.1:53378). Mar 14 00:40:13.773608 systemd-logind[1459]: Removed session 2. 
Mar 14 00:40:13.814896 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 53378 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:40:13.817488 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:40:13.837919 systemd-logind[1459]: New session 3 of user core. Mar 14 00:40:13.846611 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 14 00:40:13.914652 sshd[1584]: pam_unix(sshd:session): session closed for user core Mar 14 00:40:13.926991 systemd[1]: sshd@2-10.0.0.138:22-10.0.0.1:53378.service: Deactivated successfully. Mar 14 00:40:13.929745 systemd[1]: session-3.scope: Deactivated successfully. Mar 14 00:40:13.934948 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit. Mar 14 00:40:13.953309 systemd[1]: Started sshd@3-10.0.0.138:22-10.0.0.1:53384.service - OpenSSH per-connection server daemon (10.0.0.1:53384). Mar 14 00:40:13.959326 systemd-logind[1459]: Removed session 3. Mar 14 00:40:14.035115 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 53384 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:40:14.038543 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:40:14.052772 systemd-logind[1459]: New session 4 of user core. Mar 14 00:40:14.066752 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 14 00:40:14.160262 sshd[1591]: pam_unix(sshd:session): session closed for user core Mar 14 00:40:14.176348 systemd[1]: sshd@3-10.0.0.138:22-10.0.0.1:53384.service: Deactivated successfully. Mar 14 00:40:14.179573 systemd[1]: session-4.scope: Deactivated successfully. Mar 14 00:40:14.183232 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit. Mar 14 00:40:14.207699 systemd[1]: Started sshd@4-10.0.0.138:22-10.0.0.1:53388.service - OpenSSH per-connection server daemon (10.0.0.1:53388). Mar 14 00:40:14.211715 systemd-logind[1459]: Removed session 4. 
Mar 14 00:40:14.284261 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 53388 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:40:14.287312 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:40:14.303214 systemd-logind[1459]: New session 5 of user core. Mar 14 00:40:14.310452 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 14 00:40:14.440348 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 14 00:40:14.441409 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:40:14.470405 sudo[1602]: pam_unix(sudo:session): session closed for user root Mar 14 00:40:14.474324 sshd[1599]: pam_unix(sshd:session): session closed for user core Mar 14 00:40:14.491542 systemd[1]: sshd@4-10.0.0.138:22-10.0.0.1:53388.service: Deactivated successfully. Mar 14 00:40:14.494582 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:40:14.497793 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:40:14.514795 systemd[1]: Started sshd@5-10.0.0.138:22-10.0.0.1:53394.service - OpenSSH per-connection server daemon (10.0.0.1:53394). Mar 14 00:40:14.520316 systemd-logind[1459]: Removed session 5. Mar 14 00:40:14.575343 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 53394 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:40:14.581738 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:40:14.592796 systemd-logind[1459]: New session 6 of user core. Mar 14 00:40:14.607764 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 14 00:40:14.684568 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 14 00:40:14.685323 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:40:14.694735 sudo[1612]: pam_unix(sudo:session): session closed for user root Mar 14 00:40:14.706523 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 14 00:40:14.707267 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:40:14.742585 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 14 00:40:14.750584 auditctl[1615]: No rules Mar 14 00:40:14.753043 systemd[1]: audit-rules.service: Deactivated successfully. Mar 14 00:40:14.753639 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 14 00:40:14.758718 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:40:14.867234 augenrules[1633]: No rules Mar 14 00:40:14.869036 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:40:14.872766 sudo[1611]: pam_unix(sudo:session): session closed for user root Mar 14 00:40:14.880973 sshd[1607]: pam_unix(sshd:session): session closed for user core Mar 14 00:40:14.902329 systemd[1]: sshd@5-10.0.0.138:22-10.0.0.1:53394.service: Deactivated successfully. Mar 14 00:40:14.905709 systemd[1]: session-6.scope: Deactivated successfully. Mar 14 00:40:14.911779 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit. Mar 14 00:40:14.928500 systemd[1]: Started sshd@6-10.0.0.138:22-10.0.0.1:53404.service - OpenSSH per-connection server daemon (10.0.0.1:53404). Mar 14 00:40:14.932734 systemd-logind[1459]: Removed session 6. 
Mar 14 00:40:15.012631 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 53404 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:40:15.018041 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:40:15.038343 systemd-logind[1459]: New session 7 of user core. Mar 14 00:40:15.049525 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 14 00:40:15.130732 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 14 00:40:15.131539 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:40:15.818605 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 14 00:40:15.824217 (dockerd)[1664]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 14 00:40:16.520977 dockerd[1664]: time="2026-03-14T00:40:16.519672731Z" level=info msg="Starting up" Mar 14 00:40:16.917931 dockerd[1664]: time="2026-03-14T00:40:16.915529049Z" level=info msg="Loading containers: start." Mar 14 00:40:17.326610 kernel: Initializing XFRM netlink socket Mar 14 00:40:17.688614 systemd-networkd[1379]: docker0: Link UP Mar 14 00:40:17.774435 dockerd[1664]: time="2026-03-14T00:40:17.774120456Z" level=info msg="Loading containers: done." Mar 14 00:40:17.807008 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3170641110-merged.mount: Deactivated successfully. 
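[editor's note] The `docker.service: Referenced but unset environment variable` message above is informational: the unit references `DOCKER_OPTS` and friends so operators can inject flags, and unset variables expand to empty strings. If values are wanted, a hedged sketch of a systemd drop-in (path and value below are hypothetical, not from this log):

```ini
# /etc/systemd/system/docker.service.d/10-opts.conf — illustrative override
[Service]
Environment="DOCKER_OPTS=--log-level=warn"
```

Followed by `systemctl daemon-reload` and `systemctl restart docker`, the daemon would start with the extra flags; leaving the variables unset, as in this boot, is equally valid.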
Mar 14 00:40:17.813797 dockerd[1664]: time="2026-03-14T00:40:17.813403495Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 14 00:40:17.814136 dockerd[1664]: time="2026-03-14T00:40:17.814046217Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 14 00:40:17.814797 dockerd[1664]: time="2026-03-14T00:40:17.814454960Z" level=info msg="Daemon has completed initialization" Mar 14 00:40:17.927982 dockerd[1664]: time="2026-03-14T00:40:17.927665337Z" level=info msg="API listen on /run/docker.sock" Mar 14 00:40:17.929312 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 14 00:40:19.117136 containerd[1476]: time="2026-03-14T00:40:19.116915352Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 14 00:40:19.914762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4089610681.mount: Deactivated successfully. Mar 14 00:40:24.136002 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 14 00:40:24.604458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:40:34.143514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:40:34.192317 (kubelet)[1876]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:40:35.411025 kubelet[1876]: E0314 00:40:35.394316 1876 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:40:35.535175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:40:35.536048 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:40:35.538381 systemd[1]: kubelet.service: Consumed 8.388s CPU time. Mar 14 00:40:36.408093 containerd[1476]: time="2026-03-14T00:40:36.405786219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:36.419076 containerd[1476]: time="2026-03-14T00:40:36.409930616Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 14 00:40:36.419076 containerd[1476]: time="2026-03-14T00:40:36.414694380Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:36.431380 containerd[1476]: time="2026-03-14T00:40:36.430765921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:36.434305 containerd[1476]: time="2026-03-14T00:40:36.434108526Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id 
\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 17.317136241s" Mar 14 00:40:36.434418 containerd[1476]: time="2026-03-14T00:40:36.434386097Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 14 00:40:36.437568 containerd[1476]: time="2026-03-14T00:40:36.437495394Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 14 00:40:45.631006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 14 00:40:45.711707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:40:45.826615 containerd[1476]: time="2026-03-14T00:40:45.812343734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:45.884726 containerd[1476]: time="2026-03-14T00:40:45.884276993Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 14 00:40:45.896078 containerd[1476]: time="2026-03-14T00:40:45.894314427Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:45.908659 containerd[1476]: time="2026-03-14T00:40:45.908601949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:45.912377 containerd[1476]: time="2026-03-14T00:40:45.912232964Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 9.474664893s" Mar 14 00:40:45.912377 containerd[1476]: time="2026-03-14T00:40:45.912367844Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 14 00:40:45.914977 containerd[1476]: time="2026-03-14T00:40:45.914567255Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 14 00:40:48.877233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:40:48.888224 (kubelet)[1898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:40:50.005986 kubelet[1898]: E0314 00:40:50.002407 1898 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:40:50.035730 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:40:50.036192 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:40:50.037057 systemd[1]: kubelet.service: Consumed 2.974s CPU time. Mar 14 00:40:50.785003 update_engine[1465]: I20260314 00:40:50.777073 1465 update_attempter.cc:509] Updating boot flags... 
Mar 14 00:40:51.026979 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1918) Mar 14 00:40:51.925730 containerd[1476]: time="2026-03-14T00:40:51.925353847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:51.928090 containerd[1476]: time="2026-03-14T00:40:51.927919776Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 14 00:40:51.930751 containerd[1476]: time="2026-03-14T00:40:51.930687770Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:51.937778 containerd[1476]: time="2026-03-14T00:40:51.937679906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:51.941228 containerd[1476]: time="2026-03-14T00:40:51.939460232Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 6.024853498s" Mar 14 00:40:51.941228 containerd[1476]: time="2026-03-14T00:40:51.939557653Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 14 00:40:51.941948 containerd[1476]: time="2026-03-14T00:40:51.941691451Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 14 00:40:54.136357 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1713532524.mount: Deactivated successfully. Mar 14 00:40:54.713211 containerd[1476]: time="2026-03-14T00:40:54.713085283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:54.716949 containerd[1476]: time="2026-03-14T00:40:54.714978013Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 14 00:40:54.718163 containerd[1476]: time="2026-03-14T00:40:54.718048165Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:54.722779 containerd[1476]: time="2026-03-14T00:40:54.722497177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:54.725038 containerd[1476]: time="2026-03-14T00:40:54.723439020Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 2.781707406s" Mar 14 00:40:54.725038 containerd[1476]: time="2026-03-14T00:40:54.723527206Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 14 00:40:54.725038 containerd[1476]: time="2026-03-14T00:40:54.724590029Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 14 00:40:55.252520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1115076526.mount: Deactivated successfully. 
Mar 14 00:40:57.277278 containerd[1476]: time="2026-03-14T00:40:57.277091181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:57.279013 containerd[1476]: time="2026-03-14T00:40:57.278914214Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 14 00:40:57.281296 containerd[1476]: time="2026-03-14T00:40:57.281181733Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:57.287216 containerd[1476]: time="2026-03-14T00:40:57.287109316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:57.288966 containerd[1476]: time="2026-03-14T00:40:57.288652794Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 2.564036097s" Mar 14 00:40:57.288966 containerd[1476]: time="2026-03-14T00:40:57.288880303Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 14 00:40:57.289886 containerd[1476]: time="2026-03-14T00:40:57.289637404Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 14 00:40:57.872042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2648688383.mount: Deactivated successfully. 
Mar 14 00:40:57.896307 containerd[1476]: time="2026-03-14T00:40:57.895783648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:57.900780 containerd[1476]: time="2026-03-14T00:40:57.899001640Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 14 00:40:57.900780 containerd[1476]: time="2026-03-14T00:40:57.900734696Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:57.910507 containerd[1476]: time="2026-03-14T00:40:57.909035876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:40:57.910507 containerd[1476]: time="2026-03-14T00:40:57.910172628Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 620.426439ms" Mar 14 00:40:57.910507 containerd[1476]: time="2026-03-14T00:40:57.910213651Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 14 00:40:57.912794 containerd[1476]: time="2026-03-14T00:40:57.912611425Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 14 00:40:58.568918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount48520689.mount: Deactivated successfully. Mar 14 00:41:00.089421 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Mar 14 00:41:00.102635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:41:00.402706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:41:00.425802 (kubelet)[2048]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:41:00.532989 kubelet[2048]: E0314 00:41:00.532713 2048 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:41:00.539138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:41:00.539440 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:41:00.912640 containerd[1476]: time="2026-03-14T00:41:00.912330274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:41:00.914629 containerd[1476]: time="2026-03-14T00:41:00.914502902Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 14 00:41:00.917044 containerd[1476]: time="2026-03-14T00:41:00.916996259Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:41:00.924736 containerd[1476]: time="2026-03-14T00:41:00.924323770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:41:00.930003 containerd[1476]: time="2026-03-14T00:41:00.926329504Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 3.013557491s" Mar 14 00:41:00.930003 containerd[1476]: time="2026-03-14T00:41:00.926377861Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 14 00:41:03.295446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:41:03.312664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:41:03.381307 systemd[1]: Reloading requested from client PID 2102 ('systemctl') (unit session-7.scope)... Mar 14 00:41:03.381487 systemd[1]: Reloading... Mar 14 00:41:03.566994 zram_generator::config[2140]: No configuration found. Mar 14 00:41:03.881569 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:41:04.009984 systemd[1]: Reloading finished in 626 ms. Mar 14 00:41:04.128929 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 14 00:41:04.129089 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 14 00:41:04.129669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:41:04.157962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:41:05.033630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:41:05.078638 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:41:05.265193 kubelet[2189]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:41:05.507103 kubelet[2189]: I0314 00:41:05.506975 2189 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 14 00:41:05.507103 kubelet[2189]: I0314 00:41:05.507085 2189 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:41:05.507103 kubelet[2189]: I0314 00:41:05.507114 2189 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 14 00:41:05.507397 kubelet[2189]: I0314 00:41:05.507123 2189 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 14 00:41:05.508768 kubelet[2189]: I0314 00:41:05.507545 2189 server.go:951] "Client rotation is on, will bootstrap in background" Mar 14 00:41:05.755486 kubelet[2189]: E0314 00:41:05.754291 2189 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:41:05.764583 kubelet[2189]: I0314 00:41:05.763432 2189 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:41:05.777935 kubelet[2189]: E0314 00:41:05.777294 2189 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:41:05.777935 kubelet[2189]: I0314 
00:41:05.777383 2189 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 14 00:41:05.800938 kubelet[2189]: I0314 00:41:05.800668 2189 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 14 00:41:05.803202 kubelet[2189]: I0314 00:41:05.802745 2189 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:41:05.803986 kubelet[2189]: I0314 00:41:05.803102 2189 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":
-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:41:05.803986 kubelet[2189]: I0314 00:41:05.803707 2189 topology_manager.go:143] "Creating topology manager with none policy" Mar 14 00:41:05.803986 kubelet[2189]: I0314 00:41:05.803729 2189 container_manager_linux.go:308] "Creating device plugin manager" Mar 14 00:41:05.805420 kubelet[2189]: I0314 00:41:05.805169 2189 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 14 00:41:05.811755 kubelet[2189]: I0314 00:41:05.811343 2189 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 14 00:41:05.811967 kubelet[2189]: I0314 00:41:05.811928 2189 kubelet.go:482] "Attempting to sync node with API server" Mar 14 00:41:05.811967 kubelet[2189]: I0314 00:41:05.811948 2189 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:41:05.812053 kubelet[2189]: I0314 00:41:05.811990 2189 kubelet.go:394] "Adding apiserver pod source" Mar 14 00:41:05.812053 kubelet[2189]: I0314 00:41:05.812004 2189 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:41:05.820360 kubelet[2189]: I0314 00:41:05.820208 2189 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:41:05.824703 kubelet[2189]: I0314 00:41:05.824513 2189 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:41:05.824703 kubelet[2189]: I0314 00:41:05.824594 2189 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 14 00:41:05.824957 kubelet[2189]: W0314 00:41:05.824723 2189 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does 
not exist. Recreating. Mar 14 00:41:05.832615 kubelet[2189]: I0314 00:41:05.831778 2189 server.go:1257] "Started kubelet" Mar 14 00:41:05.835014 kubelet[2189]: I0314 00:41:05.834696 2189 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:41:05.840644 kubelet[2189]: I0314 00:41:05.840557 2189 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 14 00:41:05.840644 kubelet[2189]: I0314 00:41:05.840605 2189 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:41:05.844441 kubelet[2189]: I0314 00:41:05.844356 2189 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:41:05.850923 kubelet[2189]: I0314 00:41:05.848587 2189 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:41:05.850923 kubelet[2189]: I0314 00:41:05.848748 2189 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 14 00:41:05.850923 kubelet[2189]: I0314 00:41:05.848957 2189 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 14 00:41:05.850923 kubelet[2189]: E0314 00:41:05.849298 2189 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:41:05.850923 kubelet[2189]: I0314 00:41:05.850080 2189 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:41:05.854018 kubelet[2189]: I0314 00:41:05.853714 2189 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 00:41:05.855059 kubelet[2189]: I0314 00:41:05.854994 2189 reconciler.go:29] "Reconciler: start to sync state" Mar 14 00:41:05.857436 kubelet[2189]: E0314 00:41:05.857397 2189 controller.go:201] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="200ms" Mar 14 00:41:05.861441 kubelet[2189]: I0314 00:41:05.861108 2189 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:41:05.863865 kubelet[2189]: E0314 00:41:05.861184 2189 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.138:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.138:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c8e58eb3892c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-14 00:41:05.831752384 +0000 UTC m=+0.733678947,LastTimestamp:2026-03-14 00:41:05.831752384 +0000 UTC m=+0.733678947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 14 00:41:05.868385 kubelet[2189]: I0314 00:41:05.868282 2189 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:41:05.868598 kubelet[2189]: I0314 00:41:05.868396 2189 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:41:05.869630 kubelet[2189]: E0314 00:41:05.869333 2189 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:41:05.871547 kubelet[2189]: I0314 00:41:05.871324 2189 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 14 00:41:05.903115 kubelet[2189]: I0314 00:41:05.902762 2189 cpu_manager.go:225] "Starting" policy="none" Mar 14 00:41:05.903115 kubelet[2189]: I0314 00:41:05.902788 2189 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 14 00:41:05.903115 kubelet[2189]: I0314 00:41:05.902909 2189 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 14 00:41:05.911954 kubelet[2189]: I0314 00:41:05.911401 2189 policy_none.go:50] "Start" Mar 14 00:41:05.911954 kubelet[2189]: I0314 00:41:05.911464 2189 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 14 00:41:05.911954 kubelet[2189]: I0314 00:41:05.911478 2189 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 14 00:41:05.918410 kubelet[2189]: I0314 00:41:05.918114 2189 policy_none.go:44] "Start" Mar 14 00:41:05.935685 kubelet[2189]: I0314 00:41:05.935509 2189 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 14 00:41:05.935685 kubelet[2189]: I0314 00:41:05.935658 2189 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 14 00:41:05.935957 kubelet[2189]: I0314 00:41:05.935701 2189 kubelet.go:2501] "Starting kubelet main sync loop" Mar 14 00:41:05.935957 kubelet[2189]: E0314 00:41:05.935783 2189 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:41:05.951146 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 14 00:41:05.951500 kubelet[2189]: E0314 00:41:05.951168 2189 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:41:05.989310 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 14 00:41:06.021494 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 14 00:41:06.029744 kubelet[2189]: E0314 00:41:06.029419 2189 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:41:06.029976 kubelet[2189]: I0314 00:41:06.029785 2189 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 14 00:41:06.029976 kubelet[2189]: I0314 00:41:06.029801 2189 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:41:06.032398 kubelet[2189]: I0314 00:41:06.032061 2189 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 14 00:41:06.035792 kubelet[2189]: E0314 00:41:06.035761 2189 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:41:06.036304 kubelet[2189]: E0314 00:41:06.036278 2189 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 14 00:41:06.057446 kubelet[2189]: I0314 00:41:06.055655 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 14 00:41:06.057446 kubelet[2189]: I0314 00:41:06.056003 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c90966eba7fbe4f73b576151c42a06e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7c90966eba7fbe4f73b576151c42a06e\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:41:06.057446 kubelet[2189]: I0314 00:41:06.056040 2189 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c90966eba7fbe4f73b576151c42a06e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7c90966eba7fbe4f73b576151c42a06e\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:41:06.057446 kubelet[2189]: I0314 00:41:06.056065 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c90966eba7fbe4f73b576151c42a06e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7c90966eba7fbe4f73b576151c42a06e\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:41:06.057446 kubelet[2189]: I0314 00:41:06.056340 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:41:06.057722 kubelet[2189]: I0314 00:41:06.056431 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:41:06.057722 kubelet[2189]: E0314 00:41:06.056541 2189 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.138:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.138:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c8e58eb3892c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-14 00:41:05.831752384 +0000 UTC m=+0.733678947,LastTimestamp:2026-03-14 00:41:05.831752384 +0000 UTC m=+0.733678947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 14 00:41:06.057722 kubelet[2189]: I0314 00:41:06.056769 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:41:06.057722 kubelet[2189]: I0314 00:41:06.057058 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:41:06.058303 kubelet[2189]: I0314 00:41:06.057118 2189 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:41:06.059350 kubelet[2189]: E0314 00:41:06.059297 2189 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.138:6443: connect: connection refused" interval="400ms" Mar 14 00:41:06.067912 systemd[1]: Created slice kubepods-burstable-pod7c90966eba7fbe4f73b576151c42a06e.slice - libcontainer container kubepods-burstable-pod7c90966eba7fbe4f73b576151c42a06e.slice. Mar 14 00:41:06.089095 kubelet[2189]: E0314 00:41:06.088974 2189 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:41:06.096654 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. Mar 14 00:41:06.102971 kubelet[2189]: E0314 00:41:06.102719 2189 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:41:06.106301 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. 
Mar 14 00:41:06.110037 kubelet[2189]: E0314 00:41:06.109944 2189 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:41:06.137454 kubelet[2189]: I0314 00:41:06.137201 2189 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 14 00:41:06.138277 kubelet[2189]: E0314 00:41:06.138037 2189 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Mar 14 00:41:06.341325 kubelet[2189]: I0314 00:41:06.340798 2189 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 14 00:41:06.342262 kubelet[2189]: E0314 00:41:06.341786 2189 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Mar 14 00:41:06.396388 kubelet[2189]: E0314 00:41:06.396149 2189 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:41:06.399443 containerd[1476]: time="2026-03-14T00:41:06.399038435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7c90966eba7fbe4f73b576151c42a06e,Namespace:kube-system,Attempt:0,}" Mar 14 00:41:06.408957 kubelet[2189]: E0314 00:41:06.408591 2189 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:41:06.410493 containerd[1476]: time="2026-03-14T00:41:06.410359830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 14 00:41:06.415917 
kubelet[2189]: E0314 00:41:06.415679 2189 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:41:06.417002 containerd[1476]: time="2026-03-14T00:41:06.416644544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 14 00:41:06.461902 kubelet[2189]: E0314 00:41:06.461301 2189 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="800ms" Mar 14 00:41:06.746468 kubelet[2189]: I0314 00:41:06.745933 2189 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 14 00:41:06.746591 kubelet[2189]: E0314 00:41:06.746469 2189 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Mar 14 00:41:06.895581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1123808500.mount: Deactivated successfully. 
Mar 14 00:41:06.917692 containerd[1476]: time="2026-03-14T00:41:06.917385849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:41:06.932596 containerd[1476]: time="2026-03-14T00:41:06.931948048Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 14 00:41:06.937630 containerd[1476]: time="2026-03-14T00:41:06.937491014Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:41:06.942316 containerd[1476]: time="2026-03-14T00:41:06.941622492Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:41:06.945228 containerd[1476]: time="2026-03-14T00:41:06.944248822Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:41:06.949137 containerd[1476]: time="2026-03-14T00:41:06.948692280Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:41:06.952726 containerd[1476]: time="2026-03-14T00:41:06.952436309Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:41:06.957105 containerd[1476]: time="2026-03-14T00:41:06.957028107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:41:06.959307 
containerd[1476]: time="2026-03-14T00:41:06.959091295Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 559.778609ms" Mar 14 00:41:06.966103 containerd[1476]: time="2026-03-14T00:41:06.965462115Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 548.682203ms" Mar 14 00:41:06.973013 containerd[1476]: time="2026-03-14T00:41:06.972771207Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 562.257336ms" Mar 14 00:41:07.234299 containerd[1476]: time="2026-03-14T00:41:07.232456237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:41:07.234299 containerd[1476]: time="2026-03-14T00:41:07.232783645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:41:07.234299 containerd[1476]: time="2026-03-14T00:41:07.232953777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:41:07.234564 containerd[1476]: time="2026-03-14T00:41:07.233728924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:41:07.237716 containerd[1476]: time="2026-03-14T00:41:07.237098909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:41:07.240965 containerd[1476]: time="2026-03-14T00:41:07.240027677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:41:07.242918 containerd[1476]: time="2026-03-14T00:41:07.242636339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:41:07.245757 containerd[1476]: time="2026-03-14T00:41:07.243240123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:41:07.254924 containerd[1476]: time="2026-03-14T00:41:07.249013479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:41:07.254924 containerd[1476]: time="2026-03-14T00:41:07.249088206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:41:07.254924 containerd[1476]: time="2026-03-14T00:41:07.249102452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:41:07.254924 containerd[1476]: time="2026-03-14T00:41:07.249292380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:41:07.262747 kubelet[2189]: E0314 00:41:07.262595 2189 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="1.6s" Mar 14 00:41:07.280966 systemd[1]: Started cri-containerd-934a16583b75442292800affd1d9a1b5b8ad1785136a1bf9bb7652563286f648.scope - libcontainer container 934a16583b75442292800affd1d9a1b5b8ad1785136a1bf9bb7652563286f648. Mar 14 00:41:07.349343 systemd[1]: Started cri-containerd-64cfda8c758f03b7008654b0845324dc240a24e36a0cd345e04df6f1cdf74ece.scope - libcontainer container 64cfda8c758f03b7008654b0845324dc240a24e36a0cd345e04df6f1cdf74ece. Mar 14 00:41:07.357350 systemd[1]: Started cri-containerd-6f746792c1022803fe936534ae5582a123171c01ea21b54660d53f7fd039558d.scope - libcontainer container 6f746792c1022803fe936534ae5582a123171c01ea21b54660d53f7fd039558d. 
Mar 14 00:41:07.460315 containerd[1476]: time="2026-03-14T00:41:07.458566221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"934a16583b75442292800affd1d9a1b5b8ad1785136a1bf9bb7652563286f648\""
Mar 14 00:41:07.503764 kubelet[2189]: E0314 00:41:07.491004 2189 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:07.667987 containerd[1476]: time="2026-03-14T00:41:07.667282403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f746792c1022803fe936534ae5582a123171c01ea21b54660d53f7fd039558d\""
Mar 14 00:41:07.731759 kubelet[2189]: E0314 00:41:07.729779 2189 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:07.816743 containerd[1476]: time="2026-03-14T00:41:07.798217971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7c90966eba7fbe4f73b576151c42a06e,Namespace:kube-system,Attempt:0,} returns sandbox id \"64cfda8c758f03b7008654b0845324dc240a24e36a0cd345e04df6f1cdf74ece\""
Mar 14 00:41:07.835305 kubelet[2189]: I0314 00:41:07.809077 2189 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:41:07.891953 kubelet[2189]: E0314 00:41:07.823352 2189 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost"
Mar 14 00:41:07.996536 kubelet[2189]: E0314 00:41:07.994972 2189 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:08.007114 kubelet[2189]: E0314 00:41:08.006405 2189 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:41:08.014709 containerd[1476]: time="2026-03-14T00:41:08.014017458Z" level=info msg="CreateContainer within sandbox \"934a16583b75442292800affd1d9a1b5b8ad1785136a1bf9bb7652563286f648\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 14 00:41:08.019427 containerd[1476]: time="2026-03-14T00:41:08.019320865Z" level=info msg="CreateContainer within sandbox \"6f746792c1022803fe936534ae5582a123171c01ea21b54660d53f7fd039558d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 14 00:41:08.032388 containerd[1476]: time="2026-03-14T00:41:08.031951058Z" level=info msg="CreateContainer within sandbox \"64cfda8c758f03b7008654b0845324dc240a24e36a0cd345e04df6f1cdf74ece\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 14 00:41:08.103706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3798359925.mount: Deactivated successfully.
Mar 14 00:41:08.148297 containerd[1476]: time="2026-03-14T00:41:08.147758807Z" level=info msg="CreateContainer within sandbox \"934a16583b75442292800affd1d9a1b5b8ad1785136a1bf9bb7652563286f648\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"241fe24923e97554e165c0eee03e5b1cdb0e9c0d974ec811e517000e41de38ab\""
Mar 14 00:41:08.154050 containerd[1476]: time="2026-03-14T00:41:08.153649862Z" level=info msg="StartContainer for \"241fe24923e97554e165c0eee03e5b1cdb0e9c0d974ec811e517000e41de38ab\""
Mar 14 00:41:08.188790 containerd[1476]: time="2026-03-14T00:41:08.188410554Z" level=info msg="CreateContainer within sandbox \"6f746792c1022803fe936534ae5582a123171c01ea21b54660d53f7fd039558d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eec1d40d38281379dfcb45e3e886237c3dfc0df88237ee33611ac2a6df5e0949\""
Mar 14 00:41:08.189991 containerd[1476]: time="2026-03-14T00:41:08.189326361Z" level=info msg="StartContainer for \"eec1d40d38281379dfcb45e3e886237c3dfc0df88237ee33611ac2a6df5e0949\""
Mar 14 00:41:08.197949 containerd[1476]: time="2026-03-14T00:41:08.197515195Z" level=info msg="CreateContainer within sandbox \"64cfda8c758f03b7008654b0845324dc240a24e36a0cd345e04df6f1cdf74ece\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"508b356bc93eb201d81a74a8ef6ffb50b00308d7651076c598ce03ef56b30d3c\""
Mar 14 00:41:08.200745 containerd[1476]: time="2026-03-14T00:41:08.198711065Z" level=info msg="StartContainer for \"508b356bc93eb201d81a74a8ef6ffb50b00308d7651076c598ce03ef56b30d3c\""
Mar 14 00:41:08.298271 systemd[1]: Started cri-containerd-241fe24923e97554e165c0eee03e5b1cdb0e9c0d974ec811e517000e41de38ab.scope - libcontainer container 241fe24923e97554e165c0eee03e5b1cdb0e9c0d974ec811e517000e41de38ab.
Mar 14 00:41:08.321704 systemd[1]: Started cri-containerd-eec1d40d38281379dfcb45e3e886237c3dfc0df88237ee33611ac2a6df5e0949.scope - libcontainer container eec1d40d38281379dfcb45e3e886237c3dfc0df88237ee33611ac2a6df5e0949.
Mar 14 00:41:08.339740 systemd[1]: Started cri-containerd-508b356bc93eb201d81a74a8ef6ffb50b00308d7651076c598ce03ef56b30d3c.scope - libcontainer container 508b356bc93eb201d81a74a8ef6ffb50b00308d7651076c598ce03ef56b30d3c.
Mar 14 00:41:08.463169 containerd[1476]: time="2026-03-14T00:41:08.461658179Z" level=info msg="StartContainer for \"241fe24923e97554e165c0eee03e5b1cdb0e9c0d974ec811e517000e41de38ab\" returns successfully"
Mar 14 00:41:08.463169 containerd[1476]: time="2026-03-14T00:41:08.462007149Z" level=info msg="StartContainer for \"eec1d40d38281379dfcb45e3e886237c3dfc0df88237ee33611ac2a6df5e0949\" returns successfully"
Mar 14 00:41:08.506228 containerd[1476]: time="2026-03-14T00:41:08.506091945Z" level=info msg="StartContainer for \"508b356bc93eb201d81a74a8ef6ffb50b00308d7651076c598ce03ef56b30d3c\" returns successfully"
Mar 14 00:41:09.031371 kubelet[2189]: E0314 00:41:09.031331 2189 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:41:09.032527 kubelet[2189]: E0314 00:41:09.032285 2189 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:09.034411 kubelet[2189]: E0314 00:41:09.034068 2189 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:41:09.034411 kubelet[2189]: E0314 00:41:09.034349 2189 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:09.039014 kubelet[2189]: E0314 00:41:09.038248 2189 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:41:09.039014 kubelet[2189]: E0314 00:41:09.038471 2189 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:09.497952 kubelet[2189]: I0314 00:41:09.497723 2189 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:41:10.042438 kubelet[2189]: E0314 00:41:10.042273 2189 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:41:10.043563 kubelet[2189]: E0314 00:41:10.042474 2189 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:10.044354 kubelet[2189]: E0314 00:41:10.044203 2189 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:41:10.045348 kubelet[2189]: E0314 00:41:10.045319 2189 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:41:10.047794 kubelet[2189]: E0314 00:41:10.047583 2189 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:10.047794 kubelet[2189]: E0314 00:41:10.047693 2189 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:11.048013 kubelet[2189]: E0314 00:41:11.047681 2189 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:41:11.048013 kubelet[2189]: E0314 00:41:11.047923 2189 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:11.052287 kubelet[2189]: E0314 00:41:11.051953 2189 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:41:11.052287 kubelet[2189]: E0314 00:41:11.052187 2189 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:11.153406 kubelet[2189]: E0314 00:41:11.153288 2189 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 14 00:41:11.343917 kubelet[2189]: I0314 00:41:11.342004 2189 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 14 00:41:11.350469 kubelet[2189]: I0314 00:41:11.350373 2189 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:41:11.468767 kubelet[2189]: E0314 00:41:11.468506 2189 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:41:11.468767 kubelet[2189]: I0314 00:41:11.468546 2189 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:41:11.481776 kubelet[2189]: E0314 00:41:11.481250 2189 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:41:11.481776 kubelet[2189]: I0314 00:41:11.481356 2189 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:41:11.491314 kubelet[2189]: E0314 00:41:11.488011 2189 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:41:11.821941 kubelet[2189]: I0314 00:41:11.821191 2189 apiserver.go:52] "Watching apiserver"
Mar 14 00:41:11.961712 kubelet[2189]: I0314 00:41:11.958365 2189 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 14 00:41:12.624677 kubelet[2189]: I0314 00:41:12.624465 2189 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:41:12.651491 kubelet[2189]: E0314 00:41:12.651335 2189 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:13.052143 kubelet[2189]: E0314 00:41:13.051339 2189 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:15.132925 kubelet[2189]: I0314 00:41:15.128919 2189 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:41:15.146439 systemd[1]: Reloading requested from client PID 2480 ('systemctl') (unit session-7.scope)...
Mar 14 00:41:15.146519 systemd[1]: Reloading...
Mar 14 00:41:15.161679 kubelet[2189]: E0314 00:41:15.161165 2189 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:15.344275 zram_generator::config[2519]: No configuration found.
Mar 14 00:41:15.667762 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:41:15.898399 systemd[1]: Reloading finished in 748 ms.
Mar 14 00:41:16.012498 kubelet[2189]: I0314 00:41:16.012081 2189 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.012058435 podStartE2EDuration="1.012058435s" podCreationTimestamp="2026-03-14 00:41:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:41:16.011512818 +0000 UTC m=+10.913439412" watchObservedRunningTime="2026-03-14 00:41:16.012058435 +0000 UTC m=+10.913985008"
Mar 14 00:41:16.024734 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:41:16.056434 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 00:41:16.058312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:41:16.058518 systemd[1]: kubelet.service: Consumed 3.245s CPU time, 129.2M memory peak, 0B memory swap peak.
Mar 14 00:41:16.080590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:41:16.544987 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:41:16.568445 (kubelet)[2564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:41:16.730045 kubelet[2564]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:41:16.783248 kubelet[2564]: I0314 00:41:16.782133 2564 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 14 00:41:16.783248 kubelet[2564]: I0314 00:41:16.782271 2564 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:41:16.783248 kubelet[2564]: I0314 00:41:16.782302 2564 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 14 00:41:16.783248 kubelet[2564]: I0314 00:41:16.782311 2564 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:41:16.783248 kubelet[2564]: I0314 00:41:16.782741 2564 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 14 00:41:16.790147 kubelet[2564]: I0314 00:41:16.789614 2564 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 14 00:41:16.797766 kubelet[2564]: I0314 00:41:16.797503 2564 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:41:16.811541 kubelet[2564]: E0314 00:41:16.810260 2564 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:41:16.811541 kubelet[2564]: I0314 00:41:16.810319 2564 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:41:16.819149 kubelet[2564]: I0314 00:41:16.818351 2564 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:41:16.819318 kubelet[2564]: I0314 00:41:16.819233 2564 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:41:16.820259 kubelet[2564]: I0314 00:41:16.819266 2564 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:41:16.820259 kubelet[2564]: I0314 00:41:16.819485 2564 topology_manager.go:143] "Creating topology manager with none policy"
Mar 14 00:41:16.820259 kubelet[2564]: I0314 00:41:16.819498 2564 container_manager_linux.go:308] "Creating device plugin manager"
Mar 14 00:41:16.820259 kubelet[2564]: I0314 00:41:16.819529 2564 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:41:16.820259 kubelet[2564]: I0314 00:41:16.820030 2564 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 14 00:41:16.821260 kubelet[2564]: I0314 00:41:16.820745 2564 kubelet.go:482] "Attempting to sync node with API server"
Mar 14 00:41:16.821260 kubelet[2564]: I0314 00:41:16.820775 2564 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:41:16.823129 kubelet[2564]: I0314 00:41:16.820800 2564 kubelet.go:394] "Adding apiserver pod source"
Mar 14 00:41:16.825015 kubelet[2564]: I0314 00:41:16.824737 2564 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:41:16.832454 kubelet[2564]: I0314 00:41:16.832336 2564 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:41:16.845575 kubelet[2564]: I0314 00:41:16.844998 2564 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:41:16.845575 kubelet[2564]: I0314 00:41:16.845098 2564 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:41:16.861649 kubelet[2564]: I0314 00:41:16.860425 2564 server.go:1257] "Started kubelet"
Mar 14 00:41:16.861649 kubelet[2564]: I0314 00:41:16.860712 2564 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:41:16.861649 kubelet[2564]: I0314 00:41:16.860761 2564 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:41:16.861649 kubelet[2564]: I0314 00:41:16.861353 2564 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:41:16.864749 kubelet[2564]: I0314 00:41:16.864322 2564 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:41:16.874626 kubelet[2564]: I0314 00:41:16.872738 2564 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 14 00:41:16.876318 kubelet[2564]: I0314 00:41:16.876251 2564 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:41:16.880134 kubelet[2564]: I0314 00:41:16.878504 2564 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 14 00:41:16.880134 kubelet[2564]: I0314 00:41:16.878603 2564 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:41:16.880134 kubelet[2564]: I0314 00:41:16.878780 2564 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:41:16.880419 kubelet[2564]: I0314 00:41:16.880398 2564 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:41:16.882749 kubelet[2564]: I0314 00:41:16.882223 2564 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:41:16.899050 kubelet[2564]: E0314 00:41:16.896366 2564 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:41:16.899050 kubelet[2564]: I0314 00:41:16.896576 2564 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:41:16.899050 kubelet[2564]: I0314 00:41:16.896592 2564 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:41:16.964134 kubelet[2564]: I0314 00:41:16.964024 2564 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:41:16.982117 kubelet[2564]: I0314 00:41:16.982075 2564 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:41:16.982576 kubelet[2564]: I0314 00:41:16.982299 2564 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 14 00:41:16.982734 kubelet[2564]: I0314 00:41:16.982721 2564 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 14 00:41:16.984148 kubelet[2564]: E0314 00:41:16.983694 2564 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:41:17.087761 kubelet[2564]: E0314 00:41:17.084451 2564 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 14 00:41:17.133101 kubelet[2564]: I0314 00:41:17.132670 2564 cpu_manager.go:225] "Starting" policy="none"
Mar 14 00:41:17.133287 kubelet[2564]: I0314 00:41:17.133187 2564 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 14 00:41:17.133287 kubelet[2564]: I0314 00:41:17.133217 2564 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 14 00:41:17.136736 kubelet[2564]: I0314 00:41:17.133391 2564 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 14 00:41:17.136736 kubelet[2564]: I0314 00:41:17.133406 2564 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 14 00:41:17.136736 kubelet[2564]: I0314 00:41:17.133427 2564 policy_none.go:50] "Start"
Mar 14 00:41:17.136736 kubelet[2564]: I0314 00:41:17.133439 2564 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:41:17.136736 kubelet[2564]: I0314 00:41:17.133455 2564 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:41:17.136736 kubelet[2564]: I0314 00:41:17.133590 2564 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 14 00:41:17.136736 kubelet[2564]: I0314 00:41:17.133601 2564 policy_none.go:44] "Start"
Mar 14 00:41:17.154186 kubelet[2564]: E0314 00:41:17.154122 2564 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:41:17.154400 kubelet[2564]: I0314 00:41:17.154384 2564 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 14 00:41:17.154452 kubelet[2564]: I0314 00:41:17.154402 2564 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:41:17.157140 kubelet[2564]: I0314 00:41:17.156480 2564 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 14 00:41:17.162406 kubelet[2564]: E0314 00:41:17.160103 2564 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:41:17.291639 kubelet[2564]: I0314 00:41:17.291605 2564 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:41:17.301611 kubelet[2564]: I0314 00:41:17.293719 2564 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:41:17.302042 kubelet[2564]: I0314 00:41:17.302015 2564 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:41:17.321677 kubelet[2564]: I0314 00:41:17.321636 2564 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:41:17.332602 kubelet[2564]: E0314 00:41:17.331551 2564 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:41:17.360117 kubelet[2564]: E0314 00:41:17.358653 2564 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:41:17.371336 kubelet[2564]: I0314 00:41:17.371296 2564 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Mar 14 00:41:17.373004 kubelet[2564]: I0314 00:41:17.371572 2564 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 14 00:41:17.398760 kubelet[2564]: I0314 00:41:17.398438 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:41:17.398760 kubelet[2564]: I0314 00:41:17.398741 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:41:17.398760 kubelet[2564]: I0314 00:41:17.398782 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:41:17.399450 kubelet[2564]: I0314 00:41:17.399001 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 14 00:41:17.399450 kubelet[2564]: I0314 00:41:17.399036 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c90966eba7fbe4f73b576151c42a06e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7c90966eba7fbe4f73b576151c42a06e\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:41:17.399450 kubelet[2564]: I0314 00:41:17.399059 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c90966eba7fbe4f73b576151c42a06e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7c90966eba7fbe4f73b576151c42a06e\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:41:17.399450 kubelet[2564]: I0314 00:41:17.399088 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c90966eba7fbe4f73b576151c42a06e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7c90966eba7fbe4f73b576151c42a06e\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:41:17.399450 kubelet[2564]: I0314 00:41:17.399113 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:41:17.399647 kubelet[2564]: I0314 00:41:17.399155 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:41:17.637475 kubelet[2564]: E0314 00:41:17.635657 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:17.637475 kubelet[2564]: E0314 00:41:17.635791 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:17.669281 kubelet[2564]: E0314 00:41:17.666604 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:17.828712 kubelet[2564]: I0314 00:41:17.828246 2564 apiserver.go:52] "Watching apiserver"
Mar 14 00:41:17.879060 kubelet[2564]: I0314 00:41:17.878788 2564 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 14 00:41:18.027544 kubelet[2564]: E0314 00:41:18.027237 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:18.031073 kubelet[2564]: E0314 00:41:18.029470 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:18.031073 kubelet[2564]: E0314 00:41:18.030135 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:18.051471 kubelet[2564]: I0314 00:41:18.051393 2564 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.051373292 podStartE2EDuration="1.051373292s" podCreationTimestamp="2026-03-14 00:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:41:18.00332006 +0000 UTC m=+1.411862048" watchObservedRunningTime="2026-03-14 00:41:18.051373292 +0000 UTC m=+1.459915280"
Mar 14 00:41:19.034634 kubelet[2564]: E0314 00:41:19.034528 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:19.035777 kubelet[2564]: E0314 00:41:19.035100 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:19.466721 kubelet[2564]: I0314 00:41:19.466628 2564 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 14 00:41:19.467529 containerd[1476]: time="2026-03-14T00:41:19.467451251Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 14 00:41:19.468184 kubelet[2564]: I0314 00:41:19.468010 2564 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 14 00:41:20.039537 kubelet[2564]: E0314 00:41:20.039340 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:20.041296 kubelet[2564]: E0314 00:41:20.040393 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:21.106071 kubelet[2564]: I0314 00:41:21.105114 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7123c2e-d2fb-4d6a-9242-75a672275ac6-xtables-lock\") pod \"kube-proxy-chdbk\" (UID: \"c7123c2e-d2fb-4d6a-9242-75a672275ac6\") " pod="kube-system/kube-proxy-chdbk"
Mar 14 00:41:21.106071 kubelet[2564]: I0314 00:41:21.109464 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7123c2e-d2fb-4d6a-9242-75a672275ac6-lib-modules\") pod \"kube-proxy-chdbk\" (UID: \"c7123c2e-d2fb-4d6a-9242-75a672275ac6\") " pod="kube-system/kube-proxy-chdbk"
Mar 14 00:41:21.106071 kubelet[2564]: I0314 00:41:21.109631 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp6rk\" (UniqueName: \"kubernetes.io/projected/c7123c2e-d2fb-4d6a-9242-75a672275ac6-kube-api-access-pp6rk\") pod \"kube-proxy-chdbk\" (UID: \"c7123c2e-d2fb-4d6a-9242-75a672275ac6\") " pod="kube-system/kube-proxy-chdbk"
Mar 14 00:41:21.106071 kubelet[2564]: I0314 00:41:21.109677 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c7123c2e-d2fb-4d6a-9242-75a672275ac6-kube-proxy\") pod \"kube-proxy-chdbk\" (UID: \"c7123c2e-d2fb-4d6a-9242-75a672275ac6\") " pod="kube-system/kube-proxy-chdbk"
Mar 14 00:41:21.339604 systemd[1]: Created slice kubepods-besteffort-podc7123c2e_d2fb_4d6a_9242_75a672275ac6.slice - libcontainer container kubepods-besteffort-podc7123c2e_d2fb_4d6a_9242_75a672275ac6.slice.
Mar 14 00:41:22.077374 kubelet[2564]: E0314 00:41:22.075345 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:22.509019 containerd[1476]: time="2026-03-14T00:41:22.491533581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-chdbk,Uid:c7123c2e-d2fb-4d6a-9242-75a672275ac6,Namespace:kube-system,Attempt:0,}"
Mar 14 00:41:22.599685 kubelet[2564]: I0314 00:41:22.597298 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2f94b30f-82f3-429b-b1e0-f55cd0ef6451-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-r7sv9\" (UID: \"2f94b30f-82f3-429b-b1e0-f55cd0ef6451\") " pod="tigera-operator/tigera-operator-6cf4cccc57-r7sv9"
Mar 14 00:41:22.622577 kubelet[2564]: I0314 00:41:22.620467 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wxzb\" (UniqueName: \"kubernetes.io/projected/2f94b30f-82f3-429b-b1e0-f55cd0ef6451-kube-api-access-2wxzb\") pod \"tigera-operator-6cf4cccc57-r7sv9\" (UID: \"2f94b30f-82f3-429b-b1e0-f55cd0ef6451\") " pod="tigera-operator/tigera-operator-6cf4cccc57-r7sv9"
Mar 14 00:41:22.899310 systemd[1]: Created slice kubepods-besteffort-pod2f94b30f_82f3_429b_b1e0_f55cd0ef6451.slice - libcontainer container kubepods-besteffort-pod2f94b30f_82f3_429b_b1e0_f55cd0ef6451.slice.
Mar 14 00:41:23.132964 containerd[1476]: time="2026-03-14T00:41:23.112800590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:41:23.132964 containerd[1476]: time="2026-03-14T00:41:23.117685444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:41:23.132964 containerd[1476]: time="2026-03-14T00:41:23.119001853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:41:23.132964 containerd[1476]: time="2026-03-14T00:41:23.120120427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:41:23.334050 containerd[1476]: time="2026-03-14T00:41:23.306023039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-r7sv9,Uid:2f94b30f-82f3-429b-b1e0-f55cd0ef6451,Namespace:tigera-operator,Attempt:0,}"
Mar 14 00:41:23.499943 systemd[1]: Started cri-containerd-9740c3e03ac598030d18cbd754f45628dd459640b3b918d3db1ef355e78c07b7.scope - libcontainer container 9740c3e03ac598030d18cbd754f45628dd459640b3b918d3db1ef355e78c07b7.
Mar 14 00:41:23.988697 containerd[1476]: time="2026-03-14T00:41:23.971473089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:41:23.988697 containerd[1476]: time="2026-03-14T00:41:23.972059767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:41:23.988697 containerd[1476]: time="2026-03-14T00:41:23.972088531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:41:23.988697 containerd[1476]: time="2026-03-14T00:41:23.977153137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:41:26.004500 containerd[1476]: time="2026-03-14T00:41:26.003577539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-chdbk,Uid:c7123c2e-d2fb-4d6a-9242-75a672275ac6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9740c3e03ac598030d18cbd754f45628dd459640b3b918d3db1ef355e78c07b7\"" Mar 14 00:41:26.185645 kubelet[2564]: E0314 00:41:26.182091 2564 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.187s" Mar 14 00:41:26.310655 kubelet[2564]: E0314 00:41:26.190443 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:41:26.322115 kubelet[2564]: E0314 00:41:26.309800 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:41:28.281439 kubelet[2564]: E0314 00:41:28.275596 2564 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.278s" Mar 14 00:41:28.281439 kubelet[2564]: E0314 00:41:28.278329 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:41:28.385909 containerd[1476]: time="2026-03-14T00:41:28.384688869Z" level=info msg="CreateContainer within sandbox \"9740c3e03ac598030d18cbd754f45628dd459640b3b918d3db1ef355e78c07b7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:41:28.434642 systemd[1]: Started 
cri-containerd-df79755889abae09313996f244fd5347874f0172e0e5930eb6fba1615691b259.scope - libcontainer container df79755889abae09313996f244fd5347874f0172e0e5930eb6fba1615691b259. Mar 14 00:41:28.484375 kubelet[2564]: E0314 00:41:28.481959 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:41:29.022160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4280406159.mount: Deactivated successfully. Mar 14 00:41:29.107704 containerd[1476]: time="2026-03-14T00:41:29.106938392Z" level=info msg="CreateContainer within sandbox \"9740c3e03ac598030d18cbd754f45628dd459640b3b918d3db1ef355e78c07b7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"84dd518630918dd634e559b74e153de289bb6d1311828db83e1fd005df2b0e38\"" Mar 14 00:41:29.119907 containerd[1476]: time="2026-03-14T00:41:29.114322596Z" level=info msg="StartContainer for \"84dd518630918dd634e559b74e153de289bb6d1311828db83e1fd005df2b0e38\"" Mar 14 00:41:29.275606 containerd[1476]: time="2026-03-14T00:41:29.275435682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-r7sv9,Uid:2f94b30f-82f3-429b-b1e0-f55cd0ef6451,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"df79755889abae09313996f244fd5347874f0172e0e5930eb6fba1615691b259\"" Mar 14 00:41:29.281504 containerd[1476]: time="2026-03-14T00:41:29.280958473Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 14 00:41:29.422462 systemd[1]: Started cri-containerd-84dd518630918dd634e559b74e153de289bb6d1311828db83e1fd005df2b0e38.scope - libcontainer container 84dd518630918dd634e559b74e153de289bb6d1311828db83e1fd005df2b0e38. 
Mar 14 00:41:29.560931 containerd[1476]: time="2026-03-14T00:41:29.560356377Z" level=info msg="StartContainer for \"84dd518630918dd634e559b74e153de289bb6d1311828db83e1fd005df2b0e38\" returns successfully" Mar 14 00:41:30.448532 kubelet[2564]: E0314 00:41:30.448496 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:41:31.073107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount614094840.mount: Deactivated successfully. Mar 14 00:41:31.459925 kubelet[2564]: E0314 00:41:31.456877 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:41:35.748469 kubelet[2564]: E0314 00:41:35.742457 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:41:35.881406 kubelet[2564]: I0314 00:41:35.880443 2564 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-chdbk" podStartSLOduration=15.880428163 podStartE2EDuration="15.880428163s" podCreationTimestamp="2026-03-14 00:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:41:30.484644811 +0000 UTC m=+13.893186799" watchObservedRunningTime="2026-03-14 00:41:35.880428163 +0000 UTC m=+19.288970172" Mar 14 00:41:37.657230 kubelet[2564]: E0314 00:41:37.656440 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:41:39.892437 containerd[1476]: time="2026-03-14T00:41:39.880275754Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:41:39.911477 containerd[1476]: time="2026-03-14T00:41:39.909202658Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 14 00:41:39.916933 containerd[1476]: time="2026-03-14T00:41:39.913662350Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:41:39.940116 containerd[1476]: time="2026-03-14T00:41:39.939073562Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:41:39.960918 containerd[1476]: time="2026-03-14T00:41:39.959050665Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 10.678033165s" Mar 14 00:41:39.960918 containerd[1476]: time="2026-03-14T00:41:39.959115734Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 14 00:41:39.993866 containerd[1476]: time="2026-03-14T00:41:39.993233940Z" level=info msg="CreateContainer within sandbox \"df79755889abae09313996f244fd5347874f0172e0e5930eb6fba1615691b259\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 14 00:41:40.048518 containerd[1476]: time="2026-03-14T00:41:40.045652645Z" level=info msg="CreateContainer within sandbox \"df79755889abae09313996f244fd5347874f0172e0e5930eb6fba1615691b259\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"923263f700e58e51ad46f2635e766ac9a657c2b5d06cd13b30829c363f3bb8f4\"" Mar 14 00:41:40.056291 containerd[1476]: time="2026-03-14T00:41:40.056250871Z" level=info msg="StartContainer for \"923263f700e58e51ad46f2635e766ac9a657c2b5d06cd13b30829c363f3bb8f4\"" Mar 14 00:41:40.484076 systemd[1]: run-containerd-runc-k8s.io-923263f700e58e51ad46f2635e766ac9a657c2b5d06cd13b30829c363f3bb8f4-runc.VbJdm1.mount: Deactivated successfully. Mar 14 00:41:40.526321 systemd[1]: Started cri-containerd-923263f700e58e51ad46f2635e766ac9a657c2b5d06cd13b30829c363f3bb8f4.scope - libcontainer container 923263f700e58e51ad46f2635e766ac9a657c2b5d06cd13b30829c363f3bb8f4. Mar 14 00:41:40.777945 containerd[1476]: time="2026-03-14T00:41:40.776163391Z" level=info msg="StartContainer for \"923263f700e58e51ad46f2635e766ac9a657c2b5d06cd13b30829c363f3bb8f4\" returns successfully" Mar 14 00:41:50.218208 sudo[1644]: pam_unix(sudo:session): session closed for user root Mar 14 00:41:50.223566 sshd[1641]: pam_unix(sshd:session): session closed for user core Mar 14 00:41:50.232125 systemd[1]: sshd@6-10.0.0.138:22-10.0.0.1:53404.service: Deactivated successfully. Mar 14 00:41:50.237069 systemd[1]: session-7.scope: Deactivated successfully. Mar 14 00:41:50.237504 systemd[1]: session-7.scope: Consumed 9.563s CPU time, 158.6M memory peak, 0B memory swap peak. Mar 14 00:41:50.240548 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:41:50.245527 systemd-logind[1459]: Removed session 7. 
Mar 14 00:41:55.241310 kubelet[2564]: I0314 00:41:55.241124 2564 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-r7sv9" podStartSLOduration=24.558209151 podStartE2EDuration="35.241109289s" podCreationTimestamp="2026-03-14 00:41:20 +0000 UTC" firstStartedPulling="2026-03-14 00:41:29.280439448 +0000 UTC m=+12.688981457" lastFinishedPulling="2026-03-14 00:41:39.963339608 +0000 UTC m=+23.371881595" observedRunningTime="2026-03-14 00:41:41.094485382 +0000 UTC m=+24.503027380" watchObservedRunningTime="2026-03-14 00:41:55.241109289 +0000 UTC m=+38.649651297" Mar 14 00:41:55.395525 systemd[1]: Created slice kubepods-besteffort-podaed79cd2_12b0_4800_aca9_6e120f77e6d3.slice - libcontainer container kubepods-besteffort-podaed79cd2_12b0_4800_aca9_6e120f77e6d3.slice. Mar 14 00:41:55.521883 kubelet[2564]: I0314 00:41:55.521348 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aed79cd2-12b0-4800-aca9-6e120f77e6d3-tigera-ca-bundle\") pod \"calico-typha-55d48cfd56-qclsc\" (UID: \"aed79cd2-12b0-4800-aca9-6e120f77e6d3\") " pod="calico-system/calico-typha-55d48cfd56-qclsc" Mar 14 00:41:55.521883 kubelet[2564]: I0314 00:41:55.521759 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc25h\" (UniqueName: \"kubernetes.io/projected/aed79cd2-12b0-4800-aca9-6e120f77e6d3-kube-api-access-vc25h\") pod \"calico-typha-55d48cfd56-qclsc\" (UID: \"aed79cd2-12b0-4800-aca9-6e120f77e6d3\") " pod="calico-system/calico-typha-55d48cfd56-qclsc" Mar 14 00:41:55.521883 kubelet[2564]: I0314 00:41:55.521800 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/aed79cd2-12b0-4800-aca9-6e120f77e6d3-typha-certs\") pod \"calico-typha-55d48cfd56-qclsc\" (UID: 
\"aed79cd2-12b0-4800-aca9-6e120f77e6d3\") " pod="calico-system/calico-typha-55d48cfd56-qclsc" Mar 14 00:41:55.569019 systemd[1]: Created slice kubepods-besteffort-podf0c2a2ca_eae5_473a_8632_91068d6ec1e5.slice - libcontainer container kubepods-besteffort-podf0c2a2ca_eae5_473a_8632_91068d6ec1e5.slice. Mar 14 00:41:55.626381 kubelet[2564]: I0314 00:41:55.622624 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f0c2a2ca-eae5-473a-8632-91068d6ec1e5-policysync\") pod \"calico-node-kzvpw\" (UID: \"f0c2a2ca-eae5-473a-8632-91068d6ec1e5\") " pod="calico-system/calico-node-kzvpw" Mar 14 00:41:55.626381 kubelet[2564]: I0314 00:41:55.622780 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0c2a2ca-eae5-473a-8632-91068d6ec1e5-xtables-lock\") pod \"calico-node-kzvpw\" (UID: \"f0c2a2ca-eae5-473a-8632-91068d6ec1e5\") " pod="calico-system/calico-node-kzvpw" Mar 14 00:41:55.626381 kubelet[2564]: I0314 00:41:55.622933 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a2ca-eae5-473a-8632-91068d6ec1e5-node-certs\") pod \"calico-node-kzvpw\" (UID: \"f0c2a2ca-eae5-473a-8632-91068d6ec1e5\") " pod="calico-system/calico-node-kzvpw" Mar 14 00:41:55.626381 kubelet[2564]: I0314 00:41:55.622973 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f0c2a2ca-eae5-473a-8632-91068d6ec1e5-cni-log-dir\") pod \"calico-node-kzvpw\" (UID: \"f0c2a2ca-eae5-473a-8632-91068d6ec1e5\") " pod="calico-system/calico-node-kzvpw" Mar 14 00:41:55.626381 kubelet[2564]: I0314 00:41:55.622998 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" 
(UniqueName: \"kubernetes.io/host-path/f0c2a2ca-eae5-473a-8632-91068d6ec1e5-sys-fs\") pod \"calico-node-kzvpw\" (UID: \"f0c2a2ca-eae5-473a-8632-91068d6ec1e5\") " pod="calico-system/calico-node-kzvpw" Mar 14 00:41:55.627018 kubelet[2564]: I0314 00:41:55.623021 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0c2a2ca-eae5-473a-8632-91068d6ec1e5-tigera-ca-bundle\") pod \"calico-node-kzvpw\" (UID: \"f0c2a2ca-eae5-473a-8632-91068d6ec1e5\") " pod="calico-system/calico-node-kzvpw" Mar 14 00:41:55.627018 kubelet[2564]: I0314 00:41:55.623041 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f0c2a2ca-eae5-473a-8632-91068d6ec1e5-var-lib-calico\") pod \"calico-node-kzvpw\" (UID: \"f0c2a2ca-eae5-473a-8632-91068d6ec1e5\") " pod="calico-system/calico-node-kzvpw" Mar 14 00:41:55.627018 kubelet[2564]: I0314 00:41:55.623063 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/f0c2a2ca-eae5-473a-8632-91068d6ec1e5-bpffs\") pod \"calico-node-kzvpw\" (UID: \"f0c2a2ca-eae5-473a-8632-91068d6ec1e5\") " pod="calico-system/calico-node-kzvpw" Mar 14 00:41:55.627018 kubelet[2564]: I0314 00:41:55.623084 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f0c2a2ca-eae5-473a-8632-91068d6ec1e5-var-run-calico\") pod \"calico-node-kzvpw\" (UID: \"f0c2a2ca-eae5-473a-8632-91068d6ec1e5\") " pod="calico-system/calico-node-kzvpw" Mar 14 00:41:55.627018 kubelet[2564]: I0314 00:41:55.623108 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/f0c2a2ca-eae5-473a-8632-91068d6ec1e5-cni-bin-dir\") pod \"calico-node-kzvpw\" (UID: \"f0c2a2ca-eae5-473a-8632-91068d6ec1e5\") " pod="calico-system/calico-node-kzvpw" Mar 14 00:41:55.627275 kubelet[2564]: I0314 00:41:55.623130 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f0c2a2ca-eae5-473a-8632-91068d6ec1e5-cni-net-dir\") pod \"calico-node-kzvpw\" (UID: \"f0c2a2ca-eae5-473a-8632-91068d6ec1e5\") " pod="calico-system/calico-node-kzvpw" Mar 14 00:41:55.627275 kubelet[2564]: I0314 00:41:55.623181 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f0c2a2ca-eae5-473a-8632-91068d6ec1e5-flexvol-driver-host\") pod \"calico-node-kzvpw\" (UID: \"f0c2a2ca-eae5-473a-8632-91068d6ec1e5\") " pod="calico-system/calico-node-kzvpw" Mar 14 00:41:55.627275 kubelet[2564]: I0314 00:41:55.623281 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0c2a2ca-eae5-473a-8632-91068d6ec1e5-lib-modules\") pod \"calico-node-kzvpw\" (UID: \"f0c2a2ca-eae5-473a-8632-91068d6ec1e5\") " pod="calico-system/calico-node-kzvpw" Mar 14 00:41:55.627275 kubelet[2564]: I0314 00:41:55.623311 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/f0c2a2ca-eae5-473a-8632-91068d6ec1e5-nodeproc\") pod \"calico-node-kzvpw\" (UID: \"f0c2a2ca-eae5-473a-8632-91068d6ec1e5\") " pod="calico-system/calico-node-kzvpw" Mar 14 00:41:55.627275 kubelet[2564]: I0314 00:41:55.623525 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm6tx\" (UniqueName: 
\"kubernetes.io/projected/f0c2a2ca-eae5-473a-8632-91068d6ec1e5-kube-api-access-zm6tx\") pod \"calico-node-kzvpw\" (UID: \"f0c2a2ca-eae5-473a-8632-91068d6ec1e5\") " pod="calico-system/calico-node-kzvpw" Mar 14 00:41:55.674359 kubelet[2564]: E0314 00:41:55.674132 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6px7x" podUID="fec3ef51-27dd-462a-9b02-64ae702e6505" Mar 14 00:41:55.718925 kubelet[2564]: E0314 00:41:55.715282 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:41:55.719456 containerd[1476]: time="2026-03-14T00:41:55.719339677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55d48cfd56-qclsc,Uid:aed79cd2-12b0-4800-aca9-6e120f77e6d3,Namespace:calico-system,Attempt:0,}" Mar 14 00:41:55.724556 kubelet[2564]: I0314 00:41:55.724408 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fec3ef51-27dd-462a-9b02-64ae702e6505-varrun\") pod \"csi-node-driver-6px7x\" (UID: \"fec3ef51-27dd-462a-9b02-64ae702e6505\") " pod="calico-system/csi-node-driver-6px7x" Mar 14 00:41:55.724677 kubelet[2564]: I0314 00:41:55.724608 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fec3ef51-27dd-462a-9b02-64ae702e6505-kubelet-dir\") pod \"csi-node-driver-6px7x\" (UID: \"fec3ef51-27dd-462a-9b02-64ae702e6505\") " pod="calico-system/csi-node-driver-6px7x" Mar 14 00:41:55.724677 kubelet[2564]: I0314 00:41:55.724633 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fec3ef51-27dd-462a-9b02-64ae702e6505-registration-dir\") pod \"csi-node-driver-6px7x\" (UID: \"fec3ef51-27dd-462a-9b02-64ae702e6505\") " pod="calico-system/csi-node-driver-6px7x" Mar 14 00:41:55.724731 kubelet[2564]: I0314 00:41:55.724680 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76npt\" (UniqueName: \"kubernetes.io/projected/fec3ef51-27dd-462a-9b02-64ae702e6505-kube-api-access-76npt\") pod \"csi-node-driver-6px7x\" (UID: \"fec3ef51-27dd-462a-9b02-64ae702e6505\") " pod="calico-system/csi-node-driver-6px7x" Mar 14 00:41:55.724731 kubelet[2564]: I0314 00:41:55.724703 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fec3ef51-27dd-462a-9b02-64ae702e6505-socket-dir\") pod \"csi-node-driver-6px7x\" (UID: \"fec3ef51-27dd-462a-9b02-64ae702e6505\") " pod="calico-system/csi-node-driver-6px7x" Mar 14 00:41:55.735002 kubelet[2564]: E0314 00:41:55.734950 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.735109 kubelet[2564]: W0314 00:41:55.735008 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.735109 kubelet[2564]: E0314 00:41:55.735083 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:41:55.757605 kubelet[2564]: E0314 00:41:55.757517 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.757605 kubelet[2564]: W0314 00:41:55.757546 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.757605 kubelet[2564]: E0314 00:41:55.757570 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:41:55.799501 containerd[1476]: time="2026-03-14T00:41:55.798970560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:41:55.800149 containerd[1476]: time="2026-03-14T00:41:55.799790647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:41:55.800476 containerd[1476]: time="2026-03-14T00:41:55.800018505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:41:55.801617 containerd[1476]: time="2026-03-14T00:41:55.801124388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:41:55.828189 kubelet[2564]: E0314 00:41:55.828104 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.828189 kubelet[2564]: W0314 00:41:55.828170 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.828375 kubelet[2564]: E0314 00:41:55.828197 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:41:55.829132 kubelet[2564]: E0314 00:41:55.828982 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.829203 kubelet[2564]: W0314 00:41:55.829153 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.829481 kubelet[2564]: E0314 00:41:55.829175 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:41:55.831481 kubelet[2564]: E0314 00:41:55.831432 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.831555 kubelet[2564]: W0314 00:41:55.831481 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.831555 kubelet[2564]: E0314 00:41:55.831503 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:41:55.832573 kubelet[2564]: E0314 00:41:55.832547 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.832573 kubelet[2564]: W0314 00:41:55.832560 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.832573 kubelet[2564]: E0314 00:41:55.832572 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:41:55.835322 kubelet[2564]: E0314 00:41:55.835154 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.836179 kubelet[2564]: W0314 00:41:55.835974 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.836179 kubelet[2564]: E0314 00:41:55.836002 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:41:55.836516 kubelet[2564]: E0314 00:41:55.836499 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.836607 kubelet[2564]: W0314 00:41:55.836589 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.836906 kubelet[2564]: E0314 00:41:55.836734 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:41:55.837515 kubelet[2564]: E0314 00:41:55.837498 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.837888 kubelet[2564]: W0314 00:41:55.837595 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.837888 kubelet[2564]: E0314 00:41:55.837615 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:41:55.838618 kubelet[2564]: E0314 00:41:55.838599 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.839143 kubelet[2564]: W0314 00:41:55.838916 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.839143 kubelet[2564]: E0314 00:41:55.838944 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:41:55.840770 kubelet[2564]: E0314 00:41:55.840748 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.841907 kubelet[2564]: W0314 00:41:55.841334 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.841907 kubelet[2564]: E0314 00:41:55.841357 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:41:55.842624 kubelet[2564]: E0314 00:41:55.842534 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.842624 kubelet[2564]: W0314 00:41:55.842599 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.842624 kubelet[2564]: E0314 00:41:55.842618 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:41:55.845089 kubelet[2564]: E0314 00:41:55.845070 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.845566 kubelet[2564]: W0314 00:41:55.845179 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.845566 kubelet[2564]: E0314 00:41:55.845200 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:41:55.849423 kubelet[2564]: E0314 00:41:55.849074 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.849423 kubelet[2564]: W0314 00:41:55.849093 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.849423 kubelet[2564]: E0314 00:41:55.849108 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:41:55.849941 kubelet[2564]: E0314 00:41:55.849679 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.849941 kubelet[2564]: W0314 00:41:55.849694 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.849941 kubelet[2564]: E0314 00:41:55.849707 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:41:55.851716 kubelet[2564]: E0314 00:41:55.850309 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.851716 kubelet[2564]: W0314 00:41:55.850329 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.851716 kubelet[2564]: E0314 00:41:55.850344 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:41:55.854895 kubelet[2564]: E0314 00:41:55.853156 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.854895 kubelet[2564]: W0314 00:41:55.853174 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.854895 kubelet[2564]: E0314 00:41:55.853190 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:41:55.854895 kubelet[2564]: E0314 00:41:55.854377 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.854895 kubelet[2564]: W0314 00:41:55.854392 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.854895 kubelet[2564]: E0314 00:41:55.854408 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:41:55.855329 kubelet[2564]: E0314 00:41:55.855172 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.855329 kubelet[2564]: W0314 00:41:55.855295 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.855329 kubelet[2564]: E0314 00:41:55.855313 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:41:55.856712 kubelet[2564]: E0314 00:41:55.856589 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.856712 kubelet[2564]: W0314 00:41:55.856676 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.856712 kubelet[2564]: E0314 00:41:55.856714 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:41:55.860605 kubelet[2564]: E0314 00:41:55.860493 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.860605 kubelet[2564]: W0314 00:41:55.860567 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.860605 kubelet[2564]: E0314 00:41:55.860601 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:41:55.862365 kubelet[2564]: E0314 00:41:55.862274 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.862440 kubelet[2564]: W0314 00:41:55.862406 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.862440 kubelet[2564]: E0314 00:41:55.862434 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:41:55.866269 kubelet[2564]: E0314 00:41:55.865680 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.866269 kubelet[2564]: W0314 00:41:55.865705 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.866269 kubelet[2564]: E0314 00:41:55.865731 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:41:55.869548 kubelet[2564]: E0314 00:41:55.869516 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.869678 kubelet[2564]: W0314 00:41:55.869651 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.869774 kubelet[2564]: E0314 00:41:55.869753 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:41:55.872301 kubelet[2564]: E0314 00:41:55.872176 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.872301 kubelet[2564]: W0314 00:41:55.872278 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.872423 kubelet[2564]: E0314 00:41:55.872305 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:41:55.873480 kubelet[2564]: E0314 00:41:55.873276 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.873480 kubelet[2564]: W0314 00:41:55.873333 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.873480 kubelet[2564]: E0314 00:41:55.873355 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:41:55.875198 kubelet[2564]: E0314 00:41:55.874907 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.875714 kubelet[2564]: W0314 00:41:55.875544 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.877644 kubelet[2564]: E0314 00:41:55.877316 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:41:55.879513 systemd[1]: Started cri-containerd-815c157c0ecfbbba54caea7443c324937364a4dcc5808ae2715c158c6f358846.scope - libcontainer container 815c157c0ecfbbba54caea7443c324937364a4dcc5808ae2715c158c6f358846. Mar 14 00:41:55.896156 containerd[1476]: time="2026-03-14T00:41:55.896070630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kzvpw,Uid:f0c2a2ca-eae5-473a-8632-91068d6ec1e5,Namespace:calico-system,Attempt:0,}" Mar 14 00:41:55.914451 kubelet[2564]: E0314 00:41:55.914310 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:41:55.914451 kubelet[2564]: W0314 00:41:55.914335 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:41:55.914451 kubelet[2564]: E0314 00:41:55.914362 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:41:55.989412 containerd[1476]: time="2026-03-14T00:41:55.988263890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:41:55.989412 containerd[1476]: time="2026-03-14T00:41:55.988621758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:41:55.989412 containerd[1476]: time="2026-03-14T00:41:55.988776452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:41:55.994171 containerd[1476]: time="2026-03-14T00:41:55.992594437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:41:56.021652 containerd[1476]: time="2026-03-14T00:41:56.021597931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55d48cfd56-qclsc,Uid:aed79cd2-12b0-4800-aca9-6e120f77e6d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"815c157c0ecfbbba54caea7443c324937364a4dcc5808ae2715c158c6f358846\"" Mar 14 00:41:56.024761 kubelet[2564]: E0314 00:41:56.024503 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:41:56.028743 containerd[1476]: time="2026-03-14T00:41:56.028409190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 14 00:41:56.065364 systemd[1]: Started cri-containerd-290f56b6bafbedc79931482e3a7b49bfaac6423dce3bc7f492885f6e5af622ea.scope - libcontainer container 290f56b6bafbedc79931482e3a7b49bfaac6423dce3bc7f492885f6e5af622ea. 
Mar 14 00:41:56.156703 containerd[1476]: time="2026-03-14T00:41:56.156327653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kzvpw,Uid:f0c2a2ca-eae5-473a-8632-91068d6ec1e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"290f56b6bafbedc79931482e3a7b49bfaac6423dce3bc7f492885f6e5af622ea\"" Mar 14 00:41:56.985563 kubelet[2564]: E0314 00:41:56.985498 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6px7x" podUID="fec3ef51-27dd-462a-9b02-64ae702e6505" Mar 14 00:41:57.419695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3734880636.mount: Deactivated successfully. Mar 14 00:41:58.985656 kubelet[2564]: E0314 00:41:58.985600 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6px7x" podUID="fec3ef51-27dd-462a-9b02-64ae702e6505" Mar 14 00:42:00.161009 containerd[1476]: time="2026-03-14T00:42:00.160728044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:00.164205 containerd[1476]: time="2026-03-14T00:42:00.163959536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 14 00:42:00.167702 containerd[1476]: time="2026-03-14T00:42:00.167663983Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:00.175082 containerd[1476]: time="2026-03-14T00:42:00.174763895Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:00.176502 containerd[1476]: time="2026-03-14T00:42:00.176296761Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 4.147827791s" Mar 14 00:42:00.176502 containerd[1476]: time="2026-03-14T00:42:00.176364466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 14 00:42:00.181991 containerd[1476]: time="2026-03-14T00:42:00.180703692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 14 00:42:00.222976 containerd[1476]: time="2026-03-14T00:42:00.222672403Z" level=info msg="CreateContainer within sandbox \"815c157c0ecfbbba54caea7443c324937364a4dcc5808ae2715c158c6f358846\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 14 00:42:00.280790 containerd[1476]: time="2026-03-14T00:42:00.280615222Z" level=info msg="CreateContainer within sandbox \"815c157c0ecfbbba54caea7443c324937364a4dcc5808ae2715c158c6f358846\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"aa6530900ad9e0522c6c4b2c09e8991bed28cba52e262013955f4d4ee3241fd7\"" Mar 14 00:42:00.284896 containerd[1476]: time="2026-03-14T00:42:00.282435790Z" level=info msg="StartContainer for \"aa6530900ad9e0522c6c4b2c09e8991bed28cba52e262013955f4d4ee3241fd7\"" Mar 14 00:42:00.425908 systemd[1]: Started cri-containerd-aa6530900ad9e0522c6c4b2c09e8991bed28cba52e262013955f4d4ee3241fd7.scope - libcontainer container 
aa6530900ad9e0522c6c4b2c09e8991bed28cba52e262013955f4d4ee3241fd7. Mar 14 00:42:00.709510 containerd[1476]: time="2026-03-14T00:42:00.708222416Z" level=info msg="StartContainer for \"aa6530900ad9e0522c6c4b2c09e8991bed28cba52e262013955f4d4ee3241fd7\" returns successfully" Mar 14 00:42:00.989043 kubelet[2564]: E0314 00:42:00.986154 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6px7x" podUID="fec3ef51-27dd-462a-9b02-64ae702e6505" Mar 14 00:42:01.114964 kubelet[2564]: E0314 00:42:01.114013 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:42:01.163637 kubelet[2564]: E0314 00:42:01.163486 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.163637 kubelet[2564]: W0314 00:42:01.163574 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.163637 kubelet[2564]: E0314 00:42:01.163603 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:01.164518 kubelet[2564]: E0314 00:42:01.164405 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.164518 kubelet[2564]: W0314 00:42:01.164494 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.166644 kubelet[2564]: E0314 00:42:01.164579 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:01.166644 kubelet[2564]: E0314 00:42:01.165211 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.166644 kubelet[2564]: W0314 00:42:01.165229 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.166644 kubelet[2564]: E0314 00:42:01.165250 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:01.166644 kubelet[2564]: E0314 00:42:01.165624 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.166644 kubelet[2564]: W0314 00:42:01.165638 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.166644 kubelet[2564]: E0314 00:42:01.165655 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:01.167034 kubelet[2564]: E0314 00:42:01.166932 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.167034 kubelet[2564]: W0314 00:42:01.166947 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.167034 kubelet[2564]: E0314 00:42:01.166967 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:01.171651 kubelet[2564]: E0314 00:42:01.171542 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.171651 kubelet[2564]: W0314 00:42:01.171626 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.171975 kubelet[2564]: E0314 00:42:01.171661 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:01.175294 kubelet[2564]: E0314 00:42:01.175057 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.175294 kubelet[2564]: W0314 00:42:01.175193 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.175294 kubelet[2564]: E0314 00:42:01.175225 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:01.179067 kubelet[2564]: E0314 00:42:01.178956 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.179067 kubelet[2564]: W0314 00:42:01.179032 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.179067 kubelet[2564]: E0314 00:42:01.179061 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:01.180004 kubelet[2564]: E0314 00:42:01.179926 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.180004 kubelet[2564]: W0314 00:42:01.179987 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.180004 kubelet[2564]: E0314 00:42:01.180008 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:01.189576 kubelet[2564]: E0314 00:42:01.181600 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.189576 kubelet[2564]: W0314 00:42:01.181618 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.189576 kubelet[2564]: E0314 00:42:01.181640 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:01.189576 kubelet[2564]: E0314 00:42:01.182374 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.189576 kubelet[2564]: W0314 00:42:01.182392 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.189576 kubelet[2564]: E0314 00:42:01.182411 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:01.189576 kubelet[2564]: E0314 00:42:01.182952 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.189576 kubelet[2564]: W0314 00:42:01.182967 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.189576 kubelet[2564]: E0314 00:42:01.182985 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:01.200196 kubelet[2564]: E0314 00:42:01.194037 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.200196 kubelet[2564]: W0314 00:42:01.194076 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.200196 kubelet[2564]: E0314 00:42:01.194176 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:01.207966 kubelet[2564]: E0314 00:42:01.203190 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.207966 kubelet[2564]: W0314 00:42:01.203232 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.207966 kubelet[2564]: E0314 00:42:01.203303 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:01.207966 kubelet[2564]: E0314 00:42:01.205582 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.207966 kubelet[2564]: W0314 00:42:01.205609 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.207966 kubelet[2564]: E0314 00:42:01.205640 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:01.226634 kubelet[2564]: E0314 00:42:01.226506 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.226634 kubelet[2564]: W0314 00:42:01.226581 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.226634 kubelet[2564]: E0314 00:42:01.226615 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:01.236966 kubelet[2564]: E0314 00:42:01.231221 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.236966 kubelet[2564]: W0314 00:42:01.231251 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.236966 kubelet[2564]: E0314 00:42:01.231278 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:01.236966 kubelet[2564]: E0314 00:42:01.236138 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.236966 kubelet[2564]: W0314 00:42:01.236165 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.236966 kubelet[2564]: E0314 00:42:01.236194 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:01.240711 kubelet[2564]: E0314 00:42:01.240380 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.240711 kubelet[2564]: W0314 00:42:01.240406 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.240711 kubelet[2564]: E0314 00:42:01.240431 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:01.241431 kubelet[2564]: E0314 00:42:01.241263 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.241431 kubelet[2564]: W0314 00:42:01.241349 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.241431 kubelet[2564]: E0314 00:42:01.241386 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:01.242003 kubelet[2564]: E0314 00:42:01.241963 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.242003 kubelet[2564]: W0314 00:42:01.241984 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.242003 kubelet[2564]: E0314 00:42:01.242004 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:01.242946 kubelet[2564]: E0314 00:42:01.242669 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.242946 kubelet[2564]: W0314 00:42:01.242732 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.242946 kubelet[2564]: E0314 00:42:01.242752 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:01.243755 kubelet[2564]: E0314 00:42:01.243513 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.243755 kubelet[2564]: W0314 00:42:01.243534 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.243755 kubelet[2564]: E0314 00:42:01.243551 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:01.248285 kubelet[2564]: E0314 00:42:01.246536 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.251942 kubelet[2564]: W0314 00:42:01.251735 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.252075 kubelet[2564]: E0314 00:42:01.251964 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:01.252075 kubelet[2564]: E0314 00:42:01.253015 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.253695 kubelet[2564]: W0314 00:42:01.253563 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.253695 kubelet[2564]: E0314 00:42:01.253594 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:01.254438 kubelet[2564]: E0314 00:42:01.254300 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.254438 kubelet[2564]: W0314 00:42:01.254368 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.254438 kubelet[2564]: E0314 00:42:01.254390 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:01.255924 kubelet[2564]: E0314 00:42:01.255756 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.256241 kubelet[2564]: W0314 00:42:01.255933 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.256241 kubelet[2564]: E0314 00:42:01.256150 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:01.266057 kubelet[2564]: E0314 00:42:01.263009 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.266057 kubelet[2564]: W0314 00:42:01.263132 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.266057 kubelet[2564]: E0314 00:42:01.263170 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:01.271344 kubelet[2564]: E0314 00:42:01.269036 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.271344 kubelet[2564]: W0314 00:42:01.269070 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.271344 kubelet[2564]: E0314 00:42:01.269170 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:01.276459 kubelet[2564]: E0314 00:42:01.276374 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.276459 kubelet[2564]: W0314 00:42:01.276457 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.276619 kubelet[2564]: E0314 00:42:01.276486 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:01.288339 kubelet[2564]: E0314 00:42:01.284896 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.288339 kubelet[2564]: W0314 00:42:01.284967 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.288339 kubelet[2564]: E0314 00:42:01.285003 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:01.288339 kubelet[2564]: E0314 00:42:01.287745 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.288339 kubelet[2564]: W0314 00:42:01.287766 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.288339 kubelet[2564]: E0314 00:42:01.287791 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:01.302600 kubelet[2564]: E0314 00:42:01.302464 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:01.302600 kubelet[2564]: W0314 00:42:01.302602 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:01.302802 kubelet[2564]: E0314 00:42:01.302639 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:01.951160 containerd[1476]: time="2026-03-14T00:42:01.949267642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:01.955897 containerd[1476]: time="2026-03-14T00:42:01.953318812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 14 00:42:01.958891 containerd[1476]: time="2026-03-14T00:42:01.958601693Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:01.964974 containerd[1476]: time="2026-03-14T00:42:01.964725872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:01.967005 containerd[1476]: time="2026-03-14T00:42:01.966666081Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.785910102s" Mar 14 00:42:01.967005 containerd[1476]: time="2026-03-14T00:42:01.966754714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 14 00:42:01.981544 containerd[1476]: time="2026-03-14T00:42:01.980023074Z" level=info msg="CreateContainer within sandbox \"290f56b6bafbedc79931482e3a7b49bfaac6423dce3bc7f492885f6e5af622ea\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 14 00:42:02.023145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1742783246.mount: Deactivated successfully. Mar 14 00:42:02.047217 containerd[1476]: time="2026-03-14T00:42:02.047154734Z" level=info msg="CreateContainer within sandbox \"290f56b6bafbedc79931482e3a7b49bfaac6423dce3bc7f492885f6e5af622ea\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"64cfcd35bd2300c68d2cf8c9fe1af16ce5715fd3223b07beb9a837dce68ebad0\"" Mar 14 00:42:02.056511 containerd[1476]: time="2026-03-14T00:42:02.051541246Z" level=info msg="StartContainer for \"64cfcd35bd2300c68d2cf8c9fe1af16ce5715fd3223b07beb9a837dce68ebad0\"" Mar 14 00:42:02.138595 kubelet[2564]: E0314 00:42:02.138558 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:42:02.149664 systemd[1]: Started cri-containerd-64cfcd35bd2300c68d2cf8c9fe1af16ce5715fd3223b07beb9a837dce68ebad0.scope - libcontainer container 64cfcd35bd2300c68d2cf8c9fe1af16ce5715fd3223b07beb9a837dce68ebad0. Mar 14 00:42:02.196311 systemd[1]: run-containerd-runc-k8s.io-64cfcd35bd2300c68d2cf8c9fe1af16ce5715fd3223b07beb9a837dce68ebad0-runc.1ngYyO.mount: Deactivated successfully. 
Mar 14 00:42:02.211602 kubelet[2564]: I0314 00:42:02.211015 2564 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-55d48cfd56-qclsc" podStartSLOduration=3.057461845 podStartE2EDuration="7.210995503s" podCreationTimestamp="2026-03-14 00:41:55 +0000 UTC" firstStartedPulling="2026-03-14 00:41:56.02670611 +0000 UTC m=+39.435248098" lastFinishedPulling="2026-03-14 00:42:00.180239769 +0000 UTC m=+43.588781756" observedRunningTime="2026-03-14 00:42:01.291458251 +0000 UTC m=+44.700000239" watchObservedRunningTime="2026-03-14 00:42:02.210995503 +0000 UTC m=+45.619537501" Mar 14 00:42:02.219515 kubelet[2564]: E0314 00:42:02.219223 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.219515 kubelet[2564]: W0314 00:42:02.219256 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.219515 kubelet[2564]: E0314 00:42:02.219285 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:02.220509 kubelet[2564]: E0314 00:42:02.220140 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.220509 kubelet[2564]: W0314 00:42:02.220161 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.220509 kubelet[2564]: E0314 00:42:02.220182 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:02.221396 kubelet[2564]: E0314 00:42:02.221376 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.221582 kubelet[2564]: W0314 00:42:02.221478 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.221582 kubelet[2564]: E0314 00:42:02.221504 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:02.222957 kubelet[2564]: E0314 00:42:02.222673 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.222957 kubelet[2564]: W0314 00:42:02.222692 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.222957 kubelet[2564]: E0314 00:42:02.222710 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:02.224134 kubelet[2564]: E0314 00:42:02.223940 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.224134 kubelet[2564]: W0314 00:42:02.223957 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.224134 kubelet[2564]: E0314 00:42:02.223973 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:02.225427 kubelet[2564]: E0314 00:42:02.224691 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.225427 kubelet[2564]: W0314 00:42:02.224897 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.225427 kubelet[2564]: E0314 00:42:02.224913 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:02.225427 kubelet[2564]: E0314 00:42:02.225347 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.225427 kubelet[2564]: W0314 00:42:02.225359 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.225427 kubelet[2564]: E0314 00:42:02.225371 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:02.226164 kubelet[2564]: E0314 00:42:02.226014 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.226164 kubelet[2564]: W0314 00:42:02.226094 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.226164 kubelet[2564]: E0314 00:42:02.226106 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:02.227451 kubelet[2564]: E0314 00:42:02.227301 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.227451 kubelet[2564]: W0314 00:42:02.227350 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.227451 kubelet[2564]: E0314 00:42:02.227363 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:02.228222 kubelet[2564]: E0314 00:42:02.228145 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.229285 kubelet[2564]: W0314 00:42:02.228310 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.229285 kubelet[2564]: E0314 00:42:02.228470 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:02.229473 kubelet[2564]: E0314 00:42:02.229417 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.229473 kubelet[2564]: W0314 00:42:02.229427 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.229473 kubelet[2564]: E0314 00:42:02.229437 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:02.230573 kubelet[2564]: E0314 00:42:02.230192 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.230573 kubelet[2564]: W0314 00:42:02.230250 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.230573 kubelet[2564]: E0314 00:42:02.230264 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:02.231197 kubelet[2564]: E0314 00:42:02.231015 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.231197 kubelet[2564]: W0314 00:42:02.231032 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.231197 kubelet[2564]: E0314 00:42:02.231046 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:02.233418 kubelet[2564]: E0314 00:42:02.232468 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.233418 kubelet[2564]: W0314 00:42:02.232529 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.233418 kubelet[2564]: E0314 00:42:02.232546 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:02.233418 kubelet[2564]: E0314 00:42:02.232938 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.233418 kubelet[2564]: W0314 00:42:02.232952 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.233418 kubelet[2564]: E0314 00:42:02.232966 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:02.268251 kubelet[2564]: E0314 00:42:02.268039 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.268251 kubelet[2564]: W0314 00:42:02.268169 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.268251 kubelet[2564]: E0314 00:42:02.268202 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:02.269694 kubelet[2564]: E0314 00:42:02.269606 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.269694 kubelet[2564]: W0314 00:42:02.269630 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.269694 kubelet[2564]: E0314 00:42:02.269648 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:02.270952 kubelet[2564]: E0314 00:42:02.270746 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.270952 kubelet[2564]: W0314 00:42:02.270800 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.270952 kubelet[2564]: E0314 00:42:02.270902 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:02.271717 kubelet[2564]: E0314 00:42:02.271662 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.272531 kubelet[2564]: W0314 00:42:02.271756 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.272531 kubelet[2564]: E0314 00:42:02.271900 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:02.273177 kubelet[2564]: E0314 00:42:02.272947 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.273177 kubelet[2564]: W0314 00:42:02.273008 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.273177 kubelet[2564]: E0314 00:42:02.273026 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:02.284401 kubelet[2564]: E0314 00:42:02.278915 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.284401 kubelet[2564]: W0314 00:42:02.278969 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.284401 kubelet[2564]: E0314 00:42:02.278993 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:02.284401 kubelet[2564]: E0314 00:42:02.279572 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.284401 kubelet[2564]: W0314 00:42:02.279586 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.284401 kubelet[2564]: E0314 00:42:02.279601 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:02.284401 kubelet[2564]: E0314 00:42:02.280452 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.284401 kubelet[2564]: W0314 00:42:02.280471 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.284401 kubelet[2564]: E0314 00:42:02.280491 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:02.284401 kubelet[2564]: E0314 00:42:02.283964 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.287449 kubelet[2564]: W0314 00:42:02.283980 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.287449 kubelet[2564]: E0314 00:42:02.283999 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:02.290941 kubelet[2564]: E0314 00:42:02.290781 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.290941 kubelet[2564]: W0314 00:42:02.290932 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.301465 kubelet[2564]: E0314 00:42:02.290964 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:02.301465 kubelet[2564]: E0314 00:42:02.293497 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.301465 kubelet[2564]: W0314 00:42:02.293518 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.301465 kubelet[2564]: E0314 00:42:02.293540 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:02.301465 kubelet[2564]: E0314 00:42:02.295301 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.301465 kubelet[2564]: W0314 00:42:02.295328 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.301465 kubelet[2564]: E0314 00:42:02.295356 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:02.301465 kubelet[2564]: E0314 00:42:02.296148 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.301465 kubelet[2564]: W0314 00:42:02.296166 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.301465 kubelet[2564]: E0314 00:42:02.296182 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:02.301949 kubelet[2564]: E0314 00:42:02.296720 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.301949 kubelet[2564]: W0314 00:42:02.296730 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.301949 kubelet[2564]: E0314 00:42:02.296740 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:02.301949 kubelet[2564]: E0314 00:42:02.297777 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.301949 kubelet[2564]: W0314 00:42:02.297793 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.301949 kubelet[2564]: E0314 00:42:02.297989 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:02.301949 kubelet[2564]: E0314 00:42:02.298960 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.301949 kubelet[2564]: W0314 00:42:02.298973 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.301949 kubelet[2564]: E0314 00:42:02.298988 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:02.301949 kubelet[2564]: E0314 00:42:02.299638 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.302274 kubelet[2564]: W0314 00:42:02.299659 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.302274 kubelet[2564]: E0314 00:42:02.299685 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:42:02.302274 kubelet[2564]: E0314 00:42:02.301134 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:42:02.302274 kubelet[2564]: W0314 00:42:02.301148 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:42:02.302274 kubelet[2564]: E0314 00:42:02.301161 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:42:02.309526 systemd[1]: cri-containerd-64cfcd35bd2300c68d2cf8c9fe1af16ce5715fd3223b07beb9a837dce68ebad0.scope: Deactivated successfully. Mar 14 00:42:02.333795 containerd[1476]: time="2026-03-14T00:42:02.333623128Z" level=info msg="StartContainer for \"64cfcd35bd2300c68d2cf8c9fe1af16ce5715fd3223b07beb9a837dce68ebad0\" returns successfully" Mar 14 00:42:02.410256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64cfcd35bd2300c68d2cf8c9fe1af16ce5715fd3223b07beb9a837dce68ebad0-rootfs.mount: Deactivated successfully. 
Mar 14 00:42:02.512120 containerd[1476]: time="2026-03-14T00:42:02.511661298Z" level=info msg="shim disconnected" id=64cfcd35bd2300c68d2cf8c9fe1af16ce5715fd3223b07beb9a837dce68ebad0 namespace=k8s.io Mar 14 00:42:02.512120 containerd[1476]: time="2026-03-14T00:42:02.511999301Z" level=warning msg="cleaning up after shim disconnected" id=64cfcd35bd2300c68d2cf8c9fe1af16ce5715fd3223b07beb9a837dce68ebad0 namespace=k8s.io Mar 14 00:42:02.512120 containerd[1476]: time="2026-03-14T00:42:02.512022965Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:42:02.984445 kubelet[2564]: E0314 00:42:02.984218 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6px7x" podUID="fec3ef51-27dd-462a-9b02-64ae702e6505" Mar 14 00:42:03.150193 kubelet[2564]: E0314 00:42:03.149990 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:42:03.154364 containerd[1476]: time="2026-03-14T00:42:03.153542078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 14 00:42:04.985568 kubelet[2564]: E0314 00:42:04.984277 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6px7x" podUID="fec3ef51-27dd-462a-9b02-64ae702e6505" Mar 14 00:42:06.985026 kubelet[2564]: E0314 00:42:06.984405 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-6px7x" podUID="fec3ef51-27dd-462a-9b02-64ae702e6505" Mar 14 00:42:09.019594 kubelet[2564]: E0314 00:42:09.011248 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6px7x" podUID="fec3ef51-27dd-462a-9b02-64ae702e6505" Mar 14 00:42:10.990582 kubelet[2564]: E0314 00:42:10.986111 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6px7x" podUID="fec3ef51-27dd-462a-9b02-64ae702e6505" Mar 14 00:42:11.634174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1551219368.mount: Deactivated successfully. Mar 14 00:42:11.795564 containerd[1476]: time="2026-03-14T00:42:11.795250893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:11.796292 containerd[1476]: time="2026-03-14T00:42:11.796175266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 14 00:42:11.798794 containerd[1476]: time="2026-03-14T00:42:11.798631327Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:11.802313 containerd[1476]: time="2026-03-14T00:42:11.802199345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:11.804168 containerd[1476]: 
time="2026-03-14T00:42:11.803984032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 8.650386372s" Mar 14 00:42:11.804168 containerd[1476]: time="2026-03-14T00:42:11.804028062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 14 00:42:11.833340 containerd[1476]: time="2026-03-14T00:42:11.833207366Z" level=info msg="CreateContainer within sandbox \"290f56b6bafbedc79931482e3a7b49bfaac6423dce3bc7f492885f6e5af622ea\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 14 00:42:11.929754 containerd[1476]: time="2026-03-14T00:42:11.927596212Z" level=info msg="CreateContainer within sandbox \"290f56b6bafbedc79931482e3a7b49bfaac6423dce3bc7f492885f6e5af622ea\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"c12286f42afa667eb6a34cf25bc25afaaa48eac1b3949b39779a548bcfc42c68\"" Mar 14 00:42:11.930005 containerd[1476]: time="2026-03-14T00:42:11.929241862Z" level=info msg="StartContainer for \"c12286f42afa667eb6a34cf25bc25afaaa48eac1b3949b39779a548bcfc42c68\"" Mar 14 00:42:12.040104 systemd[1]: Started cri-containerd-c12286f42afa667eb6a34cf25bc25afaaa48eac1b3949b39779a548bcfc42c68.scope - libcontainer container c12286f42afa667eb6a34cf25bc25afaaa48eac1b3949b39779a548bcfc42c68. 
Mar 14 00:42:12.136032 containerd[1476]: time="2026-03-14T00:42:12.135585411Z" level=info msg="StartContainer for \"c12286f42afa667eb6a34cf25bc25afaaa48eac1b3949b39779a548bcfc42c68\" returns successfully" Mar 14 00:42:12.244939 systemd[1]: cri-containerd-c12286f42afa667eb6a34cf25bc25afaaa48eac1b3949b39779a548bcfc42c68.scope: Deactivated successfully. Mar 14 00:42:12.432216 containerd[1476]: time="2026-03-14T00:42:12.432035626Z" level=info msg="shim disconnected" id=c12286f42afa667eb6a34cf25bc25afaaa48eac1b3949b39779a548bcfc42c68 namespace=k8s.io Mar 14 00:42:12.432216 containerd[1476]: time="2026-03-14T00:42:12.432168571Z" level=warning msg="cleaning up after shim disconnected" id=c12286f42afa667eb6a34cf25bc25afaaa48eac1b3949b39779a548bcfc42c68 namespace=k8s.io Mar 14 00:42:12.432216 containerd[1476]: time="2026-03-14T00:42:12.432184601Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:42:12.636629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c12286f42afa667eb6a34cf25bc25afaaa48eac1b3949b39779a548bcfc42c68-rootfs.mount: Deactivated successfully. 
Mar 14 00:42:12.987108 kubelet[2564]: E0314 00:42:12.986333 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6px7x" podUID="fec3ef51-27dd-462a-9b02-64ae702e6505" Mar 14 00:42:13.343611 containerd[1476]: time="2026-03-14T00:42:13.343132938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 14 00:42:14.985246 kubelet[2564]: E0314 00:42:14.985192 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6px7x" podUID="fec3ef51-27dd-462a-9b02-64ae702e6505" Mar 14 00:42:16.986400 kubelet[2564]: E0314 00:42:16.985402 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6px7x" podUID="fec3ef51-27dd-462a-9b02-64ae702e6505" Mar 14 00:42:17.211316 containerd[1476]: time="2026-03-14T00:42:17.211190392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:17.215669 containerd[1476]: time="2026-03-14T00:42:17.215602974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 14 00:42:17.231701 containerd[1476]: time="2026-03-14T00:42:17.231543358Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:17.262062 containerd[1476]: 
time="2026-03-14T00:42:17.261732900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:17.263758 containerd[1476]: time="2026-03-14T00:42:17.263476215Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.92029696s" Mar 14 00:42:17.263758 containerd[1476]: time="2026-03-14T00:42:17.263561693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 14 00:42:17.295739 containerd[1476]: time="2026-03-14T00:42:17.295539163Z" level=info msg="CreateContainer within sandbox \"290f56b6bafbedc79931482e3a7b49bfaac6423dce3bc7f492885f6e5af622ea\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 14 00:42:17.332265 containerd[1476]: time="2026-03-14T00:42:17.332130195Z" level=info msg="CreateContainer within sandbox \"290f56b6bafbedc79931482e3a7b49bfaac6423dce3bc7f492885f6e5af622ea\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"498e7ae3f1dacd32ae029f889c57052e685f3e6cc798bb9ce55f1fe15156c6e3\"" Mar 14 00:42:17.333800 containerd[1476]: time="2026-03-14T00:42:17.333106771Z" level=info msg="StartContainer for \"498e7ae3f1dacd32ae029f889c57052e685f3e6cc798bb9ce55f1fe15156c6e3\"" Mar 14 00:42:17.454699 systemd[1]: Started cri-containerd-498e7ae3f1dacd32ae029f889c57052e685f3e6cc798bb9ce55f1fe15156c6e3.scope - libcontainer container 498e7ae3f1dacd32ae029f889c57052e685f3e6cc798bb9ce55f1fe15156c6e3. 
Mar 14 00:42:17.525267 containerd[1476]: time="2026-03-14T00:42:17.524942829Z" level=info msg="StartContainer for \"498e7ae3f1dacd32ae029f889c57052e685f3e6cc798bb9ce55f1fe15156c6e3\" returns successfully" Mar 14 00:42:18.664383 systemd[1]: cri-containerd-498e7ae3f1dacd32ae029f889c57052e685f3e6cc798bb9ce55f1fe15156c6e3.scope: Deactivated successfully. Mar 14 00:42:18.665930 systemd[1]: cri-containerd-498e7ae3f1dacd32ae029f889c57052e685f3e6cc798bb9ce55f1fe15156c6e3.scope: Consumed 1.343s CPU time. Mar 14 00:42:18.719398 kubelet[2564]: I0314 00:42:18.719149 2564 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 14 00:42:18.725505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-498e7ae3f1dacd32ae029f889c57052e685f3e6cc798bb9ce55f1fe15156c6e3-rootfs.mount: Deactivated successfully. Mar 14 00:42:18.802984 containerd[1476]: time="2026-03-14T00:42:18.802129707Z" level=info msg="shim disconnected" id=498e7ae3f1dacd32ae029f889c57052e685f3e6cc798bb9ce55f1fe15156c6e3 namespace=k8s.io Mar 14 00:42:18.802984 containerd[1476]: time="2026-03-14T00:42:18.802194607Z" level=warning msg="cleaning up after shim disconnected" id=498e7ae3f1dacd32ae029f889c57052e685f3e6cc798bb9ce55f1fe15156c6e3 namespace=k8s.io Mar 14 00:42:18.802984 containerd[1476]: time="2026-03-14T00:42:18.802203994Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:42:18.839931 kubelet[2564]: I0314 00:42:18.839325 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bc2b7217-6287-4581-a381-8909f4a1c133-calico-apiserver-certs\") pod \"calico-apiserver-6cc9b6b4b7-hm5mx\" (UID: \"bc2b7217-6287-4581-a381-8909f4a1c133\") " pod="calico-system/calico-apiserver-6cc9b6b4b7-hm5mx" Mar 14 00:42:18.839931 kubelet[2564]: I0314 00:42:18.839377 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hvgcv\" (UniqueName: \"kubernetes.io/projected/bc2b7217-6287-4581-a381-8909f4a1c133-kube-api-access-hvgcv\") pod \"calico-apiserver-6cc9b6b4b7-hm5mx\" (UID: \"bc2b7217-6287-4581-a381-8909f4a1c133\") " pod="calico-system/calico-apiserver-6cc9b6b4b7-hm5mx" Mar 14 00:42:18.849710 systemd[1]: Created slice kubepods-besteffort-podbc2b7217_6287_4581_a381_8909f4a1c133.slice - libcontainer container kubepods-besteffort-podbc2b7217_6287_4581_a381_8909f4a1c133.slice. Mar 14 00:42:18.872592 systemd[1]: Created slice kubepods-besteffort-podc9976b22_5c21_48c9_8ce0_e8ba67196cf5.slice - libcontainer container kubepods-besteffort-podc9976b22_5c21_48c9_8ce0_e8ba67196cf5.slice. Mar 14 00:42:18.885129 systemd[1]: Created slice kubepods-besteffort-podf9092168_e790_4005_b3c3_2b628a935681.slice - libcontainer container kubepods-besteffort-podf9092168_e790_4005_b3c3_2b628a935681.slice. Mar 14 00:42:18.901015 systemd[1]: Created slice kubepods-besteffort-pod3d73419e_01c4_49cb_a46c_827ae2b5174f.slice - libcontainer container kubepods-besteffort-pod3d73419e_01c4_49cb_a46c_827ae2b5174f.slice. Mar 14 00:42:18.917763 systemd[1]: Created slice kubepods-besteffort-pod75d5f8e8_c2fc_4c89_839e_202eefdf2a66.slice - libcontainer container kubepods-besteffort-pod75d5f8e8_c2fc_4c89_839e_202eefdf2a66.slice. Mar 14 00:42:18.933441 systemd[1]: Created slice kubepods-burstable-pod5c89d384_afe9_4799_8684_82936ec1efea.slice - libcontainer container kubepods-burstable-pod5c89d384_afe9_4799_8684_82936ec1efea.slice. 
Mar 14 00:42:18.940649 kubelet[2564]: I0314 00:42:18.939623 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d73419e-01c4-49cb-a46c-827ae2b5174f-tigera-ca-bundle\") pod \"calico-kube-controllers-c6bf9555f-grh9k\" (UID: \"3d73419e-01c4-49cb-a46c-827ae2b5174f\") " pod="calico-system/calico-kube-controllers-c6bf9555f-grh9k" Mar 14 00:42:18.940649 kubelet[2564]: I0314 00:42:18.939670 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vccz\" (UniqueName: \"kubernetes.io/projected/3d73419e-01c4-49cb-a46c-827ae2b5174f-kube-api-access-8vccz\") pod \"calico-kube-controllers-c6bf9555f-grh9k\" (UID: \"3d73419e-01c4-49cb-a46c-827ae2b5174f\") " pod="calico-system/calico-kube-controllers-c6bf9555f-grh9k" Mar 14 00:42:18.940649 kubelet[2564]: I0314 00:42:18.939692 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c89d384-afe9-4799-8684-82936ec1efea-config-volume\") pod \"coredns-7d764666f9-7zsnh\" (UID: \"5c89d384-afe9-4799-8684-82936ec1efea\") " pod="kube-system/coredns-7d764666f9-7zsnh" Mar 14 00:42:18.940649 kubelet[2564]: I0314 00:42:18.939712 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nkzs\" (UniqueName: \"kubernetes.io/projected/5c89d384-afe9-4799-8684-82936ec1efea-kube-api-access-7nkzs\") pod \"coredns-7d764666f9-7zsnh\" (UID: \"5c89d384-afe9-4799-8684-82936ec1efea\") " pod="kube-system/coredns-7d764666f9-7zsnh" Mar 14 00:42:18.940649 kubelet[2564]: I0314 00:42:18.939761 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b85a316-b0b6-4829-b096-e789a4c3c1b0-config-volume\") pod \"coredns-7d764666f9-d76st\" (UID: 
\"2b85a316-b0b6-4829-b096-e789a4c3c1b0\") " pod="kube-system/coredns-7d764666f9-d76st" Mar 14 00:42:18.941177 kubelet[2564]: I0314 00:42:18.939887 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtf7d\" (UniqueName: \"kubernetes.io/projected/2b85a316-b0b6-4829-b096-e789a4c3c1b0-kube-api-access-jtf7d\") pod \"coredns-7d764666f9-d76st\" (UID: \"2b85a316-b0b6-4829-b096-e789a4c3c1b0\") " pod="kube-system/coredns-7d764666f9-d76st" Mar 14 00:42:18.941177 kubelet[2564]: I0314 00:42:18.939917 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f9092168-e790-4005-b3c3-2b628a935681-calico-apiserver-certs\") pod \"calico-apiserver-6cc9b6b4b7-5hht7\" (UID: \"f9092168-e790-4005-b3c3-2b628a935681\") " pod="calico-system/calico-apiserver-6cc9b6b4b7-5hht7" Mar 14 00:42:18.941177 kubelet[2564]: I0314 00:42:18.939936 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9976b22-5c21-48c9-8ce0-e8ba67196cf5-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-5srn6\" (UID: \"c9976b22-5c21-48c9-8ce0-e8ba67196cf5\") " pod="calico-system/goldmane-9f7667bb8-5srn6" Mar 14 00:42:18.941328 kubelet[2564]: I0314 00:42:18.939955 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-nginx-config\") pod \"whisker-59546697b8-v628l\" (UID: \"75d5f8e8-c2fc-4c89-839e-202eefdf2a66\") " pod="calico-system/whisker-59546697b8-v628l" Mar 14 00:42:18.941328 kubelet[2564]: I0314 00:42:18.941350 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9zr6\" (UniqueName: 
\"kubernetes.io/projected/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-kube-api-access-m9zr6\") pod \"whisker-59546697b8-v628l\" (UID: \"75d5f8e8-c2fc-4c89-839e-202eefdf2a66\") " pod="calico-system/whisker-59546697b8-v628l" Mar 14 00:42:18.941557 kubelet[2564]: I0314 00:42:18.941484 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9976b22-5c21-48c9-8ce0-e8ba67196cf5-config\") pod \"goldmane-9f7667bb8-5srn6\" (UID: \"c9976b22-5c21-48c9-8ce0-e8ba67196cf5\") " pod="calico-system/goldmane-9f7667bb8-5srn6" Mar 14 00:42:18.941625 kubelet[2564]: I0314 00:42:18.941563 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnp2v\" (UniqueName: \"kubernetes.io/projected/c9976b22-5c21-48c9-8ce0-e8ba67196cf5-kube-api-access-jnp2v\") pod \"goldmane-9f7667bb8-5srn6\" (UID: \"c9976b22-5c21-48c9-8ce0-e8ba67196cf5\") " pod="calico-system/goldmane-9f7667bb8-5srn6" Mar 14 00:42:18.941625 kubelet[2564]: I0314 00:42:18.941592 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-whisker-backend-key-pair\") pod \"whisker-59546697b8-v628l\" (UID: \"75d5f8e8-c2fc-4c89-839e-202eefdf2a66\") " pod="calico-system/whisker-59546697b8-v628l" Mar 14 00:42:18.941625 kubelet[2564]: I0314 00:42:18.941620 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-whisker-ca-bundle\") pod \"whisker-59546697b8-v628l\" (UID: \"75d5f8e8-c2fc-4c89-839e-202eefdf2a66\") " pod="calico-system/whisker-59546697b8-v628l" Mar 14 00:42:18.941767 kubelet[2564]: I0314 00:42:18.941645 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-lt8z7\" (UniqueName: \"kubernetes.io/projected/f9092168-e790-4005-b3c3-2b628a935681-kube-api-access-lt8z7\") pod \"calico-apiserver-6cc9b6b4b7-5hht7\" (UID: \"f9092168-e790-4005-b3c3-2b628a935681\") " pod="calico-system/calico-apiserver-6cc9b6b4b7-5hht7" Mar 14 00:42:18.941767 kubelet[2564]: I0314 00:42:18.941665 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c9976b22-5c21-48c9-8ce0-e8ba67196cf5-goldmane-key-pair\") pod \"goldmane-9f7667bb8-5srn6\" (UID: \"c9976b22-5c21-48c9-8ce0-e8ba67196cf5\") " pod="calico-system/goldmane-9f7667bb8-5srn6" Mar 14 00:42:18.947949 systemd[1]: Created slice kubepods-burstable-pod2b85a316_b0b6_4829_b096_e789a4c3c1b0.slice - libcontainer container kubepods-burstable-pod2b85a316_b0b6_4829_b096_e789a4c3c1b0.slice. Mar 14 00:42:18.998061 systemd[1]: Created slice kubepods-besteffort-podfec3ef51_27dd_462a_9b02_64ae702e6505.slice - libcontainer container kubepods-besteffort-podfec3ef51_27dd_462a_9b02_64ae702e6505.slice. 
Mar 14 00:42:19.013399 containerd[1476]: time="2026-03-14T00:42:19.013266990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6px7x,Uid:fec3ef51-27dd-462a-9b02-64ae702e6505,Namespace:calico-system,Attempt:0,}" Mar 14 00:42:19.170188 containerd[1476]: time="2026-03-14T00:42:19.167407066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc9b6b4b7-hm5mx,Uid:bc2b7217-6287-4581-a381-8909f4a1c133,Namespace:calico-system,Attempt:0,}" Mar 14 00:42:19.188166 containerd[1476]: time="2026-03-14T00:42:19.188044407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5srn6,Uid:c9976b22-5c21-48c9-8ce0-e8ba67196cf5,Namespace:calico-system,Attempt:0,}" Mar 14 00:42:19.198307 containerd[1476]: time="2026-03-14T00:42:19.198029343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc9b6b4b7-5hht7,Uid:f9092168-e790-4005-b3c3-2b628a935681,Namespace:calico-system,Attempt:0,}" Mar 14 00:42:19.219223 containerd[1476]: time="2026-03-14T00:42:19.218993456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6bf9555f-grh9k,Uid:3d73419e-01c4-49cb-a46c-827ae2b5174f,Namespace:calico-system,Attempt:0,}" Mar 14 00:42:19.236723 containerd[1476]: time="2026-03-14T00:42:19.236387302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59546697b8-v628l,Uid:75d5f8e8-c2fc-4c89-839e-202eefdf2a66,Namespace:calico-system,Attempt:0,}" Mar 14 00:42:19.249560 kubelet[2564]: E0314 00:42:19.249033 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:42:19.253963 containerd[1476]: time="2026-03-14T00:42:19.253580741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-7zsnh,Uid:5c89d384-afe9-4799-8684-82936ec1efea,Namespace:kube-system,Attempt:0,}" Mar 14 00:42:19.273975 kubelet[2564]: E0314 
00:42:19.273456 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:42:19.275717 containerd[1476]: time="2026-03-14T00:42:19.275675602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-d76st,Uid:2b85a316-b0b6-4829-b096-e789a4c3c1b0,Namespace:kube-system,Attempt:0,}" Mar 14 00:42:19.392395 containerd[1476]: time="2026-03-14T00:42:19.392183592Z" level=error msg="Failed to destroy network for sandbox \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:19.395691 containerd[1476]: time="2026-03-14T00:42:19.393743358Z" level=error msg="encountered an error cleaning up failed sandbox \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:19.395691 containerd[1476]: time="2026-03-14T00:42:19.394931577Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6px7x,Uid:fec3ef51-27dd-462a-9b02-64ae702e6505,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:19.432072 kubelet[2564]: E0314 00:42:19.431241 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:19.432072 kubelet[2564]: E0314 00:42:19.431325 2564 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6px7x" Mar 14 00:42:19.432072 kubelet[2564]: E0314 00:42:19.431355 2564 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6px7x" Mar 14 00:42:19.432359 kubelet[2564]: E0314 00:42:19.431475 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6px7x_calico-system(fec3ef51-27dd-462a-9b02-64ae702e6505)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6px7x_calico-system(fec3ef51-27dd-462a-9b02-64ae702e6505)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6px7x" 
podUID="fec3ef51-27dd-462a-9b02-64ae702e6505" Mar 14 00:42:19.450964 containerd[1476]: time="2026-03-14T00:42:19.450027436Z" level=info msg="CreateContainer within sandbox \"290f56b6bafbedc79931482e3a7b49bfaac6423dce3bc7f492885f6e5af622ea\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 14 00:42:19.663693 containerd[1476]: time="2026-03-14T00:42:19.663633585Z" level=info msg="CreateContainer within sandbox \"290f56b6bafbedc79931482e3a7b49bfaac6423dce3bc7f492885f6e5af622ea\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c7aa132eb281d6761b3b903e28b3379704de6f53efaafe39cf3ac627e60ad8b6\"" Mar 14 00:42:19.664710 containerd[1476]: time="2026-03-14T00:42:19.664672928Z" level=info msg="StartContainer for \"c7aa132eb281d6761b3b903e28b3379704de6f53efaafe39cf3ac627e60ad8b6\"" Mar 14 00:42:19.744044 containerd[1476]: time="2026-03-14T00:42:19.741096998Z" level=error msg="Failed to destroy network for sandbox \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:19.751947 containerd[1476]: time="2026-03-14T00:42:19.748413381Z" level=error msg="encountered an error cleaning up failed sandbox \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:19.753932 containerd[1476]: time="2026-03-14T00:42:19.753042879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc9b6b4b7-hm5mx,Uid:bc2b7217-6287-4581-a381-8909f4a1c133,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:19.753599 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f-shm.mount: Deactivated successfully. Mar 14 00:42:19.761652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627-shm.mount: Deactivated successfully. Mar 14 00:42:19.762456 kubelet[2564]: E0314 00:42:19.761934 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:19.762456 kubelet[2564]: E0314 00:42:19.762006 2564 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6cc9b6b4b7-hm5mx" Mar 14 00:42:19.762456 kubelet[2564]: E0314 00:42:19.762038 2564 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-apiserver-6cc9b6b4b7-hm5mx" Mar 14 00:42:19.763139 kubelet[2564]: E0314 00:42:19.762113 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cc9b6b4b7-hm5mx_calico-system(bc2b7217-6287-4581-a381-8909f4a1c133)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cc9b6b4b7-hm5mx_calico-system(bc2b7217-6287-4581-a381-8909f4a1c133)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6cc9b6b4b7-hm5mx" podUID="bc2b7217-6287-4581-a381-8909f4a1c133" Mar 14 00:42:19.836433 systemd[1]: Started cri-containerd-c7aa132eb281d6761b3b903e28b3379704de6f53efaafe39cf3ac627e60ad8b6.scope - libcontainer container c7aa132eb281d6761b3b903e28b3379704de6f53efaafe39cf3ac627e60ad8b6. 
Mar 14 00:42:19.978935 containerd[1476]: time="2026-03-14T00:42:19.971146268Z" level=error msg="Failed to destroy network for sandbox \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:19.978935 containerd[1476]: time="2026-03-14T00:42:19.973140759Z" level=error msg="encountered an error cleaning up failed sandbox \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:19.978935 containerd[1476]: time="2026-03-14T00:42:19.973216719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5srn6,Uid:c9976b22-5c21-48c9-8ce0-e8ba67196cf5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:19.979681 kubelet[2564]: E0314 00:42:19.973541 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:19.979681 kubelet[2564]: E0314 00:42:19.973616 2564 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-5srn6" Mar 14 00:42:19.979681 kubelet[2564]: E0314 00:42:19.973641 2564 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-5srn6" Mar 14 00:42:19.984727 kubelet[2564]: E0314 00:42:19.973705 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-5srn6_calico-system(c9976b22-5c21-48c9-8ce0-e8ba67196cf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-5srn6_calico-system(c9976b22-5c21-48c9-8ce0-e8ba67196cf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-5srn6" podUID="c9976b22-5c21-48c9-8ce0-e8ba67196cf5" Mar 14 00:42:19.981708 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad-shm.mount: Deactivated successfully. 
Mar 14 00:42:20.073045 containerd[1476]: time="2026-03-14T00:42:20.055042759Z" level=error msg="Failed to destroy network for sandbox \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.073045 containerd[1476]: time="2026-03-14T00:42:20.063372405Z" level=error msg="encountered an error cleaning up failed sandbox \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.073045 containerd[1476]: time="2026-03-14T00:42:20.063481888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-7zsnh,Uid:5c89d384-afe9-4799-8684-82936ec1efea,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.073283 kubelet[2564]: E0314 00:42:20.064318 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.073283 kubelet[2564]: E0314 00:42:20.064378 2564 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-7zsnh" Mar 14 00:42:20.073283 kubelet[2564]: E0314 00:42:20.064400 2564 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-7zsnh" Mar 14 00:42:20.073420 kubelet[2564]: E0314 00:42:20.064460 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-7zsnh_kube-system(5c89d384-afe9-4799-8684-82936ec1efea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-7zsnh_kube-system(5c89d384-afe9-4799-8684-82936ec1efea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-7zsnh" podUID="5c89d384-afe9-4799-8684-82936ec1efea" Mar 14 00:42:20.078647 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21-shm.mount: Deactivated successfully. 
Mar 14 00:42:20.084520 containerd[1476]: time="2026-03-14T00:42:20.083076952Z" level=error msg="Failed to destroy network for sandbox \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.084520 containerd[1476]: time="2026-03-14T00:42:20.084240325Z" level=error msg="encountered an error cleaning up failed sandbox \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.084520 containerd[1476]: time="2026-03-14T00:42:20.084309282Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-d76st,Uid:2b85a316-b0b6-4829-b096-e789a4c3c1b0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.086124 kubelet[2564]: E0314 00:42:20.085567 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.086124 kubelet[2564]: E0314 00:42:20.085695 2564 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-d76st" Mar 14 00:42:20.086124 kubelet[2564]: E0314 00:42:20.085723 2564 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-d76st" Mar 14 00:42:20.086329 kubelet[2564]: E0314 00:42:20.085926 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-d76st_kube-system(2b85a316-b0b6-4829-b096-e789a4c3c1b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-d76st_kube-system(2b85a316-b0b6-4829-b096-e789a4c3c1b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-d76st" podUID="2b85a316-b0b6-4829-b096-e789a4c3c1b0" Mar 14 00:42:20.087373 containerd[1476]: time="2026-03-14T00:42:20.087248423Z" level=info msg="StartContainer for \"c7aa132eb281d6761b3b903e28b3379704de6f53efaafe39cf3ac627e60ad8b6\" returns successfully" Mar 14 00:42:20.103286 containerd[1476]: time="2026-03-14T00:42:20.102238644Z" level=error msg="Failed to destroy network for sandbox 
\"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.108184 containerd[1476]: time="2026-03-14T00:42:20.104757265Z" level=error msg="encountered an error cleaning up failed sandbox \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.108184 containerd[1476]: time="2026-03-14T00:42:20.107950907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc9b6b4b7-5hht7,Uid:f9092168-e790-4005-b3c3-2b628a935681,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.109459 kubelet[2564]: E0314 00:42:20.109029 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.109459 kubelet[2564]: E0314 00:42:20.109077 2564 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6cc9b6b4b7-5hht7" Mar 14 00:42:20.109459 kubelet[2564]: E0314 00:42:20.109100 2564 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6cc9b6b4b7-5hht7" Mar 14 00:42:20.109679 kubelet[2564]: E0314 00:42:20.109144 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cc9b6b4b7-5hht7_calico-system(f9092168-e790-4005-b3c3-2b628a935681)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cc9b6b4b7-5hht7_calico-system(f9092168-e790-4005-b3c3-2b628a935681)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6cc9b6b4b7-5hht7" podUID="f9092168-e790-4005-b3c3-2b628a935681" Mar 14 00:42:20.120993 containerd[1476]: time="2026-03-14T00:42:20.117509227Z" level=error msg="Failed to destroy network for sandbox \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.124967 containerd[1476]: 
time="2026-03-14T00:42:20.124665192Z" level=error msg="encountered an error cleaning up failed sandbox \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.124967 containerd[1476]: time="2026-03-14T00:42:20.124928499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59546697b8-v628l,Uid:75d5f8e8-c2fc-4c89-839e-202eefdf2a66,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.130416 kubelet[2564]: E0314 00:42:20.125548 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.130416 kubelet[2564]: E0314 00:42:20.125605 2564 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59546697b8-v628l" Mar 14 00:42:20.130416 kubelet[2564]: E0314 00:42:20.125623 2564 kuberuntime_manager.go:1558] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59546697b8-v628l" Mar 14 00:42:20.130707 kubelet[2564]: E0314 00:42:20.125719 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59546697b8-v628l_calico-system(75d5f8e8-c2fc-4c89-839e-202eefdf2a66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59546697b8-v628l_calico-system(75d5f8e8-c2fc-4c89-839e-202eefdf2a66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59546697b8-v628l" podUID="75d5f8e8-c2fc-4c89-839e-202eefdf2a66" Mar 14 00:42:20.131348 containerd[1476]: time="2026-03-14T00:42:20.130419262Z" level=error msg="Failed to destroy network for sandbox \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.134278 containerd[1476]: time="2026-03-14T00:42:20.132179198Z" level=error msg="encountered an error cleaning up failed sandbox \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 14 00:42:20.134278 containerd[1476]: time="2026-03-14T00:42:20.132280547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6bf9555f-grh9k,Uid:3d73419e-01c4-49cb-a46c-827ae2b5174f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.134595 kubelet[2564]: E0314 00:42:20.133265 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.134595 kubelet[2564]: E0314 00:42:20.133321 2564 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c6bf9555f-grh9k" Mar 14 00:42:20.134595 kubelet[2564]: E0314 00:42:20.133347 2564 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-c6bf9555f-grh9k" Mar 14 00:42:20.134748 kubelet[2564]: E0314 00:42:20.133404 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c6bf9555f-grh9k_calico-system(3d73419e-01c4-49cb-a46c-827ae2b5174f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c6bf9555f-grh9k_calico-system(3d73419e-01c4-49cb-a46c-827ae2b5174f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c6bf9555f-grh9k" podUID="3d73419e-01c4-49cb-a46c-827ae2b5174f" Mar 14 00:42:20.405269 kubelet[2564]: I0314 00:42:20.405042 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Mar 14 00:42:20.417285 kubelet[2564]: I0314 00:42:20.416566 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Mar 14 00:42:20.417903 containerd[1476]: time="2026-03-14T00:42:20.416991176Z" level=info msg="StopPodSandbox for \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\"" Mar 14 00:42:20.418474 containerd[1476]: time="2026-03-14T00:42:20.417744093Z" level=info msg="StopPodSandbox for \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\"" Mar 14 00:42:20.419589 containerd[1476]: time="2026-03-14T00:42:20.419428648Z" level=info msg="Ensure that sandbox 54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f in task-service has been cleanup successfully" Mar 14 00:42:20.420215 containerd[1476]: 
time="2026-03-14T00:42:20.419429583Z" level=info msg="Ensure that sandbox 60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad in task-service has been cleanup successfully" Mar 14 00:42:20.430897 kubelet[2564]: I0314 00:42:20.430693 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Mar 14 00:42:20.432038 containerd[1476]: time="2026-03-14T00:42:20.431477138Z" level=info msg="StopPodSandbox for \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\"" Mar 14 00:42:20.432038 containerd[1476]: time="2026-03-14T00:42:20.431716251Z" level=info msg="Ensure that sandbox 6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627 in task-service has been cleanup successfully" Mar 14 00:42:20.473547 kubelet[2564]: I0314 00:42:20.473379 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Mar 14 00:42:20.496366 containerd[1476]: time="2026-03-14T00:42:20.494343732Z" level=info msg="StopPodSandbox for \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\"" Mar 14 00:42:20.496366 containerd[1476]: time="2026-03-14T00:42:20.494513064Z" level=info msg="Ensure that sandbox b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21 in task-service has been cleanup successfully" Mar 14 00:42:20.507921 kubelet[2564]: I0314 00:42:20.507350 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Mar 14 00:42:20.512431 containerd[1476]: time="2026-03-14T00:42:20.511221559Z" level=info msg="StopPodSandbox for \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\"" Mar 14 00:42:20.515548 containerd[1476]: time="2026-03-14T00:42:20.515078428Z" level=info msg="Ensure that sandbox 
91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472 in task-service has been cleanup successfully" Mar 14 00:42:20.523335 kubelet[2564]: I0314 00:42:20.522756 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Mar 14 00:42:20.528262 containerd[1476]: time="2026-03-14T00:42:20.527989017Z" level=info msg="StopPodSandbox for \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\"" Mar 14 00:42:20.528376 containerd[1476]: time="2026-03-14T00:42:20.528279686Z" level=info msg="Ensure that sandbox 6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666 in task-service has been cleanup successfully" Mar 14 00:42:20.529112 kubelet[2564]: I0314 00:42:20.529049 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Mar 14 00:42:20.536710 containerd[1476]: time="2026-03-14T00:42:20.536367457Z" level=info msg="StopPodSandbox for \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\"" Mar 14 00:42:20.540584 kubelet[2564]: I0314 00:42:20.540347 2564 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-kzvpw" podStartSLOduration=2.296571429 podStartE2EDuration="25.540330439s" podCreationTimestamp="2026-03-14 00:41:55 +0000 UTC" firstStartedPulling="2026-03-14 00:41:56.159658866 +0000 UTC m=+39.568200865" lastFinishedPulling="2026-03-14 00:42:19.403417887 +0000 UTC m=+62.811959875" observedRunningTime="2026-03-14 00:42:20.532603128 +0000 UTC m=+63.941145115" watchObservedRunningTime="2026-03-14 00:42:20.540330439 +0000 UTC m=+63.948872447" Mar 14 00:42:20.546094 containerd[1476]: time="2026-03-14T00:42:20.545303356Z" level=info msg="Ensure that sandbox 3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f in task-service has been cleanup successfully" Mar 14 00:42:20.552612 
kubelet[2564]: I0314 00:42:20.552381 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Mar 14 00:42:20.568439 containerd[1476]: time="2026-03-14T00:42:20.568385875Z" level=info msg="StopPodSandbox for \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\"" Mar 14 00:42:20.570252 containerd[1476]: time="2026-03-14T00:42:20.570216183Z" level=info msg="Ensure that sandbox fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf in task-service has been cleanup successfully" Mar 14 00:42:20.724647 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf-shm.mount: Deactivated successfully. Mar 14 00:42:20.725881 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f-shm.mount: Deactivated successfully. Mar 14 00:42:20.725999 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472-shm.mount: Deactivated successfully. Mar 14 00:42:20.726164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666-shm.mount: Deactivated successfully. 
Mar 14 00:42:20.730070 containerd[1476]: time="2026-03-14T00:42:20.729945691Z" level=error msg="StopPodSandbox for \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\" failed" error="failed to destroy network for sandbox \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.730732 kubelet[2564]: E0314 00:42:20.730602 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Mar 14 00:42:20.731102 kubelet[2564]: E0314 00:42:20.730737 2564 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627"} Mar 14 00:42:20.731549 kubelet[2564]: E0314 00:42:20.731433 2564 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc2b7217-6287-4581-a381-8909f4a1c133\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:42:20.732392 kubelet[2564]: E0314 00:42:20.731994 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc2b7217-6287-4581-a381-8909f4a1c133\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6cc9b6b4b7-hm5mx" podUID="bc2b7217-6287-4581-a381-8909f4a1c133" Mar 14 00:42:20.748096 containerd[1476]: time="2026-03-14T00:42:20.747719658Z" level=error msg="StopPodSandbox for \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\" failed" error="failed to destroy network for sandbox \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.748989 kubelet[2564]: E0314 00:42:20.748643 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Mar 14 00:42:20.748989 kubelet[2564]: E0314 00:42:20.748927 2564 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad"} Mar 14 00:42:20.749558 kubelet[2564]: E0314 00:42:20.749278 2564 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c9976b22-5c21-48c9-8ce0-e8ba67196cf5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:42:20.749558 kubelet[2564]: E0314 00:42:20.749377 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c9976b22-5c21-48c9-8ce0-e8ba67196cf5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-5srn6" podUID="c9976b22-5c21-48c9-8ce0-e8ba67196cf5" Mar 14 00:42:20.776089 containerd[1476]: time="2026-03-14T00:42:20.775736170Z" level=error msg="StopPodSandbox for \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\" failed" error="failed to destroy network for sandbox \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.776702 kubelet[2564]: E0314 00:42:20.776595 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Mar 14 00:42:20.776702 kubelet[2564]: E0314 00:42:20.776663 2564 
kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f"} Mar 14 00:42:20.777385 kubelet[2564]: E0314 00:42:20.776712 2564 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"75d5f8e8-c2fc-4c89-839e-202eefdf2a66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:42:20.777385 kubelet[2564]: E0314 00:42:20.776751 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"75d5f8e8-c2fc-4c89-839e-202eefdf2a66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59546697b8-v628l" podUID="75d5f8e8-c2fc-4c89-839e-202eefdf2a66" Mar 14 00:42:20.784023 containerd[1476]: time="2026-03-14T00:42:20.783484664Z" level=error msg="StopPodSandbox for \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\" failed" error="failed to destroy network for sandbox \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.784140 kubelet[2564]: E0314 00:42:20.783973 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Mar 14 00:42:20.784140 kubelet[2564]: E0314 00:42:20.784026 2564 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666"} Mar 14 00:42:20.784140 kubelet[2564]: E0314 00:42:20.784070 2564 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f9092168-e790-4005-b3c3-2b628a935681\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:42:20.784140 kubelet[2564]: E0314 00:42:20.784106 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f9092168-e790-4005-b3c3-2b628a935681\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6cc9b6b4b7-5hht7" podUID="f9092168-e790-4005-b3c3-2b628a935681" Mar 14 00:42:20.811515 containerd[1476]: time="2026-03-14T00:42:20.810735308Z" level=error msg="StopPodSandbox for 
\"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\" failed" error="failed to destroy network for sandbox \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.811703 kubelet[2564]: E0314 00:42:20.811261 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Mar 14 00:42:20.811703 kubelet[2564]: E0314 00:42:20.811323 2564 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf"} Mar 14 00:42:20.811703 kubelet[2564]: E0314 00:42:20.811375 2564 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b85a316-b0b6-4829-b096-e789a4c3c1b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:42:20.811703 kubelet[2564]: E0314 00:42:20.811413 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b85a316-b0b6-4829-b096-e789a4c3c1b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-d76st" podUID="2b85a316-b0b6-4829-b096-e789a4c3c1b0" Mar 14 00:42:20.831922 containerd[1476]: time="2026-03-14T00:42:20.831133274Z" level=error msg="StopPodSandbox for \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\" failed" error="failed to destroy network for sandbox \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.832109 kubelet[2564]: E0314 00:42:20.831696 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Mar 14 00:42:20.832109 kubelet[2564]: E0314 00:42:20.831906 2564 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472"} Mar 14 00:42:20.832109 kubelet[2564]: E0314 00:42:20.831953 2564 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3d73419e-01c4-49cb-a46c-827ae2b5174f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:42:20.832109 kubelet[2564]: E0314 00:42:20.832002 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3d73419e-01c4-49cb-a46c-827ae2b5174f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c6bf9555f-grh9k" podUID="3d73419e-01c4-49cb-a46c-827ae2b5174f" Mar 14 00:42:20.838523 containerd[1476]: time="2026-03-14T00:42:20.838361509Z" level=error msg="StopPodSandbox for \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\" failed" error="failed to destroy network for sandbox \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.840931 kubelet[2564]: E0314 00:42:20.839124 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Mar 14 00:42:20.840931 kubelet[2564]: E0314 00:42:20.839261 2564 kuberuntime_manager.go:1881] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21"} Mar 14 00:42:20.840931 kubelet[2564]: E0314 00:42:20.839602 2564 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c89d384-afe9-4799-8684-82936ec1efea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:42:20.840931 kubelet[2564]: E0314 00:42:20.839955 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c89d384-afe9-4799-8684-82936ec1efea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-7zsnh" podUID="5c89d384-afe9-4799-8684-82936ec1efea" Mar 14 00:42:20.863403 containerd[1476]: time="2026-03-14T00:42:20.862739175Z" level=error msg="StopPodSandbox for \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\" failed" error="failed to destroy network for sandbox \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:42:20.864668 kubelet[2564]: E0314 00:42:20.864412 2564 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Mar 14 00:42:20.864668 kubelet[2564]: E0314 00:42:20.864504 2564 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f"} Mar 14 00:42:20.864668 kubelet[2564]: E0314 00:42:20.864548 2564 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fec3ef51-27dd-462a-9b02-64ae702e6505\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:42:20.864668 kubelet[2564]: E0314 00:42:20.864587 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fec3ef51-27dd-462a-9b02-64ae702e6505\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6px7x" podUID="fec3ef51-27dd-462a-9b02-64ae702e6505" Mar 14 00:42:21.557955 containerd[1476]: time="2026-03-14T00:42:21.557902944Z" level=info msg="StopPodSandbox for \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\"" Mar 14 00:42:21.857444 containerd[1476]: 2026-03-14 
00:42:21.768 [INFO][3969] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Mar 14 00:42:21.857444 containerd[1476]: 2026-03-14 00:42:21.769 [INFO][3969] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" iface="eth0" netns="/var/run/netns/cni-12a8b43a-0ea0-d892-98eb-d5075cb453ee" Mar 14 00:42:21.857444 containerd[1476]: 2026-03-14 00:42:21.769 [INFO][3969] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" iface="eth0" netns="/var/run/netns/cni-12a8b43a-0ea0-d892-98eb-d5075cb453ee" Mar 14 00:42:21.857444 containerd[1476]: 2026-03-14 00:42:21.771 [INFO][3969] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" iface="eth0" netns="/var/run/netns/cni-12a8b43a-0ea0-d892-98eb-d5075cb453ee" Mar 14 00:42:21.857444 containerd[1476]: 2026-03-14 00:42:21.771 [INFO][3969] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Mar 14 00:42:21.857444 containerd[1476]: 2026-03-14 00:42:21.772 [INFO][3969] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Mar 14 00:42:21.857444 containerd[1476]: 2026-03-14 00:42:21.824 [INFO][3992] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" HandleID="k8s-pod-network.54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Workload="localhost-k8s-whisker--59546697b8--v628l-eth0" Mar 14 00:42:21.857444 containerd[1476]: 2026-03-14 00:42:21.824 [INFO][3992] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:42:21.857444 containerd[1476]: 2026-03-14 00:42:21.824 [INFO][3992] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:42:21.857444 containerd[1476]: 2026-03-14 00:42:21.837 [WARNING][3992] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" HandleID="k8s-pod-network.54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Workload="localhost-k8s-whisker--59546697b8--v628l-eth0" Mar 14 00:42:21.857444 containerd[1476]: 2026-03-14 00:42:21.838 [INFO][3992] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" HandleID="k8s-pod-network.54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Workload="localhost-k8s-whisker--59546697b8--v628l-eth0" Mar 14 00:42:21.857444 containerd[1476]: 2026-03-14 00:42:21.843 [INFO][3992] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:42:21.857444 containerd[1476]: 2026-03-14 00:42:21.850 [INFO][3969] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Mar 14 00:42:21.857444 containerd[1476]: time="2026-03-14T00:42:21.856246186Z" level=info msg="TearDown network for sandbox \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\" successfully" Mar 14 00:42:21.857444 containerd[1476]: time="2026-03-14T00:42:21.856283825Z" level=info msg="StopPodSandbox for \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\" returns successfully" Mar 14 00:42:21.861214 systemd[1]: run-netns-cni\x2d12a8b43a\x2d0ea0\x2dd892\x2d98eb\x2dd5075cb453ee.mount: Deactivated successfully. 
Mar 14 00:42:21.986165 kubelet[2564]: I0314 00:42:21.986060 2564 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-whisker-backend-key-pair\") pod \"75d5f8e8-c2fc-4c89-839e-202eefdf2a66\" (UID: \"75d5f8e8-c2fc-4c89-839e-202eefdf2a66\") " Mar 14 00:42:21.987063 kubelet[2564]: I0314 00:42:21.986196 2564 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-kube-api-access-m9zr6\" (UniqueName: \"kubernetes.io/projected/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-kube-api-access-m9zr6\") pod \"75d5f8e8-c2fc-4c89-839e-202eefdf2a66\" (UID: \"75d5f8e8-c2fc-4c89-839e-202eefdf2a66\") " Mar 14 00:42:21.987063 kubelet[2564]: I0314 00:42:21.986228 2564 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-nginx-config\" (UniqueName: \"kubernetes.io/configmap/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-nginx-config\") pod \"75d5f8e8-c2fc-4c89-839e-202eefdf2a66\" (UID: \"75d5f8e8-c2fc-4c89-839e-202eefdf2a66\") " Mar 14 00:42:21.987063 kubelet[2564]: I0314 00:42:21.986260 2564 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-whisker-ca-bundle\") pod \"75d5f8e8-c2fc-4c89-839e-202eefdf2a66\" (UID: \"75d5f8e8-c2fc-4c89-839e-202eefdf2a66\") " Mar 14 00:42:21.987973 kubelet[2564]: I0314 00:42:21.987652 2564 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-whisker-ca-bundle" pod "75d5f8e8-c2fc-4c89-839e-202eefdf2a66" (UID: "75d5f8e8-c2fc-4c89-839e-202eefdf2a66"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:42:21.988457 kubelet[2564]: I0314 00:42:21.988302 2564 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-nginx-config" pod "75d5f8e8-c2fc-4c89-839e-202eefdf2a66" (UID: "75d5f8e8-c2fc-4c89-839e-202eefdf2a66"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:42:22.002723 systemd[1]: var-lib-kubelet-pods-75d5f8e8\x2dc2fc\x2d4c89\x2d839e\x2d202eefdf2a66-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 14 00:42:22.004124 kubelet[2564]: I0314 00:42:22.003151 2564 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-whisker-backend-key-pair" pod "75d5f8e8-c2fc-4c89-839e-202eefdf2a66" (UID: "75d5f8e8-c2fc-4c89-839e-202eefdf2a66"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:42:22.009572 systemd[1]: var-lib-kubelet-pods-75d5f8e8\x2dc2fc\x2d4c89\x2d839e\x2d202eefdf2a66-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm9zr6.mount: Deactivated successfully. Mar 14 00:42:22.010063 kubelet[2564]: I0314 00:42:22.009669 2564 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-kube-api-access-m9zr6" pod "75d5f8e8-c2fc-4c89-839e-202eefdf2a66" (UID: "75d5f8e8-c2fc-4c89-839e-202eefdf2a66"). InnerVolumeSpecName "kube-api-access-m9zr6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:42:22.089596 kubelet[2564]: I0314 00:42:22.089277 2564 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 14 00:42:22.089596 kubelet[2564]: I0314 00:42:22.089517 2564 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 14 00:42:22.089596 kubelet[2564]: I0314 00:42:22.089536 2564 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m9zr6\" (UniqueName: \"kubernetes.io/projected/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-kube-api-access-m9zr6\") on node \"localhost\" DevicePath \"\"" Mar 14 00:42:22.089596 kubelet[2564]: I0314 00:42:22.089553 2564 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/75d5f8e8-c2fc-4c89-839e-202eefdf2a66-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 14 00:42:22.624996 systemd[1]: Removed slice kubepods-besteffort-pod75d5f8e8_c2fc_4c89_839e_202eefdf2a66.slice - libcontainer container kubepods-besteffort-pod75d5f8e8_c2fc_4c89_839e_202eefdf2a66.slice. Mar 14 00:42:22.841623 systemd[1]: Created slice kubepods-besteffort-pod0c9df5e1_cfb0_4b16_a1db_a622e20a49fa.slice - libcontainer container kubepods-besteffort-pod0c9df5e1_cfb0_4b16_a1db_a622e20a49fa.slice. 
Mar 14 00:42:22.933964 kubelet[2564]: I0314 00:42:22.933490 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl7dd\" (UniqueName: \"kubernetes.io/projected/0c9df5e1-cfb0-4b16-a1db-a622e20a49fa-kube-api-access-zl7dd\") pod \"whisker-8495b656d5-krk9f\" (UID: \"0c9df5e1-cfb0-4b16-a1db-a622e20a49fa\") " pod="calico-system/whisker-8495b656d5-krk9f" Mar 14 00:42:22.934561 kubelet[2564]: I0314 00:42:22.934365 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/0c9df5e1-cfb0-4b16-a1db-a622e20a49fa-nginx-config\") pod \"whisker-8495b656d5-krk9f\" (UID: \"0c9df5e1-cfb0-4b16-a1db-a622e20a49fa\") " pod="calico-system/whisker-8495b656d5-krk9f" Mar 14 00:42:22.934561 kubelet[2564]: I0314 00:42:22.934418 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c9df5e1-cfb0-4b16-a1db-a622e20a49fa-whisker-ca-bundle\") pod \"whisker-8495b656d5-krk9f\" (UID: \"0c9df5e1-cfb0-4b16-a1db-a622e20a49fa\") " pod="calico-system/whisker-8495b656d5-krk9f" Mar 14 00:42:22.934561 kubelet[2564]: I0314 00:42:22.934462 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0c9df5e1-cfb0-4b16-a1db-a622e20a49fa-whisker-backend-key-pair\") pod \"whisker-8495b656d5-krk9f\" (UID: \"0c9df5e1-cfb0-4b16-a1db-a622e20a49fa\") " pod="calico-system/whisker-8495b656d5-krk9f" Mar 14 00:42:23.000039 kubelet[2564]: I0314 00:42:22.999711 2564 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="75d5f8e8-c2fc-4c89-839e-202eefdf2a66" path="/var/lib/kubelet/pods/75d5f8e8-c2fc-4c89-839e-202eefdf2a66/volumes" Mar 14 00:42:23.159916 containerd[1476]: time="2026-03-14T00:42:23.159270289Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-8495b656d5-krk9f,Uid:0c9df5e1-cfb0-4b16-a1db-a622e20a49fa,Namespace:calico-system,Attempt:0,}" Mar 14 00:42:23.584403 systemd-networkd[1379]: calie2cf508a849: Link UP Mar 14 00:42:23.586075 systemd-networkd[1379]: calie2cf508a849: Gained carrier Mar 14 00:42:23.602003 kernel: calico-node[4026]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.281 [ERROR][4133] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.316 [INFO][4133] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--8495b656d5--krk9f-eth0 whisker-8495b656d5- calico-system 0c9df5e1-cfb0-4b16-a1db-a622e20a49fa 983 0 2026-03-14 00:42:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8495b656d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8495b656d5-krk9f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie2cf508a849 [] [] }} ContainerID="3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" Namespace="calico-system" Pod="whisker-8495b656d5-krk9f" WorkloadEndpoint="localhost-k8s-whisker--8495b656d5--krk9f-" Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.317 [INFO][4133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" Namespace="calico-system" Pod="whisker-8495b656d5-krk9f" WorkloadEndpoint="localhost-k8s-whisker--8495b656d5--krk9f-eth0" Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.405 [INFO][4169] ipam/ipam_plugin.go 235: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" HandleID="k8s-pod-network.3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" Workload="localhost-k8s-whisker--8495b656d5--krk9f-eth0" Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.431 [INFO][4169] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" HandleID="k8s-pod-network.3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" Workload="localhost-k8s-whisker--8495b656d5--krk9f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003fb180), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8495b656d5-krk9f", "timestamp":"2026-03-14 00:42:23.405122614 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00045f8c0)} Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.432 [INFO][4169] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.432 [INFO][4169] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.433 [INFO][4169] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.447 [INFO][4169] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" host="localhost" Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.463 [INFO][4169] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.480 [INFO][4169] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.484 [INFO][4169] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.490 [INFO][4169] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.490 [INFO][4169] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" host="localhost" Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.495 [INFO][4169] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428 Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.505 [INFO][4169] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" host="localhost" Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.526 [INFO][4169] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" host="localhost" Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.526 [INFO][4169] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" host="localhost" Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.527 [INFO][4169] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:42:23.649054 containerd[1476]: 2026-03-14 00:42:23.527 [INFO][4169] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" HandleID="k8s-pod-network.3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" Workload="localhost-k8s-whisker--8495b656d5--krk9f-eth0" Mar 14 00:42:23.651061 containerd[1476]: 2026-03-14 00:42:23.542 [INFO][4133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" Namespace="calico-system" Pod="whisker-8495b656d5-krk9f" WorkloadEndpoint="localhost-k8s-whisker--8495b656d5--krk9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8495b656d5--krk9f-eth0", GenerateName:"whisker-8495b656d5-", Namespace:"calico-system", SelfLink:"", UID:"0c9df5e1-cfb0-4b16-a1db-a622e20a49fa", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 42, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8495b656d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8495b656d5-krk9f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie2cf508a849", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:42:23.651061 containerd[1476]: 2026-03-14 00:42:23.542 [INFO][4133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" Namespace="calico-system" Pod="whisker-8495b656d5-krk9f" WorkloadEndpoint="localhost-k8s-whisker--8495b656d5--krk9f-eth0" Mar 14 00:42:23.651061 containerd[1476]: 2026-03-14 00:42:23.542 [INFO][4133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2cf508a849 ContainerID="3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" Namespace="calico-system" Pod="whisker-8495b656d5-krk9f" WorkloadEndpoint="localhost-k8s-whisker--8495b656d5--krk9f-eth0" Mar 14 00:42:23.651061 containerd[1476]: 2026-03-14 00:42:23.585 [INFO][4133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" Namespace="calico-system" Pod="whisker-8495b656d5-krk9f" WorkloadEndpoint="localhost-k8s-whisker--8495b656d5--krk9f-eth0" Mar 14 00:42:23.651061 containerd[1476]: 2026-03-14 00:42:23.586 [INFO][4133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" Namespace="calico-system" Pod="whisker-8495b656d5-krk9f" 
WorkloadEndpoint="localhost-k8s-whisker--8495b656d5--krk9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8495b656d5--krk9f-eth0", GenerateName:"whisker-8495b656d5-", Namespace:"calico-system", SelfLink:"", UID:"0c9df5e1-cfb0-4b16-a1db-a622e20a49fa", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 42, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8495b656d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428", Pod:"whisker-8495b656d5-krk9f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie2cf508a849", MAC:"de:9f:d9:df:39:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:42:23.651061 containerd[1476]: 2026-03-14 00:42:23.638 [INFO][4133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428" Namespace="calico-system" Pod="whisker-8495b656d5-krk9f" WorkloadEndpoint="localhost-k8s-whisker--8495b656d5--krk9f-eth0" Mar 14 00:42:23.741195 containerd[1476]: time="2026-03-14T00:42:23.740885138Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:42:23.741195 containerd[1476]: time="2026-03-14T00:42:23.740993378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:42:23.741195 containerd[1476]: time="2026-03-14T00:42:23.741008717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:42:23.742075 containerd[1476]: time="2026-03-14T00:42:23.741672005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:42:23.803372 systemd[1]: Started cri-containerd-3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428.scope - libcontainer container 3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428. Mar 14 00:42:23.905148 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:42:24.105353 systemd[1]: run-containerd-runc-k8s.io-3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428-runc.72Gmbu.mount: Deactivated successfully. 
Mar 14 00:42:24.720173 containerd[1476]: time="2026-03-14T00:42:24.720028157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8495b656d5-krk9f,Uid:0c9df5e1-cfb0-4b16-a1db-a622e20a49fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428\"" Mar 14 00:42:24.841193 containerd[1476]: time="2026-03-14T00:42:24.840326410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 14 00:42:24.964628 systemd-networkd[1379]: calie2cf508a849: Gained IPv6LL Mar 14 00:42:26.157172 systemd-networkd[1379]: vxlan.calico: Link UP Mar 14 00:42:26.157190 systemd-networkd[1379]: vxlan.calico: Gained carrier Mar 14 00:42:27.278189 containerd[1476]: time="2026-03-14T00:42:27.277683728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:27.281431 containerd[1476]: time="2026-03-14T00:42:27.281375159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 14 00:42:27.287303 containerd[1476]: time="2026-03-14T00:42:27.286615126Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:27.315006 containerd[1476]: time="2026-03-14T00:42:27.314936506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:27.319321 systemd-networkd[1379]: vxlan.calico: Gained IPv6LL Mar 14 00:42:27.322389 containerd[1476]: time="2026-03-14T00:42:27.322329699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag 
\"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.481307911s" Mar 14 00:42:27.323343 containerd[1476]: time="2026-03-14T00:42:27.322517857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 14 00:42:27.436575 containerd[1476]: time="2026-03-14T00:42:27.436430320Z" level=info msg="CreateContainer within sandbox \"3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 14 00:42:27.634603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1678576031.mount: Deactivated successfully. Mar 14 00:42:27.664610 containerd[1476]: time="2026-03-14T00:42:27.664462625Z" level=info msg="CreateContainer within sandbox \"3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"803e5cad1f1e37489a01cdf7590ecef148b216596d2933fbbe1ec6449704236d\"" Mar 14 00:42:27.685409 containerd[1476]: time="2026-03-14T00:42:27.685300800Z" level=info msg="StartContainer for \"803e5cad1f1e37489a01cdf7590ecef148b216596d2933fbbe1ec6449704236d\"" Mar 14 00:42:27.858279 systemd[1]: Started cri-containerd-803e5cad1f1e37489a01cdf7590ecef148b216596d2933fbbe1ec6449704236d.scope - libcontainer container 803e5cad1f1e37489a01cdf7590ecef148b216596d2933fbbe1ec6449704236d. 
Mar 14 00:42:28.098129 containerd[1476]: time="2026-03-14T00:42:28.095155628Z" level=info msg="StartContainer for \"803e5cad1f1e37489a01cdf7590ecef148b216596d2933fbbe1ec6449704236d\" returns successfully" Mar 14 00:42:28.215365 containerd[1476]: time="2026-03-14T00:42:28.215001277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 14 00:42:29.604742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3942455417.mount: Deactivated successfully. Mar 14 00:42:29.658651 containerd[1476]: time="2026-03-14T00:42:29.658554859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:29.660467 containerd[1476]: time="2026-03-14T00:42:29.660371758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 14 00:42:29.662969 containerd[1476]: time="2026-03-14T00:42:29.662739808Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:29.669059 containerd[1476]: time="2026-03-14T00:42:29.668967429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:42:29.670500 containerd[1476]: time="2026-03-14T00:42:29.670060670Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.453719299s" Mar 14 00:42:29.670500 containerd[1476]: 
time="2026-03-14T00:42:29.670157669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 14 00:42:29.682491 containerd[1476]: time="2026-03-14T00:42:29.682238143Z" level=info msg="CreateContainer within sandbox \"3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 14 00:42:29.710302 containerd[1476]: time="2026-03-14T00:42:29.710044670Z" level=info msg="CreateContainer within sandbox \"3d954d97e33e97edb160e631af81a7a66973ffbab6f2e9bdb1d2b1b1be157428\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7f13d68d7800b2a313c472d912eed684e948711c7a154f668dbae7b4e23580d1\"" Mar 14 00:42:29.711390 containerd[1476]: time="2026-03-14T00:42:29.711152143Z" level=info msg="StartContainer for \"7f13d68d7800b2a313c472d912eed684e948711c7a154f668dbae7b4e23580d1\"" Mar 14 00:42:29.782347 systemd[1]: Started cri-containerd-7f13d68d7800b2a313c472d912eed684e948711c7a154f668dbae7b4e23580d1.scope - libcontainer container 7f13d68d7800b2a313c472d912eed684e948711c7a154f668dbae7b4e23580d1. 
Mar 14 00:42:29.864082 containerd[1476]: time="2026-03-14T00:42:29.862078868Z" level=info msg="StartContainer for \"7f13d68d7800b2a313c472d912eed684e948711c7a154f668dbae7b4e23580d1\" returns successfully" Mar 14 00:42:31.348595 containerd[1476]: time="2026-03-14T00:42:31.348404700Z" level=info msg="StopPodSandbox for \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\"" Mar 14 00:42:32.182337 kubelet[2564]: E0314 00:42:32.179509 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:42:32.205576 containerd[1476]: time="2026-03-14T00:42:32.205525122Z" level=info msg="StopPodSandbox for \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\"" Mar 14 00:42:32.242162 containerd[1476]: time="2026-03-14T00:42:32.241391210Z" level=info msg="StopPodSandbox for \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\"" Mar 14 00:42:32.495589 kubelet[2564]: I0314 00:42:32.494106 2564 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-8495b656d5-krk9f" podStartSLOduration=5.617391205 podStartE2EDuration="10.493456965s" podCreationTimestamp="2026-03-14 00:42:22 +0000 UTC" firstStartedPulling="2026-03-14 00:42:24.796266721 +0000 UTC m=+68.204808719" lastFinishedPulling="2026-03-14 00:42:29.672332491 +0000 UTC m=+73.080874479" observedRunningTime="2026-03-14 00:42:32.488965539 +0000 UTC m=+75.897507557" watchObservedRunningTime="2026-03-14 00:42:32.493456965 +0000 UTC m=+75.901998984" Mar 14 00:42:33.530768 containerd[1476]: time="2026-03-14T00:42:33.530712023Z" level=info msg="StopPodSandbox for \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\"" Mar 14 00:42:33.571072 containerd[1476]: time="2026-03-14T00:42:33.554142477Z" level=info msg="StopPodSandbox for \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\"" Mar 14 
00:42:33.572715 containerd[1476]: time="2026-03-14T00:42:33.572209151Z" level=info msg="StopPodSandbox for \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\"" Mar 14 00:42:36.309465 containerd[1476]: time="2026-03-14T00:42:36.309344892Z" level=info msg="StopPodSandbox for \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\"" Mar 14 00:42:36.311286 containerd[1476]: 2026-03-14 00:42:33.584 [INFO][4428] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Mar 14 00:42:36.311286 containerd[1476]: 2026-03-14 00:42:33.585 [INFO][4428] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" iface="eth0" netns="/var/run/netns/cni-cd4050c8-27d1-09f9-d45b-e5426537f5be" Mar 14 00:42:36.311286 containerd[1476]: 2026-03-14 00:42:33.587 [INFO][4428] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" iface="eth0" netns="/var/run/netns/cni-cd4050c8-27d1-09f9-d45b-e5426537f5be" Mar 14 00:42:36.311286 containerd[1476]: 2026-03-14 00:42:33.588 [INFO][4428] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" iface="eth0" netns="/var/run/netns/cni-cd4050c8-27d1-09f9-d45b-e5426537f5be" Mar 14 00:42:36.311286 containerd[1476]: 2026-03-14 00:42:33.606 [INFO][4428] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Mar 14 00:42:36.311286 containerd[1476]: 2026-03-14 00:42:33.610 [INFO][4428] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Mar 14 00:42:36.311286 containerd[1476]: 2026-03-14 00:42:33.875 [INFO][4496] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" HandleID="k8s-pod-network.60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Workload="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" Mar 14 00:42:36.311286 containerd[1476]: 2026-03-14 00:42:33.876 [INFO][4496] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:42:36.311286 containerd[1476]: 2026-03-14 00:42:33.876 [INFO][4496] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:42:36.311286 containerd[1476]: 2026-03-14 00:42:33.924 [WARNING][4496] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" HandleID="k8s-pod-network.60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Workload="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" Mar 14 00:42:36.311286 containerd[1476]: 2026-03-14 00:42:33.924 [INFO][4496] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" HandleID="k8s-pod-network.60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Workload="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" Mar 14 00:42:36.311286 containerd[1476]: 2026-03-14 00:42:33.947 [INFO][4496] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:42:36.311286 containerd[1476]: 2026-03-14 00:42:34.602 [INFO][4428] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Mar 14 00:42:36.317150 containerd[1476]: time="2026-03-14T00:42:36.317097908Z" level=info msg="TearDown network for sandbox \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\" successfully" Mar 14 00:42:36.317276 containerd[1476]: time="2026-03-14T00:42:36.317249660Z" level=info msg="StopPodSandbox for \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\" returns successfully" Mar 14 00:42:36.324724 systemd[1]: run-netns-cni\x2dcd4050c8\x2d27d1\x2d09f9\x2dd45b\x2de5426537f5be.mount: Deactivated successfully. 
Mar 14 00:42:36.335314 containerd[1476]: time="2026-03-14T00:42:36.335258330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5srn6,Uid:c9976b22-5c21-48c9-8ce0-e8ba67196cf5,Namespace:calico-system,Attempt:1,}" Mar 14 00:42:36.367270 kubelet[2564]: E0314 00:42:36.367133 2564 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.372s" Mar 14 00:42:36.413008 containerd[1476]: 2026-03-14 00:42:33.543 [INFO][4446] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Mar 14 00:42:36.413008 containerd[1476]: 2026-03-14 00:42:33.543 [INFO][4446] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" iface="eth0" netns="/var/run/netns/cni-20821d47-13ae-9028-6b15-3933236cdc55" Mar 14 00:42:36.413008 containerd[1476]: 2026-03-14 00:42:33.554 [INFO][4446] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" iface="eth0" netns="/var/run/netns/cni-20821d47-13ae-9028-6b15-3933236cdc55" Mar 14 00:42:36.413008 containerd[1476]: 2026-03-14 00:42:33.588 [INFO][4446] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" iface="eth0" netns="/var/run/netns/cni-20821d47-13ae-9028-6b15-3933236cdc55" Mar 14 00:42:36.413008 containerd[1476]: 2026-03-14 00:42:33.588 [INFO][4446] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Mar 14 00:42:36.413008 containerd[1476]: 2026-03-14 00:42:33.588 [INFO][4446] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Mar 14 00:42:36.413008 containerd[1476]: 2026-03-14 00:42:33.881 [INFO][4480] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" HandleID="k8s-pod-network.6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" Mar 14 00:42:36.413008 containerd[1476]: 2026-03-14 00:42:33.882 [INFO][4480] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:42:36.413008 containerd[1476]: 2026-03-14 00:42:33.955 [INFO][4480] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:42:36.413008 containerd[1476]: 2026-03-14 00:42:36.323 [WARNING][4480] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" HandleID="k8s-pod-network.6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" Mar 14 00:42:36.413008 containerd[1476]: 2026-03-14 00:42:36.323 [INFO][4480] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" HandleID="k8s-pod-network.6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" Mar 14 00:42:36.413008 containerd[1476]: 2026-03-14 00:42:36.371 [INFO][4480] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:42:36.413008 containerd[1476]: 2026-03-14 00:42:36.392 [INFO][4446] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Mar 14 00:42:36.427592 containerd[1476]: time="2026-03-14T00:42:36.419768182Z" level=info msg="TearDown network for sandbox \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\" successfully" Mar 14 00:42:36.427592 containerd[1476]: time="2026-03-14T00:42:36.425222871Z" level=info msg="StopPodSandbox for \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\" returns successfully" Mar 14 00:42:36.427028 systemd[1]: run-netns-cni\x2d20821d47\x2d13ae\x2d9028\x2d6b15\x2d3933236cdc55.mount: Deactivated successfully. 
Mar 14 00:42:36.495555 containerd[1476]: time="2026-03-14T00:42:36.495235398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc9b6b4b7-5hht7,Uid:f9092168-e790-4005-b3c3-2b628a935681,Namespace:calico-system,Attempt:1,}" Mar 14 00:42:36.516488 containerd[1476]: 2026-03-14 00:42:33.641 [INFO][4455] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Mar 14 00:42:36.516488 containerd[1476]: 2026-03-14 00:42:33.650 [INFO][4455] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" iface="eth0" netns="/var/run/netns/cni-b2fb0eee-bf68-078a-6e4e-7105cae246a6" Mar 14 00:42:36.516488 containerd[1476]: 2026-03-14 00:42:33.675 [INFO][4455] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" iface="eth0" netns="/var/run/netns/cni-b2fb0eee-bf68-078a-6e4e-7105cae246a6" Mar 14 00:42:36.516488 containerd[1476]: 2026-03-14 00:42:33.676 [INFO][4455] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" iface="eth0" netns="/var/run/netns/cni-b2fb0eee-bf68-078a-6e4e-7105cae246a6" Mar 14 00:42:36.516488 containerd[1476]: 2026-03-14 00:42:33.676 [INFO][4455] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Mar 14 00:42:36.516488 containerd[1476]: 2026-03-14 00:42:33.676 [INFO][4455] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Mar 14 00:42:36.516488 containerd[1476]: 2026-03-14 00:42:36.342 [INFO][4519] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" HandleID="k8s-pod-network.fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Workload="localhost-k8s-coredns--7d764666f9--d76st-eth0" Mar 14 00:42:36.516488 containerd[1476]: 2026-03-14 00:42:36.347 [INFO][4519] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:42:36.516488 containerd[1476]: 2026-03-14 00:42:36.376 [INFO][4519] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:42:36.516488 containerd[1476]: 2026-03-14 00:42:36.417 [WARNING][4519] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" HandleID="k8s-pod-network.fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Workload="localhost-k8s-coredns--7d764666f9--d76st-eth0" Mar 14 00:42:36.516488 containerd[1476]: 2026-03-14 00:42:36.422 [INFO][4519] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" HandleID="k8s-pod-network.fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Workload="localhost-k8s-coredns--7d764666f9--d76st-eth0" Mar 14 00:42:36.516488 containerd[1476]: 2026-03-14 00:42:36.478 [INFO][4519] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:42:36.516488 containerd[1476]: 2026-03-14 00:42:36.511 [INFO][4455] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Mar 14 00:42:36.518334 containerd[1476]: time="2026-03-14T00:42:36.517370495Z" level=info msg="TearDown network for sandbox \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\" successfully" Mar 14 00:42:36.518334 containerd[1476]: time="2026-03-14T00:42:36.517403486Z" level=info msg="StopPodSandbox for \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\" returns successfully" Mar 14 00:42:36.529017 kubelet[2564]: E0314 00:42:36.527520 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:42:36.528672 systemd[1]: run-netns-cni\x2db2fb0eee\x2dbf68\x2d078a\x2d6e4e\x2d7105cae246a6.mount: Deactivated successfully. 
Mar 14 00:42:36.530934 containerd[1476]: time="2026-03-14T00:42:36.529792300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-d76st,Uid:2b85a316-b0b6-4829-b096-e789a4c3c1b0,Namespace:kube-system,Attempt:1,}" Mar 14 00:42:36.705686 containerd[1476]: 2026-03-14 00:42:36.298 [INFO][4497] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Mar 14 00:42:36.705686 containerd[1476]: 2026-03-14 00:42:36.299 [INFO][4497] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" iface="eth0" netns="/var/run/netns/cni-7cb77a7d-e475-a683-3897-44540b57dfbf" Mar 14 00:42:36.705686 containerd[1476]: 2026-03-14 00:42:36.300 [INFO][4497] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" iface="eth0" netns="/var/run/netns/cni-7cb77a7d-e475-a683-3897-44540b57dfbf" Mar 14 00:42:36.705686 containerd[1476]: 2026-03-14 00:42:36.300 [INFO][4497] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" iface="eth0" netns="/var/run/netns/cni-7cb77a7d-e475-a683-3897-44540b57dfbf" Mar 14 00:42:36.705686 containerd[1476]: 2026-03-14 00:42:36.300 [INFO][4497] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Mar 14 00:42:36.705686 containerd[1476]: 2026-03-14 00:42:36.300 [INFO][4497] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Mar 14 00:42:36.705686 containerd[1476]: 2026-03-14 00:42:36.541 [INFO][4561] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" HandleID="k8s-pod-network.3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Workload="localhost-k8s-csi--node--driver--6px7x-eth0" Mar 14 00:42:36.705686 containerd[1476]: 2026-03-14 00:42:36.563 [INFO][4561] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:42:36.705686 containerd[1476]: 2026-03-14 00:42:36.563 [INFO][4561] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:42:36.705686 containerd[1476]: 2026-03-14 00:42:36.613 [WARNING][4561] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" HandleID="k8s-pod-network.3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Workload="localhost-k8s-csi--node--driver--6px7x-eth0" Mar 14 00:42:36.705686 containerd[1476]: 2026-03-14 00:42:36.613 [INFO][4561] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" HandleID="k8s-pod-network.3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Workload="localhost-k8s-csi--node--driver--6px7x-eth0" Mar 14 00:42:36.705686 containerd[1476]: 2026-03-14 00:42:36.664 [INFO][4561] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:42:36.705686 containerd[1476]: 2026-03-14 00:42:36.697 [INFO][4497] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Mar 14 00:42:36.708129 containerd[1476]: time="2026-03-14T00:42:36.706158272Z" level=info msg="TearDown network for sandbox \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\" successfully" Mar 14 00:42:36.708129 containerd[1476]: time="2026-03-14T00:42:36.706445877Z" level=info msg="StopPodSandbox for \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\" returns successfully" Mar 14 00:42:36.719221 containerd[1476]: time="2026-03-14T00:42:36.719098478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6px7x,Uid:fec3ef51-27dd-462a-9b02-64ae702e6505,Namespace:calico-system,Attempt:1,}" Mar 14 00:42:36.730128 containerd[1476]: 2026-03-14 00:42:36.414 [INFO][4525] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Mar 14 00:42:36.730128 containerd[1476]: 2026-03-14 00:42:36.415 [INFO][4525] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" iface="eth0" netns="/var/run/netns/cni-c63255d1-12a0-0b5e-f339-fed2eb6bc59d" Mar 14 00:42:36.730128 containerd[1476]: 2026-03-14 00:42:36.415 [INFO][4525] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" iface="eth0" netns="/var/run/netns/cni-c63255d1-12a0-0b5e-f339-fed2eb6bc59d" Mar 14 00:42:36.730128 containerd[1476]: 2026-03-14 00:42:36.419 [INFO][4525] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" iface="eth0" netns="/var/run/netns/cni-c63255d1-12a0-0b5e-f339-fed2eb6bc59d" Mar 14 00:42:36.730128 containerd[1476]: 2026-03-14 00:42:36.419 [INFO][4525] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Mar 14 00:42:36.730128 containerd[1476]: 2026-03-14 00:42:36.419 [INFO][4525] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Mar 14 00:42:36.730128 containerd[1476]: 2026-03-14 00:42:36.677 [INFO][4591] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" HandleID="k8s-pod-network.b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Workload="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" Mar 14 00:42:36.730128 containerd[1476]: 2026-03-14 00:42:36.677 [INFO][4591] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:42:36.730128 containerd[1476]: 2026-03-14 00:42:36.677 [INFO][4591] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:42:36.730128 containerd[1476]: 2026-03-14 00:42:36.699 [WARNING][4591] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" HandleID="k8s-pod-network.b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Workload="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" Mar 14 00:42:36.730128 containerd[1476]: 2026-03-14 00:42:36.700 [INFO][4591] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" HandleID="k8s-pod-network.b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Workload="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" Mar 14 00:42:36.730128 containerd[1476]: 2026-03-14 00:42:36.712 [INFO][4591] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:42:36.730128 containerd[1476]: 2026-03-14 00:42:36.724 [INFO][4525] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Mar 14 00:42:36.731437 containerd[1476]: time="2026-03-14T00:42:36.730719703Z" level=info msg="TearDown network for sandbox \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\" successfully" Mar 14 00:42:36.731437 containerd[1476]: time="2026-03-14T00:42:36.730754207Z" level=info msg="StopPodSandbox for \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\" returns successfully" Mar 14 00:42:36.737938 kubelet[2564]: E0314 00:42:36.737736 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:42:36.739188 containerd[1476]: time="2026-03-14T00:42:36.739088178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-7zsnh,Uid:5c89d384-afe9-4799-8684-82936ec1efea,Namespace:kube-system,Attempt:1,}" Mar 14 00:42:36.919971 containerd[1476]: 2026-03-14 00:42:36.302 [INFO][4506] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Mar 14 00:42:36.919971 containerd[1476]: 2026-03-14 00:42:36.305 [INFO][4506] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" iface="eth0" netns="/var/run/netns/cni-27b8089e-a819-3f3a-f332-5dcc44c3a627" Mar 14 00:42:36.919971 containerd[1476]: 2026-03-14 00:42:36.306 [INFO][4506] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" iface="eth0" netns="/var/run/netns/cni-27b8089e-a819-3f3a-f332-5dcc44c3a627" Mar 14 00:42:36.919971 containerd[1476]: 2026-03-14 00:42:36.308 [INFO][4506] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" iface="eth0" netns="/var/run/netns/cni-27b8089e-a819-3f3a-f332-5dcc44c3a627" Mar 14 00:42:36.919971 containerd[1476]: 2026-03-14 00:42:36.317 [INFO][4506] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Mar 14 00:42:36.919971 containerd[1476]: 2026-03-14 00:42:36.317 [INFO][4506] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Mar 14 00:42:36.919971 containerd[1476]: 2026-03-14 00:42:36.680 [INFO][4582] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" HandleID="k8s-pod-network.6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" Mar 14 00:42:36.919971 containerd[1476]: 2026-03-14 00:42:36.682 [INFO][4582] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:42:36.919971 containerd[1476]: 2026-03-14 00:42:36.712 [INFO][4582] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:42:36.919971 containerd[1476]: 2026-03-14 00:42:36.795 [WARNING][4582] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" HandleID="k8s-pod-network.6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" Mar 14 00:42:36.919971 containerd[1476]: 2026-03-14 00:42:36.795 [INFO][4582] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" HandleID="k8s-pod-network.6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" Mar 14 00:42:36.919971 containerd[1476]: 2026-03-14 00:42:36.860 [INFO][4582] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:42:36.919971 containerd[1476]: 2026-03-14 00:42:36.895 [INFO][4506] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Mar 14 00:42:36.921295 containerd[1476]: time="2026-03-14T00:42:36.920152279Z" level=info msg="TearDown network for sandbox \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\" successfully" Mar 14 00:42:36.921295 containerd[1476]: time="2026-03-14T00:42:36.920193144Z" level=info msg="StopPodSandbox for \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\" returns successfully" Mar 14 00:42:36.931361 containerd[1476]: time="2026-03-14T00:42:36.931106020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc9b6b4b7-hm5mx,Uid:bc2b7217-6287-4581-a381-8909f4a1c133,Namespace:calico-system,Attempt:1,}" Mar 14 00:42:37.322029 systemd-networkd[1379]: cali6d20d2524e4: Link UP Mar 14 00:42:37.322402 systemd-networkd[1379]: cali6d20d2524e4: Gained carrier Mar 14 00:42:37.330943 systemd[1]: run-netns-cni\x2dc63255d1\x2d12a0\x2d0b5e\x2df339\x2dfed2eb6bc59d.mount: Deactivated successfully. Mar 14 00:42:37.331066 systemd[1]: run-netns-cni\x2d27b8089e\x2da819\x2d3f3a\x2df332\x2d5dcc44c3a627.mount: Deactivated successfully. Mar 14 00:42:37.331746 systemd[1]: run-netns-cni\x2d7cb77a7d\x2de475\x2da683\x2d3897\x2d44540b57dfbf.mount: Deactivated successfully. 
Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:36.861 [INFO][4598] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--5srn6-eth0 goldmane-9f7667bb8- calico-system c9976b22-5c21-48c9-8ce0-e8ba67196cf5 1027 0 2026-03-14 00:41:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-5srn6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6d20d2524e4 [] [] }} ContainerID="0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" Namespace="calico-system" Pod="goldmane-9f7667bb8-5srn6" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--5srn6-" Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:36.868 [INFO][4598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" Namespace="calico-system" Pod="goldmane-9f7667bb8-5srn6" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:37.027 [INFO][4672] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" HandleID="k8s-pod-network.0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" Workload="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:37.055 [INFO][4672] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" HandleID="k8s-pod-network.0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" Workload="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0001a47d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-5srn6", "timestamp":"2026-03-14 00:42:37.027210822 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00071a000)} Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:37.055 [INFO][4672] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:37.055 [INFO][4672] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:37.055 [INFO][4672] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:37.069 [INFO][4672] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" host="localhost" Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:37.095 [INFO][4672] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:37.159 [INFO][4672] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:37.171 [INFO][4672] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:37.187 [INFO][4672] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:37.187 [INFO][4672] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" host="localhost" Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:37.204 [INFO][4672] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12 Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:37.237 [INFO][4672] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" host="localhost" Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:37.264 [INFO][4672] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" host="localhost" Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:37.264 [INFO][4672] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" host="localhost" Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:37.264 [INFO][4672] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:42:37.386353 containerd[1476]: 2026-03-14 00:42:37.264 [INFO][4672] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" HandleID="k8s-pod-network.0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" Workload="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" Mar 14 00:42:37.388328 containerd[1476]: 2026-03-14 00:42:37.292 [INFO][4598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" Namespace="calico-system" Pod="goldmane-9f7667bb8-5srn6" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--5srn6-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"c9976b22-5c21-48c9-8ce0-e8ba67196cf5", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-5srn6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d20d2524e4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:42:37.388328 containerd[1476]: 2026-03-14 00:42:37.292 [INFO][4598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" Namespace="calico-system" Pod="goldmane-9f7667bb8-5srn6" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" Mar 14 00:42:37.388328 containerd[1476]: 2026-03-14 00:42:37.292 [INFO][4598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d20d2524e4 ContainerID="0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" Namespace="calico-system" Pod="goldmane-9f7667bb8-5srn6" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" Mar 14 00:42:37.388328 containerd[1476]: 2026-03-14 00:42:37.323 [INFO][4598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" Namespace="calico-system" Pod="goldmane-9f7667bb8-5srn6" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" Mar 14 00:42:37.388328 containerd[1476]: 2026-03-14 00:42:37.329 [INFO][4598] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" Namespace="calico-system" Pod="goldmane-9f7667bb8-5srn6" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--5srn6-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"c9976b22-5c21-48c9-8ce0-e8ba67196cf5", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 54, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12", Pod:"goldmane-9f7667bb8-5srn6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d20d2524e4", MAC:"2e:d5:46:52:b1:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:42:37.388328 containerd[1476]: 2026-03-14 00:42:37.362 [INFO][4598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12" Namespace="calico-system" Pod="goldmane-9f7667bb8-5srn6" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" Mar 14 00:42:37.467983 systemd-networkd[1379]: cali99d7a86392b: Link UP Mar 14 00:42:37.473992 systemd-networkd[1379]: cali99d7a86392b: Gained carrier Mar 14 00:42:37.493116 containerd[1476]: time="2026-03-14T00:42:37.492801917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:42:37.496917 containerd[1476]: time="2026-03-14T00:42:37.494254522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:42:37.502777 containerd[1476]: time="2026-03-14T00:42:37.502572405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:42:37.505750 containerd[1476]: time="2026-03-14T00:42:37.503296830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.049 [INFO][4624] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0 calico-apiserver-6cc9b6b4b7- calico-system f9092168-e790-4005-b3c3-2b628a935681 1026 0 2026-03-14 00:41:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cc9b6b4b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cc9b6b4b7-5hht7 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali99d7a86392b [] [] }} ContainerID="49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" Namespace="calico-system" Pod="calico-apiserver-6cc9b6b4b7-5hht7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-" Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.050 [INFO][4624] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" Namespace="calico-system" Pod="calico-apiserver-6cc9b6b4b7-5hht7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.246 [INFO][4711] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" HandleID="k8s-pod-network.49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.288 [INFO][4711] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" HandleID="k8s-pod-network.49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e3a50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6cc9b6b4b7-5hht7", "timestamp":"2026-03-14 00:42:37.245993487 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000456c60)} Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.288 [INFO][4711] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.288 [INFO][4711] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.288 [INFO][4711] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.306 [INFO][4711] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" host="localhost" Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.331 [INFO][4711] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.352 [INFO][4711] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.369 [INFO][4711] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.374 [INFO][4711] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.374 [INFO][4711] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" host="localhost" Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.380 [INFO][4711] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.399 [INFO][4711] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" host="localhost" Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.424 [INFO][4711] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" host="localhost" Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.427 [INFO][4711] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" host="localhost" Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.428 [INFO][4711] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:42:37.542322 containerd[1476]: 2026-03-14 00:42:37.428 [INFO][4711] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" HandleID="k8s-pod-network.49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" Mar 14 00:42:37.549357 containerd[1476]: 2026-03-14 00:42:37.439 [INFO][4624] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" Namespace="calico-system" Pod="calico-apiserver-6cc9b6b4b7-5hht7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0", GenerateName:"calico-apiserver-6cc9b6b4b7-", Namespace:"calico-system", SelfLink:"", UID:"f9092168-e790-4005-b3c3-2b628a935681", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc9b6b4b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cc9b6b4b7-5hht7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali99d7a86392b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:42:37.549357 containerd[1476]: 2026-03-14 00:42:37.440 [INFO][4624] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" Namespace="calico-system" Pod="calico-apiserver-6cc9b6b4b7-5hht7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" Mar 14 00:42:37.549357 containerd[1476]: 2026-03-14 00:42:37.440 [INFO][4624] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99d7a86392b ContainerID="49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" Namespace="calico-system" Pod="calico-apiserver-6cc9b6b4b7-5hht7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" Mar 14 00:42:37.549357 containerd[1476]: 2026-03-14 00:42:37.481 [INFO][4624] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" Namespace="calico-system" Pod="calico-apiserver-6cc9b6b4b7-5hht7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" Mar 14 00:42:37.549357 containerd[1476]: 2026-03-14 00:42:37.489 [INFO][4624] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" Namespace="calico-system" Pod="calico-apiserver-6cc9b6b4b7-5hht7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0", GenerateName:"calico-apiserver-6cc9b6b4b7-", Namespace:"calico-system", SelfLink:"", UID:"f9092168-e790-4005-b3c3-2b628a935681", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc9b6b4b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d", Pod:"calico-apiserver-6cc9b6b4b7-5hht7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali99d7a86392b", MAC:"c6:19:d7:1b:7a:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:42:37.549357 containerd[1476]: 2026-03-14 00:42:37.529 [INFO][4624] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d" Namespace="calico-system" Pod="calico-apiserver-6cc9b6b4b7-5hht7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" Mar 14 00:42:37.638451 systemd-networkd[1379]: cali1b1475f8735: Link UP Mar 14 00:42:37.640489 systemd-networkd[1379]: cali1b1475f8735: Gained carrier Mar 14 00:42:37.677448 systemd[1]: Started cri-containerd-0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12.scope - libcontainer container 0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12. Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.045 [INFO][4623] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--d76st-eth0 coredns-7d764666f9- kube-system 2b85a316-b0b6-4829-b096-e789a4c3c1b0 1031 0 2026-03-14 00:41:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-d76st eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1b1475f8735 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" Namespace="kube-system" Pod="coredns-7d764666f9-d76st" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--d76st-" Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.045 [INFO][4623] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" Namespace="kube-system" Pod="coredns-7d764666f9-d76st" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--d76st-eth0" Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.214 [INFO][4709] ipam/ipam_plugin.go 235: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" HandleID="k8s-pod-network.53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" Workload="localhost-k8s-coredns--7d764666f9--d76st-eth0" Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.290 [INFO][4709] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" HandleID="k8s-pod-network.53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" Workload="localhost-k8s-coredns--7d764666f9--d76st-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000406b60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-d76st", "timestamp":"2026-03-14 00:42:37.214494181 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001994a0)} Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.290 [INFO][4709] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.429 [INFO][4709] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.429 [INFO][4709] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.438 [INFO][4709] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" host="localhost" Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.461 [INFO][4709] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.477 [INFO][4709] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.486 [INFO][4709] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.495 [INFO][4709] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.496 [INFO][4709] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" host="localhost" Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.504 [INFO][4709] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.520 [INFO][4709] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" host="localhost" Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.538 [INFO][4709] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" host="localhost" Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.538 [INFO][4709] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" host="localhost" Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.540 [INFO][4709] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:42:37.695061 containerd[1476]: 2026-03-14 00:42:37.540 [INFO][4709] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" HandleID="k8s-pod-network.53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" Workload="localhost-k8s-coredns--7d764666f9--d76st-eth0" Mar 14 00:42:37.697771 containerd[1476]: 2026-03-14 00:42:37.578 [INFO][4623] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" Namespace="kube-system" Pod="coredns-7d764666f9-d76st" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--d76st-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--d76st-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2b85a316-b0b6-4829-b096-e789a4c3c1b0", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-d76st", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b1475f8735", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:42:37.697771 containerd[1476]: 2026-03-14 00:42:37.578 [INFO][4623] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" Namespace="kube-system" Pod="coredns-7d764666f9-d76st" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--d76st-eth0" Mar 14 00:42:37.697771 containerd[1476]: 2026-03-14 00:42:37.578 [INFO][4623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b1475f8735 ContainerID="53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" Namespace="kube-system" Pod="coredns-7d764666f9-d76st" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--d76st-eth0" Mar 14 
00:42:37.697771 containerd[1476]: 2026-03-14 00:42:37.599 [INFO][4623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" Namespace="kube-system" Pod="coredns-7d764666f9-d76st" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--d76st-eth0" Mar 14 00:42:37.697771 containerd[1476]: 2026-03-14 00:42:37.609 [INFO][4623] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" Namespace="kube-system" Pod="coredns-7d764666f9-d76st" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--d76st-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--d76st-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2b85a316-b0b6-4829-b096-e789a4c3c1b0", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d", Pod:"coredns-7d764666f9-d76st", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b1475f8735", 
MAC:"7a:77:7b:fb:e0:8d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:42:37.697771 containerd[1476]: 2026-03-14 00:42:37.671 [INFO][4623] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d" Namespace="kube-system" Pod="coredns-7d764666f9-d76st" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--d76st-eth0" Mar 14 00:42:37.709424 containerd[1476]: 2026-03-14 00:42:36.952 [INFO][4583] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Mar 14 00:42:37.709424 containerd[1476]: 2026-03-14 00:42:36.953 [INFO][4583] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" iface="eth0" netns="/var/run/netns/cni-3dec63c7-e485-0f73-71b8-3c5f016623c5" Mar 14 00:42:37.709424 containerd[1476]: 2026-03-14 00:42:36.953 [INFO][4583] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" iface="eth0" netns="/var/run/netns/cni-3dec63c7-e485-0f73-71b8-3c5f016623c5" Mar 14 00:42:37.709424 containerd[1476]: 2026-03-14 00:42:36.970 [INFO][4583] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" iface="eth0" netns="/var/run/netns/cni-3dec63c7-e485-0f73-71b8-3c5f016623c5" Mar 14 00:42:37.709424 containerd[1476]: 2026-03-14 00:42:36.971 [INFO][4583] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Mar 14 00:42:37.709424 containerd[1476]: 2026-03-14 00:42:36.971 [INFO][4583] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Mar 14 00:42:37.709424 containerd[1476]: 2026-03-14 00:42:37.286 [INFO][4684] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" HandleID="k8s-pod-network.91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Workload="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0" Mar 14 00:42:37.709424 containerd[1476]: 2026-03-14 00:42:37.310 [INFO][4684] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:42:37.709424 containerd[1476]: 2026-03-14 00:42:37.538 [INFO][4684] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:42:37.709424 containerd[1476]: 2026-03-14 00:42:37.571 [WARNING][4684] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" HandleID="k8s-pod-network.91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Workload="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0" Mar 14 00:42:37.709424 containerd[1476]: 2026-03-14 00:42:37.571 [INFO][4684] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" HandleID="k8s-pod-network.91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Workload="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0" Mar 14 00:42:37.709424 containerd[1476]: 2026-03-14 00:42:37.585 [INFO][4684] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:42:37.709424 containerd[1476]: 2026-03-14 00:42:37.663 [INFO][4583] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Mar 14 00:42:37.718212 systemd[1]: run-netns-cni\x2d3dec63c7\x2de485\x2d0f73\x2d71b8\x2d3c5f016623c5.mount: Deactivated successfully. 
Mar 14 00:42:37.751102 containerd[1476]: time="2026-03-14T00:42:37.750092401Z" level=info msg="TearDown network for sandbox \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\" successfully" Mar 14 00:42:37.752921 containerd[1476]: time="2026-03-14T00:42:37.751428811Z" level=info msg="StopPodSandbox for \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\" returns successfully" Mar 14 00:42:37.763948 containerd[1476]: time="2026-03-14T00:42:37.763738522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6bf9555f-grh9k,Uid:3d73419e-01c4-49cb-a46c-827ae2b5174f,Namespace:calico-system,Attempt:1,}" Mar 14 00:42:37.778458 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:42:37.928736 systemd-networkd[1379]: caliaca262809ba: Link UP Mar 14 00:42:38.001418 systemd-networkd[1379]: caliaca262809ba: Gained carrier Mar 14 00:42:38.015780 containerd[1476]: time="2026-03-14T00:42:38.015266574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:42:38.019900 containerd[1476]: time="2026-03-14T00:42:38.019548649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:42:38.019900 containerd[1476]: time="2026-03-14T00:42:38.019757447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:42:38.026645 containerd[1476]: time="2026-03-14T00:42:38.026000073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:42:38.031967 containerd[1476]: time="2026-03-14T00:42:38.028023576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5srn6,Uid:c9976b22-5c21-48c9-8ce0-e8ba67196cf5,Namespace:calico-system,Attempt:1,} returns sandbox id \"0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12\"" Mar 14 00:42:38.036679 containerd[1476]: time="2026-03-14T00:42:38.036128285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:42:38.038088 containerd[1476]: time="2026-03-14T00:42:38.037674585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:42:38.038088 containerd[1476]: time="2026-03-14T00:42:38.037864898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:42:38.055106 containerd[1476]: time="2026-03-14T00:42:38.054142673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 14 00:42:38.057019 containerd[1476]: time="2026-03-14T00:42:38.056957807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.194 [INFO][4653] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6px7x-eth0 csi-node-driver- calico-system fec3ef51-27dd-462a-9b02-64ae702e6505 1037 0 2026-03-14 00:41:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6px7x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaca262809ba [] [] }} ContainerID="5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" Namespace="calico-system" Pod="csi-node-driver-6px7x" WorkloadEndpoint="localhost-k8s-csi--node--driver--6px7x-" Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.194 [INFO][4653] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" Namespace="calico-system" Pod="csi-node-driver-6px7x" WorkloadEndpoint="localhost-k8s-csi--node--driver--6px7x-eth0" Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.295 [INFO][4730] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" HandleID="k8s-pod-network.5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" Workload="localhost-k8s-csi--node--driver--6px7x-eth0" Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.311 [INFO][4730] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" 
HandleID="k8s-pod-network.5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" Workload="localhost-k8s-csi--node--driver--6px7x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000295bf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6px7x", "timestamp":"2026-03-14 00:42:37.295117939 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003971e0)} Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.322 [INFO][4730] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.586 [INFO][4730] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.586 [INFO][4730] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.655 [INFO][4730] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" host="localhost" Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.683 [INFO][4730] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.717 [INFO][4730] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.765 [INFO][4730] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.788 [INFO][4730] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:42:38.207466 
containerd[1476]: 2026-03-14 00:42:37.788 [INFO][4730] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" host="localhost" Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.800 [INFO][4730] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.841 [INFO][4730] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" host="localhost" Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.878 [INFO][4730] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" host="localhost" Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.878 [INFO][4730] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" host="localhost" Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.878 [INFO][4730] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:42:38.207466 containerd[1476]: 2026-03-14 00:42:37.878 [INFO][4730] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" HandleID="k8s-pod-network.5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" Workload="localhost-k8s-csi--node--driver--6px7x-eth0" Mar 14 00:42:38.208796 containerd[1476]: 2026-03-14 00:42:37.909 [INFO][4653] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" Namespace="calico-system" Pod="csi-node-driver-6px7x" WorkloadEndpoint="localhost-k8s-csi--node--driver--6px7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6px7x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fec3ef51-27dd-462a-9b02-64ae702e6505", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6px7x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaca262809ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:42:38.208796 containerd[1476]: 2026-03-14 00:42:37.915 [INFO][4653] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" Namespace="calico-system" Pod="csi-node-driver-6px7x" WorkloadEndpoint="localhost-k8s-csi--node--driver--6px7x-eth0" Mar 14 00:42:38.208796 containerd[1476]: 2026-03-14 00:42:37.915 [INFO][4653] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaca262809ba ContainerID="5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" Namespace="calico-system" Pod="csi-node-driver-6px7x" WorkloadEndpoint="localhost-k8s-csi--node--driver--6px7x-eth0" Mar 14 00:42:38.208796 containerd[1476]: 2026-03-14 00:42:37.942 [INFO][4653] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" Namespace="calico-system" Pod="csi-node-driver-6px7x" WorkloadEndpoint="localhost-k8s-csi--node--driver--6px7x-eth0" Mar 14 00:42:38.208796 containerd[1476]: 2026-03-14 00:42:38.064 [INFO][4653] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" Namespace="calico-system" Pod="csi-node-driver-6px7x" WorkloadEndpoint="localhost-k8s-csi--node--driver--6px7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6px7x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fec3ef51-27dd-462a-9b02-64ae702e6505", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 55, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd", Pod:"csi-node-driver-6px7x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaca262809ba", MAC:"06:77:04:70:c2:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:42:38.208796 containerd[1476]: 2026-03-14 00:42:38.095 [INFO][4653] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd" Namespace="calico-system" Pod="csi-node-driver-6px7x" WorkloadEndpoint="localhost-k8s-csi--node--driver--6px7x-eth0" Mar 14 00:42:38.220205 systemd[1]: Started cri-containerd-53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d.scope - libcontainer container 53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d. Mar 14 00:42:38.228478 systemd[1]: Started cri-containerd-49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d.scope - libcontainer container 49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d. 
Mar 14 00:42:38.280021 systemd-networkd[1379]: calidf922a010e2: Link UP Mar 14 00:42:38.286759 systemd-networkd[1379]: calidf922a010e2: Gained carrier Mar 14 00:42:38.290999 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:42:38.297796 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:42:38.363791 containerd[1476]: time="2026-03-14T00:42:38.360732858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:42:38.363791 containerd[1476]: time="2026-03-14T00:42:38.361039156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:42:38.363791 containerd[1476]: time="2026-03-14T00:42:38.361069102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:42:38.363791 containerd[1476]: time="2026-03-14T00:42:38.361195146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:37.208 [INFO][4685] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0 calico-apiserver-6cc9b6b4b7- calico-system bc2b7217-6287-4581-a381-8909f4a1c133 1036 0 2026-03-14 00:41:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cc9b6b4b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cc9b6b4b7-hm5mx eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calidf922a010e2 [] [] }} ContainerID="3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" Namespace="calico-system" Pod="calico-apiserver-6cc9b6b4b7-hm5mx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-" Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:37.208 [INFO][4685] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" Namespace="calico-system" Pod="calico-apiserver-6cc9b6b4b7-hm5mx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:37.384 [INFO][4738] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" HandleID="k8s-pod-network.3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:37.412 [INFO][4738] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" 
HandleID="k8s-pod-network.3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c1620), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6cc9b6b4b7-hm5mx", "timestamp":"2026-03-14 00:42:37.384203703 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00054c580)} Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:37.412 [INFO][4738] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:37.883 [INFO][4738] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:37.883 [INFO][4738] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:37.958 [INFO][4738] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" host="localhost" Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:38.002 [INFO][4738] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:38.057 [INFO][4738] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:38.080 [INFO][4738] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:38.103 [INFO][4738] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 
00:42:38.405279 containerd[1476]: 2026-03-14 00:42:38.103 [INFO][4738] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" host="localhost" Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:38.110 [INFO][4738] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445 Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:38.185 [INFO][4738] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" host="localhost" Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:38.226 [INFO][4738] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" host="localhost" Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:38.226 [INFO][4738] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" host="localhost" Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:38.226 [INFO][4738] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:42:38.405279 containerd[1476]: 2026-03-14 00:42:38.227 [INFO][4738] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" HandleID="k8s-pod-network.3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" Mar 14 00:42:38.407497 containerd[1476]: 2026-03-14 00:42:38.257 [INFO][4685] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" Namespace="calico-system" Pod="calico-apiserver-6cc9b6b4b7-hm5mx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0", GenerateName:"calico-apiserver-6cc9b6b4b7-", Namespace:"calico-system", SelfLink:"", UID:"bc2b7217-6287-4581-a381-8909f4a1c133", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc9b6b4b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cc9b6b4b7-hm5mx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidf922a010e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:42:38.407497 containerd[1476]: 2026-03-14 00:42:38.259 [INFO][4685] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" Namespace="calico-system" Pod="calico-apiserver-6cc9b6b4b7-hm5mx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" Mar 14 00:42:38.407497 containerd[1476]: 2026-03-14 00:42:38.259 [INFO][4685] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf922a010e2 ContainerID="3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" Namespace="calico-system" Pod="calico-apiserver-6cc9b6b4b7-hm5mx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" Mar 14 00:42:38.407497 containerd[1476]: 2026-03-14 00:42:38.307 [INFO][4685] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" Namespace="calico-system" Pod="calico-apiserver-6cc9b6b4b7-hm5mx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" Mar 14 00:42:38.407497 containerd[1476]: 2026-03-14 00:42:38.311 [INFO][4685] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" Namespace="calico-system" Pod="calico-apiserver-6cc9b6b4b7-hm5mx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0", GenerateName:"calico-apiserver-6cc9b6b4b7-", Namespace:"calico-system", 
SelfLink:"", UID:"bc2b7217-6287-4581-a381-8909f4a1c133", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc9b6b4b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445", Pod:"calico-apiserver-6cc9b6b4b7-hm5mx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidf922a010e2", MAC:"06:28:cf:03:a6:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:42:38.407497 containerd[1476]: 2026-03-14 00:42:38.363 [INFO][4685] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445" Namespace="calico-system" Pod="calico-apiserver-6cc9b6b4b7-hm5mx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" Mar 14 00:42:38.436258 systemd[1]: Started cri-containerd-5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd.scope - libcontainer container 5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd. 
Mar 14 00:42:38.456768 containerd[1476]: time="2026-03-14T00:42:38.456678355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-d76st,Uid:2b85a316-b0b6-4829-b096-e789a4c3c1b0,Namespace:kube-system,Attempt:1,} returns sandbox id \"53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d\"" Mar 14 00:42:38.468709 kubelet[2564]: E0314 00:42:38.468461 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:42:38.495734 containerd[1476]: time="2026-03-14T00:42:38.495691719Z" level=info msg="CreateContainer within sandbox \"53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:42:38.513150 containerd[1476]: time="2026-03-14T00:42:38.513096276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc9b6b4b7-5hht7,Uid:f9092168-e790-4005-b3c3-2b628a935681,Namespace:calico-system,Attempt:1,} returns sandbox id \"49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d\"" Mar 14 00:42:38.531185 systemd-networkd[1379]: calif02273d99b3: Link UP Mar 14 00:42:38.533331 systemd-networkd[1379]: calif02273d99b3: Gained carrier Mar 14 00:42:38.569691 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:42:38.640071 containerd[1476]: time="2026-03-14T00:42:38.638439103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:42:38.640071 containerd[1476]: time="2026-03-14T00:42:38.638523780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:42:38.640071 containerd[1476]: time="2026-03-14T00:42:38.638552823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:42:38.640071 containerd[1476]: time="2026-03-14T00:42:38.638767222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:42:38.646255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount52167715.mount: Deactivated successfully. Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:37.184 [INFO][4652] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--7zsnh-eth0 coredns-7d764666f9- kube-system 5c89d384-afe9-4799-8684-82936ec1efea 1042 0 2026-03-14 00:41:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-7zsnh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif02273d99b3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" Namespace="kube-system" Pod="coredns-7d764666f9-7zsnh" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--7zsnh-" Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:37.185 [INFO][4652] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" Namespace="kube-system" Pod="coredns-7d764666f9-7zsnh" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:37.433 [INFO][4729] ipam/ipam_plugin.go 235: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" HandleID="k8s-pod-network.cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" Workload="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:37.453 [INFO][4729] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" HandleID="k8s-pod-network.cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" Workload="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000407340), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-7zsnh", "timestamp":"2026-03-14 00:42:37.433033807 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000286e0)} Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:37.458 [INFO][4729] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:38.227 [INFO][4729] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:38.227 [INFO][4729] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:38.238 [INFO][4729] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" host="localhost" Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:38.269 [INFO][4729] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:38.297 [INFO][4729] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:38.308 [INFO][4729] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:38.316 [INFO][4729] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:38.316 [INFO][4729] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" host="localhost" Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:38.349 [INFO][4729] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24 Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:38.388 [INFO][4729] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" host="localhost" Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:38.492 [INFO][4729] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" host="localhost" Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:38.493 [INFO][4729] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" host="localhost" Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:38.493 [INFO][4729] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:42:38.667187 containerd[1476]: 2026-03-14 00:42:38.494 [INFO][4729] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" HandleID="k8s-pod-network.cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" Workload="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" Mar 14 00:42:38.668420 containerd[1476]: 2026-03-14 00:42:38.513 [INFO][4652] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" Namespace="kube-system" Pod="coredns-7d764666f9-7zsnh" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--7zsnh-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"5c89d384-afe9-4799-8684-82936ec1efea", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-7zsnh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif02273d99b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:42:38.668420 containerd[1476]: 2026-03-14 00:42:38.515 [INFO][4652] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" Namespace="kube-system" Pod="coredns-7d764666f9-7zsnh" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" Mar 14 00:42:38.668420 containerd[1476]: 2026-03-14 00:42:38.515 [INFO][4652] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif02273d99b3 ContainerID="cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" Namespace="kube-system" Pod="coredns-7d764666f9-7zsnh" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" Mar 14 
00:42:38.668420 containerd[1476]: 2026-03-14 00:42:38.534 [INFO][4652] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" Namespace="kube-system" Pod="coredns-7d764666f9-7zsnh" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" Mar 14 00:42:38.668420 containerd[1476]: 2026-03-14 00:42:38.541 [INFO][4652] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" Namespace="kube-system" Pod="coredns-7d764666f9-7zsnh" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--7zsnh-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"5c89d384-afe9-4799-8684-82936ec1efea", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24", Pod:"coredns-7d764666f9-7zsnh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif02273d99b3", 
MAC:"c2:42:01:69:ed:86", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:42:38.668420 containerd[1476]: 2026-03-14 00:42:38.629 [INFO][4652] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24" Namespace="kube-system" Pod="coredns-7d764666f9-7zsnh" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" Mar 14 00:42:38.686430 containerd[1476]: time="2026-03-14T00:42:38.686394570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6px7x,Uid:fec3ef51-27dd-462a-9b02-64ae702e6505,Namespace:calico-system,Attempt:1,} returns sandbox id \"5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd\"" Mar 14 00:42:38.709552 containerd[1476]: time="2026-03-14T00:42:38.708962617Z" level=info msg="CreateContainer within sandbox \"53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"99eb184e4e5405f2decd9e6e4b3c035e396a8e126fa9abc8d7a5b0e200861f3f\"" Mar 14 00:42:38.725726 containerd[1476]: time="2026-03-14T00:42:38.722779998Z" level=info msg="StartContainer for 
\"99eb184e4e5405f2decd9e6e4b3c035e396a8e126fa9abc8d7a5b0e200861f3f\"" Mar 14 00:42:38.765934 systemd[1]: Started cri-containerd-3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445.scope - libcontainer container 3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445. Mar 14 00:42:38.781061 systemd-networkd[1379]: cali1b1475f8735: Gained IPv6LL Mar 14 00:42:38.800094 systemd-networkd[1379]: calid5d0ccf0978: Link UP Mar 14 00:42:38.801767 systemd-networkd[1379]: calid5d0ccf0978: Gained carrier Mar 14 00:42:38.837908 containerd[1476]: time="2026-03-14T00:42:38.835148395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:42:38.837908 containerd[1476]: time="2026-03-14T00:42:38.835234435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:42:38.837908 containerd[1476]: time="2026-03-14T00:42:38.835250074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:42:38.837908 containerd[1476]: time="2026-03-14T00:42:38.835376858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:42:38.839390 systemd-networkd[1379]: cali99d7a86392b: Gained IPv6LL Mar 14 00:42:38.870391 systemd[1]: Started cri-containerd-99eb184e4e5405f2decd9e6e4b3c035e396a8e126fa9abc8d7a5b0e200861f3f.scope - libcontainer container 99eb184e4e5405f2decd9e6e4b3c035e396a8e126fa9abc8d7a5b0e200861f3f. 
Mar 14 00:42:38.874519 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.118 [INFO][4849] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0 calico-kube-controllers-c6bf9555f- calico-system 3d73419e-01c4-49cb-a46c-827ae2b5174f 1046 0 2026-03-14 00:41:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c6bf9555f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c6bf9555f-grh9k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid5d0ccf0978 [] [] }} ContainerID="9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" Namespace="calico-system" Pod="calico-kube-controllers-c6bf9555f-grh9k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-" Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.118 [INFO][4849] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" Namespace="calico-system" Pod="calico-kube-controllers-c6bf9555f-grh9k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0" Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.495 [INFO][4922] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" HandleID="k8s-pod-network.9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" Workload="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0" Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 
00:42:38.563 [INFO][4922] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" HandleID="k8s-pod-network.9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" Workload="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f4240), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c6bf9555f-grh9k", "timestamp":"2026-03-14 00:42:38.495711421 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000364160)} Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.568 [INFO][4922] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.568 [INFO][4922] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.568 [INFO][4922] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.585 [INFO][4922] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" host="localhost" Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.613 [INFO][4922] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.638 [INFO][4922] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.657 [INFO][4922] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.676 [INFO][4922] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.676 [INFO][4922] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" host="localhost" Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.692 [INFO][4922] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167 Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.723 [INFO][4922] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" host="localhost" Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.762 [INFO][4922] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" host="localhost"
Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.762 [INFO][4922] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" host="localhost"
Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.763 [INFO][4922] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:42:38.875256 containerd[1476]: 2026-03-14 00:42:38.764 [INFO][4922] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" HandleID="k8s-pod-network.9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" Workload="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0"
Mar 14 00:42:38.877195 containerd[1476]: 2026-03-14 00:42:38.778 [INFO][4849] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" Namespace="calico-system" Pod="calico-kube-controllers-c6bf9555f-grh9k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0", GenerateName:"calico-kube-controllers-c6bf9555f-", Namespace:"calico-system", SelfLink:"", UID:"3d73419e-01c4-49cb-a46c-827ae2b5174f", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c6bf9555f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c6bf9555f-grh9k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid5d0ccf0978", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 14 00:42:38.877195 containerd[1476]: 2026-03-14 00:42:38.778 [INFO][4849] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" Namespace="calico-system" Pod="calico-kube-controllers-c6bf9555f-grh9k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0"
Mar 14 00:42:38.877195 containerd[1476]: 2026-03-14 00:42:38.778 [INFO][4849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5d0ccf0978 ContainerID="9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" Namespace="calico-system" Pod="calico-kube-controllers-c6bf9555f-grh9k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0"
Mar 14 00:42:38.877195 containerd[1476]: 2026-03-14 00:42:38.816 [INFO][4849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" Namespace="calico-system" Pod="calico-kube-controllers-c6bf9555f-grh9k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0"
Mar 14 00:42:38.877195 containerd[1476]: 2026-03-14 00:42:38.822 [INFO][4849] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" Namespace="calico-system" Pod="calico-kube-controllers-c6bf9555f-grh9k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0", GenerateName:"calico-kube-controllers-c6bf9555f-", Namespace:"calico-system", SelfLink:"", UID:"3d73419e-01c4-49cb-a46c-827ae2b5174f", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c6bf9555f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167", Pod:"calico-kube-controllers-c6bf9555f-grh9k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid5d0ccf0978", MAC:"ca:77:72:5f:e2:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 14 00:42:38.877195 containerd[1476]: 2026-03-14 00:42:38.862 [INFO][4849] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167" Namespace="calico-system" Pod="calico-kube-controllers-c6bf9555f-grh9k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0"
Mar 14 00:42:38.968507 systemd[1]: Started cri-containerd-cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24.scope - libcontainer container cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24.
Mar 14 00:42:38.988500 kubelet[2564]: E0314 00:42:38.987393 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:42:39.025954 containerd[1476]: time="2026-03-14T00:42:39.024234040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc9b6b4b7-hm5mx,Uid:bc2b7217-6287-4581-a381-8909f4a1c133,Namespace:calico-system,Attempt:1,} returns sandbox id \"3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445\""
Mar 14 00:42:39.031530 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 14 00:42:39.031644 systemd-networkd[1379]: caliaca262809ba: Gained IPv6LL
Mar 14 00:42:39.070286 containerd[1476]: time="2026-03-14T00:42:39.057429178Z" level=info msg="StartContainer for \"99eb184e4e5405f2decd9e6e4b3c035e396a8e126fa9abc8d7a5b0e200861f3f\" returns successfully"
Mar 14 00:42:39.096054 systemd-networkd[1379]: cali6d20d2524e4: Gained IPv6LL
Mar 14 00:42:39.150928 containerd[1476]: time="2026-03-14T00:42:39.148767081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:42:39.150928 containerd[1476]: time="2026-03-14T00:42:39.148964607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:42:39.150928 containerd[1476]: time="2026-03-14T00:42:39.148986779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:42:39.150928 containerd[1476]: time="2026-03-14T00:42:39.149119104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:42:39.180773 containerd[1476]: time="2026-03-14T00:42:39.180569760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-7zsnh,Uid:5c89d384-afe9-4799-8684-82936ec1efea,Namespace:kube-system,Attempt:1,} returns sandbox id \"cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24\""
Mar 14 00:42:39.183104 kubelet[2564]: E0314 00:42:39.182790 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:42:39.200209 containerd[1476]: time="2026-03-14T00:42:39.199776716Z" level=info msg="CreateContainer within sandbox \"cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:42:39.215461 systemd[1]: Started cri-containerd-9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167.scope - libcontainer container 9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167.
Mar 14 00:42:39.268973 containerd[1476]: time="2026-03-14T00:42:39.268447565Z" level=info msg="CreateContainer within sandbox \"cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20e485583b1c8bbba71b92c5f15aad8d15dfdd0293d3ed08e304a2234eaa6300\""
Mar 14 00:42:39.274958 containerd[1476]: time="2026-03-14T00:42:39.274757645Z" level=info msg="StartContainer for \"20e485583b1c8bbba71b92c5f15aad8d15dfdd0293d3ed08e304a2234eaa6300\""
Mar 14 00:42:39.308115 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 14 00:42:39.437440 systemd[1]: Started cri-containerd-20e485583b1c8bbba71b92c5f15aad8d15dfdd0293d3ed08e304a2234eaa6300.scope - libcontainer container 20e485583b1c8bbba71b92c5f15aad8d15dfdd0293d3ed08e304a2234eaa6300.
Mar 14 00:42:39.456065 containerd[1476]: time="2026-03-14T00:42:39.455301916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6bf9555f-grh9k,Uid:3d73419e-01c4-49cb-a46c-827ae2b5174f,Namespace:calico-system,Attempt:1,} returns sandbox id \"9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167\""
Mar 14 00:42:39.467924 kubelet[2564]: E0314 00:42:39.465742 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:42:39.516116 kubelet[2564]: I0314 00:42:39.515369 2564 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-d76st" podStartSLOduration=79.515346959 podStartE2EDuration="1m19.515346959s" podCreationTimestamp="2026-03-14 00:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:42:39.513139441 +0000 UTC m=+82.921681429" watchObservedRunningTime="2026-03-14 00:42:39.515346959 +0000 UTC m=+82.923888957"
Mar 14 00:42:39.585784 containerd[1476]: time="2026-03-14T00:42:39.584954887Z" level=info msg="StartContainer for \"20e485583b1c8bbba71b92c5f15aad8d15dfdd0293d3ed08e304a2234eaa6300\" returns successfully"
Mar 14 00:42:39.609575 systemd-networkd[1379]: calif02273d99b3: Gained IPv6LL
Mar 14 00:42:40.247544 systemd-networkd[1379]: calidf922a010e2: Gained IPv6LL
Mar 14 00:42:40.474050 kubelet[2564]: E0314 00:42:40.473048 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:42:40.483260 kubelet[2564]: E0314 00:42:40.483064 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:42:40.515980 kubelet[2564]: I0314 00:42:40.515273 2564 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-7zsnh" podStartSLOduration=79.515251783 podStartE2EDuration="1m19.515251783s" podCreationTimestamp="2026-03-14 00:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:42:40.513012597 +0000 UTC m=+83.921554584" watchObservedRunningTime="2026-03-14 00:42:40.515251783 +0000 UTC m=+83.923793781"
Mar 14 00:42:40.823531 systemd-networkd[1379]: calid5d0ccf0978: Gained IPv6LL
Mar 14 00:42:40.840127 systemd[1]: Started sshd@7-10.0.0.138:22-10.0.0.1:39388.service - OpenSSH per-connection server daemon (10.0.0.1:39388).
Mar 14 00:42:41.006121 sshd[5288]: Accepted publickey for core from 10.0.0.1 port 39388 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s
Mar 14 00:42:41.010126 sshd[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:42:41.024395 systemd-logind[1459]: New session 8 of user core.
Mar 14 00:42:41.034297 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 14 00:42:41.088027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1964996420.mount: Deactivated successfully.
Mar 14 00:42:41.365470 sshd[5288]: pam_unix(sshd:session): session closed for user core
Mar 14 00:42:41.376037 systemd[1]: sshd@7-10.0.0.138:22-10.0.0.1:39388.service: Deactivated successfully.
Mar 14 00:42:41.383216 systemd[1]: session-8.scope: Deactivated successfully.
Mar 14 00:42:41.390070 systemd-logind[1459]: Session 8 logged out. Waiting for processes to exit.
Mar 14 00:42:41.395636 systemd-logind[1459]: Removed session 8.
Mar 14 00:42:41.494666 kubelet[2564]: E0314 00:42:41.493950 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:42:41.494666 kubelet[2564]: E0314 00:42:41.494367 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:42:42.450534 containerd[1476]: time="2026-03-14T00:42:42.450353378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:42:42.457752 containerd[1476]: time="2026-03-14T00:42:42.457532876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 14 00:42:42.460332 containerd[1476]: time="2026-03-14T00:42:42.460146468Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:42:42.472233 containerd[1476]: time="2026-03-14T00:42:42.472050080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:42:42.474050 containerd[1476]: time="2026-03-14T00:42:42.472958888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.418758647s"
Mar 14 00:42:42.474050 containerd[1476]: time="2026-03-14T00:42:42.473041010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 14 00:42:42.477513 containerd[1476]: time="2026-03-14T00:42:42.477351722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 14 00:42:42.484787 containerd[1476]: time="2026-03-14T00:42:42.484697509Z" level=info msg="CreateContainer within sandbox \"0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 14 00:42:42.500147 kubelet[2564]: E0314 00:42:42.499640 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:42:42.515434 containerd[1476]: time="2026-03-14T00:42:42.515209178Z" level=info msg="CreateContainer within sandbox \"0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3706aefa71a3dfe10b413901fd8661b49b5dd05f16f657f2a867fc37dead10f2\""
Mar 14 00:42:42.517766 containerd[1476]: time="2026-03-14T00:42:42.516223219Z" level=info msg="StartContainer for \"3706aefa71a3dfe10b413901fd8661b49b5dd05f16f657f2a867fc37dead10f2\""
Mar 14 00:42:42.621434 systemd[1]: run-containerd-runc-k8s.io-3706aefa71a3dfe10b413901fd8661b49b5dd05f16f657f2a867fc37dead10f2-runc.uzDCms.mount: Deactivated successfully.
Mar 14 00:42:42.636167 systemd[1]: Started cri-containerd-3706aefa71a3dfe10b413901fd8661b49b5dd05f16f657f2a867fc37dead10f2.scope - libcontainer container 3706aefa71a3dfe10b413901fd8661b49b5dd05f16f657f2a867fc37dead10f2.
Mar 14 00:42:42.768149 containerd[1476]: time="2026-03-14T00:42:42.767187545Z" level=info msg="StartContainer for \"3706aefa71a3dfe10b413901fd8661b49b5dd05f16f657f2a867fc37dead10f2\" returns successfully"
Mar 14 00:42:43.622652 systemd[1]: run-containerd-runc-k8s.io-3706aefa71a3dfe10b413901fd8661b49b5dd05f16f657f2a867fc37dead10f2-runc.A8oWun.mount: Deactivated successfully.
Mar 14 00:42:44.988988 kubelet[2564]: E0314 00:42:44.988748 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:42:45.338471 containerd[1476]: time="2026-03-14T00:42:45.338352446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:42:45.369463 containerd[1476]: time="2026-03-14T00:42:45.369327580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Mar 14 00:42:45.376134 containerd[1476]: time="2026-03-14T00:42:45.375999395Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:42:45.381097 containerd[1476]: time="2026-03-14T00:42:45.380934706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:42:45.382005 containerd[1476]: time="2026-03-14T00:42:45.381898015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.904419758s"
Mar 14 00:42:45.382005 containerd[1476]: time="2026-03-14T00:42:45.381987121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 14 00:42:45.385486 containerd[1476]: time="2026-03-14T00:42:45.385206762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Mar 14 00:42:45.393711 containerd[1476]: time="2026-03-14T00:42:45.393471305Z" level=info msg="CreateContainer within sandbox \"49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 14 00:42:45.423614 containerd[1476]: time="2026-03-14T00:42:45.423372110Z" level=info msg="CreateContainer within sandbox \"49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d769a019ae02e22febdcffb801fe8c269150b4ef7e71d02382fda4121c759e0e\""
Mar 14 00:42:45.424742 containerd[1476]: time="2026-03-14T00:42:45.424624987Z" level=info msg="StartContainer for \"d769a019ae02e22febdcffb801fe8c269150b4ef7e71d02382fda4121c759e0e\""
Mar 14 00:42:45.554316 systemd[1]: Started cri-containerd-d769a019ae02e22febdcffb801fe8c269150b4ef7e71d02382fda4121c759e0e.scope - libcontainer container d769a019ae02e22febdcffb801fe8c269150b4ef7e71d02382fda4121c759e0e.
Mar 14 00:42:45.648452 containerd[1476]: time="2026-03-14T00:42:45.648098360Z" level=info msg="StartContainer for \"d769a019ae02e22febdcffb801fe8c269150b4ef7e71d02382fda4121c759e0e\" returns successfully"
Mar 14 00:42:46.403633 systemd[1]: Started sshd@8-10.0.0.138:22-10.0.0.1:39394.service - OpenSSH per-connection server daemon (10.0.0.1:39394).
Mar 14 00:42:46.588764 sshd[5471]: Accepted publickey for core from 10.0.0.1 port 39394 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s
Mar 14 00:42:46.591518 sshd[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:42:46.606301 systemd-logind[1459]: New session 9 of user core.
Mar 14 00:42:46.623087 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 14 00:42:46.664151 kubelet[2564]: I0314 00:42:46.661711 2564 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-5srn6" podStartSLOduration=48.236410652 podStartE2EDuration="52.6616912s" podCreationTimestamp="2026-03-14 00:41:54 +0000 UTC" firstStartedPulling="2026-03-14 00:42:38.050233519 +0000 UTC m=+81.458775508" lastFinishedPulling="2026-03-14 00:42:42.475514068 +0000 UTC m=+85.884056056" observedRunningTime="2026-03-14 00:42:43.592963201 +0000 UTC m=+87.001505199" watchObservedRunningTime="2026-03-14 00:42:46.6616912 +0000 UTC m=+90.070233187"
Mar 14 00:42:46.664151 kubelet[2564]: I0314 00:42:46.662947 2564 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6cc9b6b4b7-5hht7" podStartSLOduration=46.803514004 podStartE2EDuration="53.662930221s" podCreationTimestamp="2026-03-14 00:41:53 +0000 UTC" firstStartedPulling="2026-03-14 00:42:38.525068983 +0000 UTC m=+81.933610981" lastFinishedPulling="2026-03-14 00:42:45.38448521 +0000 UTC m=+88.793027198" observedRunningTime="2026-03-14 00:42:46.637388716 +0000 UTC m=+90.045930725" watchObservedRunningTime="2026-03-14 00:42:46.662930221 +0000 UTC m=+90.071472239"
Mar 14 00:42:47.275926 containerd[1476]: time="2026-03-14T00:42:47.275707079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:42:47.281165 containerd[1476]: time="2026-03-14T00:42:47.280295602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502"
Mar 14 00:42:47.284636 containerd[1476]: time="2026-03-14T00:42:47.284099866Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:42:47.305198 sshd[5471]: pam_unix(sshd:session): session closed for user core
Mar 14 00:42:47.312612 containerd[1476]: time="2026-03-14T00:42:47.312508848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:42:47.315184 containerd[1476]: time="2026-03-14T00:42:47.315028908Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.929775348s"
Mar 14 00:42:47.315184 containerd[1476]: time="2026-03-14T00:42:47.315068872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\""
Mar 14 00:42:47.317799 systemd[1]: sshd@8-10.0.0.138:22-10.0.0.1:39394.service: Deactivated successfully.
Mar 14 00:42:47.320068 containerd[1476]: time="2026-03-14T00:42:47.320029144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 14 00:42:47.323778 systemd[1]: session-9.scope: Deactivated successfully.
Mar 14 00:42:47.328887 systemd-logind[1459]: Session 9 logged out. Waiting for processes to exit.
Mar 14 00:42:47.332467 containerd[1476]: time="2026-03-14T00:42:47.332091155Z" level=info msg="CreateContainer within sandbox \"5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 14 00:42:47.333022 systemd-logind[1459]: Removed session 9.
Mar 14 00:42:47.410985 containerd[1476]: time="2026-03-14T00:42:47.410624945Z" level=info msg="CreateContainer within sandbox \"5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"66bb76d9b8c350ad17ef54552d8521ae31c539f5bb243ff45bee574ac0e5ab8b\""
Mar 14 00:42:47.413128 containerd[1476]: time="2026-03-14T00:42:47.412033974Z" level=info msg="StartContainer for \"66bb76d9b8c350ad17ef54552d8521ae31c539f5bb243ff45bee574ac0e5ab8b\""
Mar 14 00:42:47.491741 containerd[1476]: time="2026-03-14T00:42:47.485957523Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:42:47.491741 containerd[1476]: time="2026-03-14T00:42:47.486029767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 14 00:42:47.491741 containerd[1476]: time="2026-03-14T00:42:47.491443070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 171.375435ms"
Mar 14 00:42:47.491741 containerd[1476]: time="2026-03-14T00:42:47.491482444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 14 00:42:47.498955 containerd[1476]: time="2026-03-14T00:42:47.495785335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Mar 14 00:42:47.509405 containerd[1476]: time="2026-03-14T00:42:47.509358964Z" level=info msg="CreateContainer within sandbox \"3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 14 00:42:47.519925 systemd[1]: Started cri-containerd-66bb76d9b8c350ad17ef54552d8521ae31c539f5bb243ff45bee574ac0e5ab8b.scope - libcontainer container 66bb76d9b8c350ad17ef54552d8521ae31c539f5bb243ff45bee574ac0e5ab8b.
Mar 14 00:42:47.639126 containerd[1476]: time="2026-03-14T00:42:47.639072866Z" level=info msg="CreateContainer within sandbox \"3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8b0acdc2dbfa6184e0973542851690683b2a7eeb20eebfa6b9c6e0fe017ceaf1\""
Mar 14 00:42:47.641002 containerd[1476]: time="2026-03-14T00:42:47.640732120Z" level=info msg="StartContainer for \"8b0acdc2dbfa6184e0973542851690683b2a7eeb20eebfa6b9c6e0fe017ceaf1\""
Mar 14 00:42:47.669380 containerd[1476]: time="2026-03-14T00:42:47.669281448Z" level=info msg="StartContainer for \"66bb76d9b8c350ad17ef54552d8521ae31c539f5bb243ff45bee574ac0e5ab8b\" returns successfully"
Mar 14 00:42:47.760342 systemd[1]: Started cri-containerd-8b0acdc2dbfa6184e0973542851690683b2a7eeb20eebfa6b9c6e0fe017ceaf1.scope - libcontainer container 8b0acdc2dbfa6184e0973542851690683b2a7eeb20eebfa6b9c6e0fe017ceaf1.
Mar 14 00:42:47.910071 containerd[1476]: time="2026-03-14T00:42:47.909494446Z" level=info msg="StartContainer for \"8b0acdc2dbfa6184e0973542851690683b2a7eeb20eebfa6b9c6e0fe017ceaf1\" returns successfully"
Mar 14 00:42:48.408022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1917223538.mount: Deactivated successfully.
Mar 14 00:42:48.693342 kubelet[2564]: I0314 00:42:48.692338 2564 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6cc9b6b4b7-hm5mx" podStartSLOduration=47.255637077 podStartE2EDuration="55.692320831s" podCreationTimestamp="2026-03-14 00:41:53 +0000 UTC" firstStartedPulling="2026-03-14 00:42:39.058428022 +0000 UTC m=+82.466970011" lastFinishedPulling="2026-03-14 00:42:47.495111777 +0000 UTC m=+90.903653765" observedRunningTime="2026-03-14 00:42:48.689510951 +0000 UTC m=+92.098052940" watchObservedRunningTime="2026-03-14 00:42:48.692320831 +0000 UTC m=+92.100862839"
Mar 14 00:42:50.519627 containerd[1476]: time="2026-03-14T00:42:50.519373200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:42:50.522685 containerd[1476]: time="2026-03-14T00:42:50.522614428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Mar 14 00:42:50.526057 containerd[1476]: time="2026-03-14T00:42:50.525055483Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:42:50.531936 containerd[1476]: time="2026-03-14T00:42:50.531020959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:42:50.532336 containerd[1476]: time="2026-03-14T00:42:50.531949724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.035828064s"
Mar 14 00:42:50.532336 containerd[1476]: time="2026-03-14T00:42:50.531990171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Mar 14 00:42:50.540980 containerd[1476]: time="2026-03-14T00:42:50.540741671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 14 00:42:50.600903 containerd[1476]: time="2026-03-14T00:42:50.600446667Z" level=info msg="CreateContainer within sandbox \"9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 14 00:42:50.666175 kubelet[2564]: I0314 00:42:50.665942 2564 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:42:50.773315 containerd[1476]: time="2026-03-14T00:42:50.770000714Z" level=info msg="CreateContainer within sandbox \"9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ac771c077c247988da0518a8c8cbeb02841d6f21ce6e147242b46ae9e5d0fbcf\""
Mar 14 00:42:50.773315 containerd[1476]: time="2026-03-14T00:42:50.771474282Z" level=info msg="StartContainer for \"ac771c077c247988da0518a8c8cbeb02841d6f21ce6e147242b46ae9e5d0fbcf\""
Mar 14 00:42:50.917195 systemd[1]: Started cri-containerd-ac771c077c247988da0518a8c8cbeb02841d6f21ce6e147242b46ae9e5d0fbcf.scope - libcontainer container ac771c077c247988da0518a8c8cbeb02841d6f21ce6e147242b46ae9e5d0fbcf.
Mar 14 00:42:51.078043 containerd[1476]: time="2026-03-14T00:42:51.075503469Z" level=info msg="StartContainer for \"ac771c077c247988da0518a8c8cbeb02841d6f21ce6e147242b46ae9e5d0fbcf\" returns successfully"
Mar 14 00:42:52.317365 systemd[1]: Started sshd@9-10.0.0.138:22-10.0.0.1:44664.service - OpenSSH per-connection server daemon (10.0.0.1:44664).
Mar 14 00:42:52.442383 sshd[5656]: Accepted publickey for core from 10.0.0.1 port 44664 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s
Mar 14 00:42:52.446200 sshd[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:42:52.457138 systemd-logind[1459]: New session 10 of user core.
Mar 14 00:42:52.469272 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 14 00:42:52.523687 containerd[1476]: time="2026-03-14T00:42:52.523463420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:42:52.525576 containerd[1476]: time="2026-03-14T00:42:52.525401586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Mar 14 00:42:52.528141 containerd[1476]: time="2026-03-14T00:42:52.528070990Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:42:52.534310 containerd[1476]: time="2026-03-14T00:42:52.534142235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:42:52.536056 containerd[1476]: time="2026-03-14T00:42:52.535719886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.994938611s"
Mar 14 00:42:52.536633 containerd[1476]: time="2026-03-14T00:42:52.536467395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 14 00:42:52.551238 containerd[1476]: time="2026-03-14T00:42:52.549791289Z" level=info msg="CreateContainer within sandbox \"5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 14 00:42:52.628053 systemd[1]: run-containerd-runc-k8s.io-c7aa132eb281d6761b3b903e28b3379704de6f53efaafe39cf3ac627e60ad8b6-runc.xAa9cs.mount: Deactivated successfully.
Mar 14 00:42:52.638952 containerd[1476]: time="2026-03-14T00:42:52.638732752Z" level=info msg="CreateContainer within sandbox \"5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e67b7811f68d2d0710aad990e7b83bf5639f27d66b332afc7f7fea24b2b7dab0\""
Mar 14 00:42:52.642576 containerd[1476]: time="2026-03-14T00:42:52.640726436Z" level=info msg="StartContainer for \"e67b7811f68d2d0710aad990e7b83bf5639f27d66b332afc7f7fea24b2b7dab0\""
Mar 14 00:42:52.754256 systemd[1]: Started cri-containerd-e67b7811f68d2d0710aad990e7b83bf5639f27d66b332afc7f7fea24b2b7dab0.scope - libcontainer container e67b7811f68d2d0710aad990e7b83bf5639f27d66b332afc7f7fea24b2b7dab0.
Mar 14 00:42:52.923059 containerd[1476]: time="2026-03-14T00:42:52.921003137Z" level=info msg="StartContainer for \"e67b7811f68d2d0710aad990e7b83bf5639f27d66b332afc7f7fea24b2b7dab0\" returns successfully" Mar 14 00:42:53.022737 kubelet[2564]: I0314 00:42:53.021936 2564 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c6bf9555f-grh9k" podStartSLOduration=46.95040108 podStartE2EDuration="58.021800526s" podCreationTimestamp="2026-03-14 00:41:55 +0000 UTC" firstStartedPulling="2026-03-14 00:42:39.467513308 +0000 UTC m=+82.876055316" lastFinishedPulling="2026-03-14 00:42:50.538912774 +0000 UTC m=+93.947454762" observedRunningTime="2026-03-14 00:42:51.701269632 +0000 UTC m=+95.109811630" watchObservedRunningTime="2026-03-14 00:42:53.021800526 +0000 UTC m=+96.430342514" Mar 14 00:42:53.287303 sshd[5656]: pam_unix(sshd:session): session closed for user core Mar 14 00:42:53.294271 systemd[1]: sshd@9-10.0.0.138:22-10.0.0.1:44664.service: Deactivated successfully. Mar 14 00:42:53.298235 systemd[1]: session-10.scope: Deactivated successfully. Mar 14 00:42:53.302639 systemd-logind[1459]: Session 10 logged out. Waiting for processes to exit. Mar 14 00:42:53.307925 systemd-logind[1459]: Removed session 10. Mar 14 00:42:53.382198 kubelet[2564]: I0314 00:42:53.382033 2564 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 14 00:42:53.388591 kubelet[2564]: I0314 00:42:53.388403 2564 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 14 00:42:58.334795 systemd[1]: Started sshd@10-10.0.0.138:22-10.0.0.1:44674.service - OpenSSH per-connection server daemon (10.0.0.1:44674). 
Mar 14 00:42:58.428239 sshd[5755]: Accepted publickey for core from 10.0.0.1 port 44674 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:42:58.431086 sshd[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:42:58.455708 systemd-logind[1459]: New session 11 of user core. Mar 14 00:42:58.470327 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 14 00:42:58.801768 sshd[5755]: pam_unix(sshd:session): session closed for user core Mar 14 00:42:58.819699 systemd[1]: sshd@10-10.0.0.138:22-10.0.0.1:44674.service: Deactivated successfully. Mar 14 00:42:58.824161 systemd[1]: session-11.scope: Deactivated successfully. Mar 14 00:42:58.826124 systemd-logind[1459]: Session 11 logged out. Waiting for processes to exit. Mar 14 00:42:58.829225 systemd-logind[1459]: Removed session 11. Mar 14 00:43:02.987980 kubelet[2564]: E0314 00:43:02.987691 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:43:03.839564 systemd[1]: Started sshd@11-10.0.0.138:22-10.0.0.1:55726.service - OpenSSH per-connection server daemon (10.0.0.1:55726). Mar 14 00:43:03.937529 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 55726 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:43:03.940332 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:43:03.962152 systemd-logind[1459]: New session 12 of user core. Mar 14 00:43:03.970252 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 14 00:43:04.191211 sshd[5779]: pam_unix(sshd:session): session closed for user core Mar 14 00:43:04.199785 systemd[1]: sshd@11-10.0.0.138:22-10.0.0.1:55726.service: Deactivated successfully. Mar 14 00:43:04.203619 systemd[1]: session-12.scope: Deactivated successfully. 
Mar 14 00:43:04.205545 systemd-logind[1459]: Session 12 logged out. Waiting for processes to exit. Mar 14 00:43:04.208424 systemd-logind[1459]: Removed session 12. Mar 14 00:43:09.217312 systemd[1]: Started sshd@12-10.0.0.138:22-10.0.0.1:55734.service - OpenSSH per-connection server daemon (10.0.0.1:55734). Mar 14 00:43:09.293120 sshd[5804]: Accepted publickey for core from 10.0.0.1 port 55734 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:43:09.296675 sshd[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:43:09.309562 systemd-logind[1459]: New session 13 of user core. Mar 14 00:43:09.323352 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 14 00:43:09.565736 sshd[5804]: pam_unix(sshd:session): session closed for user core Mar 14 00:43:09.573033 systemd[1]: sshd@12-10.0.0.138:22-10.0.0.1:55734.service: Deactivated successfully. Mar 14 00:43:09.576694 systemd[1]: session-13.scope: Deactivated successfully. Mar 14 00:43:09.578780 systemd-logind[1459]: Session 13 logged out. Waiting for processes to exit. Mar 14 00:43:09.583210 systemd-logind[1459]: Removed session 13. Mar 14 00:43:14.599905 systemd[1]: Started sshd@13-10.0.0.138:22-10.0.0.1:58024.service - OpenSSH per-connection server daemon (10.0.0.1:58024). Mar 14 00:43:14.795618 sshd[5825]: Accepted publickey for core from 10.0.0.1 port 58024 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:43:14.800059 sshd[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:43:14.814344 systemd-logind[1459]: New session 14 of user core. Mar 14 00:43:14.826548 systemd[1]: Started session-14.scope - Session 14 of User core. 
Mar 14 00:43:14.836612 kubelet[2564]: I0314 00:43:14.832094 2564 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-6px7x" podStartSLOduration=65.988687475 podStartE2EDuration="1m19.832078431s" podCreationTimestamp="2026-03-14 00:41:55 +0000 UTC" firstStartedPulling="2026-03-14 00:42:38.694703288 +0000 UTC m=+82.103245276" lastFinishedPulling="2026-03-14 00:42:52.538094244 +0000 UTC m=+95.946636232" observedRunningTime="2026-03-14 00:42:53.739965228 +0000 UTC m=+97.148507236" watchObservedRunningTime="2026-03-14 00:43:14.832078431 +0000 UTC m=+118.240620420" Mar 14 00:43:15.071959 sshd[5825]: pam_unix(sshd:session): session closed for user core Mar 14 00:43:15.082064 systemd[1]: sshd@13-10.0.0.138:22-10.0.0.1:58024.service: Deactivated successfully. Mar 14 00:43:15.085708 systemd[1]: session-14.scope: Deactivated successfully. Mar 14 00:43:15.087340 systemd-logind[1459]: Session 14 logged out. Waiting for processes to exit. Mar 14 00:43:15.090798 systemd-logind[1459]: Removed session 14. Mar 14 00:43:16.913056 containerd[1476]: time="2026-03-14T00:43:16.913005007Z" level=info msg="StopPodSandbox for \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\"" Mar 14 00:43:17.340123 containerd[1476]: 2026-03-14 00:43:17.094 [WARNING][5864] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0", GenerateName:"calico-apiserver-6cc9b6b4b7-", Namespace:"calico-system", SelfLink:"", UID:"f9092168-e790-4005-b3c3-2b628a935681", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc9b6b4b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d", Pod:"calico-apiserver-6cc9b6b4b7-5hht7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali99d7a86392b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:43:17.340123 containerd[1476]: 2026-03-14 00:43:17.095 [INFO][5864] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Mar 14 00:43:17.340123 containerd[1476]: 2026-03-14 00:43:17.095 [INFO][5864] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" iface="eth0" netns="" Mar 14 00:43:17.340123 containerd[1476]: 2026-03-14 00:43:17.095 [INFO][5864] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Mar 14 00:43:17.340123 containerd[1476]: 2026-03-14 00:43:17.095 [INFO][5864] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Mar 14 00:43:17.340123 containerd[1476]: 2026-03-14 00:43:17.301 [INFO][5875] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" HandleID="k8s-pod-network.6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" Mar 14 00:43:17.340123 containerd[1476]: 2026-03-14 00:43:17.305 [INFO][5875] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:43:17.340123 containerd[1476]: 2026-03-14 00:43:17.307 [INFO][5875] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:43:17.340123 containerd[1476]: 2026-03-14 00:43:17.324 [WARNING][5875] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" HandleID="k8s-pod-network.6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" Mar 14 00:43:17.340123 containerd[1476]: 2026-03-14 00:43:17.324 [INFO][5875] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" HandleID="k8s-pod-network.6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" Mar 14 00:43:17.340123 containerd[1476]: 2026-03-14 00:43:17.329 [INFO][5875] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:43:17.340123 containerd[1476]: 2026-03-14 00:43:17.334 [INFO][5864] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Mar 14 00:43:17.379778 containerd[1476]: time="2026-03-14T00:43:17.379353256Z" level=info msg="TearDown network for sandbox \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\" successfully" Mar 14 00:43:17.379778 containerd[1476]: time="2026-03-14T00:43:17.379528452Z" level=info msg="StopPodSandbox for \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\" returns successfully" Mar 14 00:43:17.425203 containerd[1476]: time="2026-03-14T00:43:17.425007426Z" level=info msg="RemovePodSandbox for \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\"" Mar 14 00:43:17.432675 containerd[1476]: time="2026-03-14T00:43:17.431689212Z" level=info msg="Forcibly stopping sandbox \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\"" Mar 14 00:43:17.694018 containerd[1476]: 2026-03-14 00:43:17.584 [WARNING][5892] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0", GenerateName:"calico-apiserver-6cc9b6b4b7-", Namespace:"calico-system", SelfLink:"", UID:"f9092168-e790-4005-b3c3-2b628a935681", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc9b6b4b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49da81bb870bfad874c91a8011ad5ebd4ad28bd55ba81a5d7a00c76a5553d50d", Pod:"calico-apiserver-6cc9b6b4b7-5hht7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali99d7a86392b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:43:17.694018 containerd[1476]: 2026-03-14 00:43:17.584 [INFO][5892] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Mar 14 00:43:17.694018 containerd[1476]: 2026-03-14 00:43:17.584 [INFO][5892] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" iface="eth0" netns="" Mar 14 00:43:17.694018 containerd[1476]: 2026-03-14 00:43:17.585 [INFO][5892] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Mar 14 00:43:17.694018 containerd[1476]: 2026-03-14 00:43:17.585 [INFO][5892] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Mar 14 00:43:17.694018 containerd[1476]: 2026-03-14 00:43:17.657 [INFO][5900] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" HandleID="k8s-pod-network.6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" Mar 14 00:43:17.694018 containerd[1476]: 2026-03-14 00:43:17.658 [INFO][5900] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:43:17.694018 containerd[1476]: 2026-03-14 00:43:17.658 [INFO][5900] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:43:17.694018 containerd[1476]: 2026-03-14 00:43:17.674 [WARNING][5900] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" HandleID="k8s-pod-network.6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" Mar 14 00:43:17.694018 containerd[1476]: 2026-03-14 00:43:17.674 [INFO][5900] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" HandleID="k8s-pod-network.6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--5hht7-eth0" Mar 14 00:43:17.694018 containerd[1476]: 2026-03-14 00:43:17.678 [INFO][5900] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:43:17.694018 containerd[1476]: 2026-03-14 00:43:17.685 [INFO][5892] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666" Mar 14 00:43:17.694018 containerd[1476]: time="2026-03-14T00:43:17.693956067Z" level=info msg="TearDown network for sandbox \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\" successfully" Mar 14 00:43:17.763921 containerd[1476]: time="2026-03-14T00:43:17.763653967Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:43:17.764184 containerd[1476]: time="2026-03-14T00:43:17.763918238Z" level=info msg="RemovePodSandbox \"6634748c75e5abffc7770b1d59a6a82c8f0d23b762e3b98290a49dfa63024666\" returns successfully" Mar 14 00:43:17.779055 containerd[1476]: time="2026-03-14T00:43:17.777794515Z" level=info msg="StopPodSandbox for \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\"" Mar 14 00:43:18.070298 containerd[1476]: 2026-03-14 00:43:17.917 [WARNING][5918] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6px7x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fec3ef51-27dd-462a-9b02-64ae702e6505", ResourceVersion:"1262", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd", Pod:"csi-node-driver-6px7x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaca262809ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:43:18.070298 containerd[1476]: 2026-03-14 00:43:17.918 [INFO][5918] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Mar 14 00:43:18.070298 containerd[1476]: 2026-03-14 00:43:17.918 [INFO][5918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" iface="eth0" netns="" Mar 14 00:43:18.070298 containerd[1476]: 2026-03-14 00:43:17.918 [INFO][5918] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Mar 14 00:43:18.070298 containerd[1476]: 2026-03-14 00:43:17.918 [INFO][5918] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Mar 14 00:43:18.070298 containerd[1476]: 2026-03-14 00:43:18.014 [INFO][5926] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" HandleID="k8s-pod-network.3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Workload="localhost-k8s-csi--node--driver--6px7x-eth0" Mar 14 00:43:18.070298 containerd[1476]: 2026-03-14 00:43:18.014 [INFO][5926] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:43:18.070298 containerd[1476]: 2026-03-14 00:43:18.015 [INFO][5926] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:43:18.070298 containerd[1476]: 2026-03-14 00:43:18.033 [WARNING][5926] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" HandleID="k8s-pod-network.3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Workload="localhost-k8s-csi--node--driver--6px7x-eth0" Mar 14 00:43:18.070298 containerd[1476]: 2026-03-14 00:43:18.033 [INFO][5926] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" HandleID="k8s-pod-network.3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Workload="localhost-k8s-csi--node--driver--6px7x-eth0" Mar 14 00:43:18.070298 containerd[1476]: 2026-03-14 00:43:18.039 [INFO][5926] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:43:18.070298 containerd[1476]: 2026-03-14 00:43:18.063 [INFO][5918] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Mar 14 00:43:18.070298 containerd[1476]: time="2026-03-14T00:43:18.070248236Z" level=info msg="TearDown network for sandbox \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\" successfully" Mar 14 00:43:18.070298 containerd[1476]: time="2026-03-14T00:43:18.070282831Z" level=info msg="StopPodSandbox for \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\" returns successfully" Mar 14 00:43:18.074066 containerd[1476]: time="2026-03-14T00:43:18.073715796Z" level=info msg="RemovePodSandbox for \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\"" Mar 14 00:43:18.074066 containerd[1476]: time="2026-03-14T00:43:18.073973575Z" level=info msg="Forcibly stopping sandbox \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\"" Mar 14 00:43:18.339013 containerd[1476]: 2026-03-14 00:43:18.227 [WARNING][5943] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6px7x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fec3ef51-27dd-462a-9b02-64ae702e6505", ResourceVersion:"1262", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d8fec0e33086e080b337ef3aeb8d3a0fecfdfd60aae8193c3dfa5a63636f2bd", Pod:"csi-node-driver-6px7x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaca262809ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:43:18.339013 containerd[1476]: 2026-03-14 00:43:18.228 [INFO][5943] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Mar 14 00:43:18.339013 containerd[1476]: 2026-03-14 00:43:18.228 [INFO][5943] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" iface="eth0" netns="" Mar 14 00:43:18.339013 containerd[1476]: 2026-03-14 00:43:18.228 [INFO][5943] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Mar 14 00:43:18.339013 containerd[1476]: 2026-03-14 00:43:18.228 [INFO][5943] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Mar 14 00:43:18.339013 containerd[1476]: 2026-03-14 00:43:18.309 [INFO][5952] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" HandleID="k8s-pod-network.3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Workload="localhost-k8s-csi--node--driver--6px7x-eth0" Mar 14 00:43:18.339013 containerd[1476]: 2026-03-14 00:43:18.310 [INFO][5952] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:43:18.339013 containerd[1476]: 2026-03-14 00:43:18.310 [INFO][5952] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:43:18.339013 containerd[1476]: 2026-03-14 00:43:18.324 [WARNING][5952] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" HandleID="k8s-pod-network.3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Workload="localhost-k8s-csi--node--driver--6px7x-eth0" Mar 14 00:43:18.339013 containerd[1476]: 2026-03-14 00:43:18.324 [INFO][5952] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" HandleID="k8s-pod-network.3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Workload="localhost-k8s-csi--node--driver--6px7x-eth0" Mar 14 00:43:18.339013 containerd[1476]: 2026-03-14 00:43:18.329 [INFO][5952] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:43:18.339013 containerd[1476]: 2026-03-14 00:43:18.333 [INFO][5943] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f" Mar 14 00:43:18.339013 containerd[1476]: time="2026-03-14T00:43:18.338679632Z" level=info msg="TearDown network for sandbox \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\" successfully" Mar 14 00:43:18.370081 containerd[1476]: time="2026-03-14T00:43:18.368951260Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:43:18.370081 containerd[1476]: time="2026-03-14T00:43:18.369088325Z" level=info msg="RemovePodSandbox \"3284ebeb76af5de00e6edb60fcf6afc915b3b4257270cbe62b29f64ba20e4b2f\" returns successfully" Mar 14 00:43:18.372671 containerd[1476]: time="2026-03-14T00:43:18.371790919Z" level=info msg="StopPodSandbox for \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\"" Mar 14 00:43:18.637261 containerd[1476]: 2026-03-14 00:43:18.487 [WARNING][5969] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0", GenerateName:"calico-apiserver-6cc9b6b4b7-", Namespace:"calico-system", SelfLink:"", UID:"bc2b7217-6287-4581-a381-8909f4a1c133", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc9b6b4b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445", Pod:"calico-apiserver-6cc9b6b4b7-hm5mx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidf922a010e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:43:18.637261 containerd[1476]: 2026-03-14 00:43:18.489 [INFO][5969] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Mar 14 00:43:18.637261 containerd[1476]: 2026-03-14 00:43:18.489 [INFO][5969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" iface="eth0" netns="" Mar 14 00:43:18.637261 containerd[1476]: 2026-03-14 00:43:18.489 [INFO][5969] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Mar 14 00:43:18.637261 containerd[1476]: 2026-03-14 00:43:18.490 [INFO][5969] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Mar 14 00:43:18.637261 containerd[1476]: 2026-03-14 00:43:18.603 [INFO][5978] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" HandleID="k8s-pod-network.6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" Mar 14 00:43:18.637261 containerd[1476]: 2026-03-14 00:43:18.604 [INFO][5978] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:43:18.637261 containerd[1476]: 2026-03-14 00:43:18.604 [INFO][5978] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:43:18.637261 containerd[1476]: 2026-03-14 00:43:18.615 [WARNING][5978] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" HandleID="k8s-pod-network.6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" Mar 14 00:43:18.637261 containerd[1476]: 2026-03-14 00:43:18.615 [INFO][5978] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" HandleID="k8s-pod-network.6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" Mar 14 00:43:18.637261 containerd[1476]: 2026-03-14 00:43:18.619 [INFO][5978] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:43:18.637261 containerd[1476]: 2026-03-14 00:43:18.627 [INFO][5969] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Mar 14 00:43:18.637261 containerd[1476]: time="2026-03-14T00:43:18.637088934Z" level=info msg="TearDown network for sandbox \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\" successfully" Mar 14 00:43:18.637261 containerd[1476]: time="2026-03-14T00:43:18.637122367Z" level=info msg="StopPodSandbox for \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\" returns successfully" Mar 14 00:43:18.640110 containerd[1476]: time="2026-03-14T00:43:18.640010598Z" level=info msg="RemovePodSandbox for \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\"" Mar 14 00:43:18.640110 containerd[1476]: time="2026-03-14T00:43:18.640058728Z" level=info msg="Forcibly stopping sandbox \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\"" Mar 14 00:43:18.921174 containerd[1476]: 2026-03-14 00:43:18.793 [WARNING][5996] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0", GenerateName:"calico-apiserver-6cc9b6b4b7-", Namespace:"calico-system", SelfLink:"", UID:"bc2b7217-6287-4581-a381-8909f4a1c133", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc9b6b4b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ec54674f702ceeafdfddce5dae5cd59cf21ba68d4b22e1330d30e15b72c2445", Pod:"calico-apiserver-6cc9b6b4b7-hm5mx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidf922a010e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:43:18.921174 containerd[1476]: 2026-03-14 00:43:18.793 [INFO][5996] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Mar 14 00:43:18.921174 containerd[1476]: 2026-03-14 00:43:18.793 [INFO][5996] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" iface="eth0" netns="" Mar 14 00:43:18.921174 containerd[1476]: 2026-03-14 00:43:18.793 [INFO][5996] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Mar 14 00:43:18.921174 containerd[1476]: 2026-03-14 00:43:18.793 [INFO][5996] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Mar 14 00:43:18.921174 containerd[1476]: 2026-03-14 00:43:18.871 [INFO][6004] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" HandleID="k8s-pod-network.6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" Mar 14 00:43:18.921174 containerd[1476]: 2026-03-14 00:43:18.871 [INFO][6004] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:43:18.921174 containerd[1476]: 2026-03-14 00:43:18.871 [INFO][6004] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:43:18.921174 containerd[1476]: 2026-03-14 00:43:18.900 [WARNING][6004] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" HandleID="k8s-pod-network.6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" Mar 14 00:43:18.921174 containerd[1476]: 2026-03-14 00:43:18.900 [INFO][6004] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" HandleID="k8s-pod-network.6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Workload="localhost-k8s-calico--apiserver--6cc9b6b4b7--hm5mx-eth0" Mar 14 00:43:18.921174 containerd[1476]: 2026-03-14 00:43:18.908 [INFO][6004] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:43:18.921174 containerd[1476]: 2026-03-14 00:43:18.914 [INFO][5996] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627" Mar 14 00:43:18.921174 containerd[1476]: time="2026-03-14T00:43:18.920017253Z" level=info msg="TearDown network for sandbox \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\" successfully" Mar 14 00:43:18.928690 containerd[1476]: time="2026-03-14T00:43:18.928281866Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:43:18.928690 containerd[1476]: time="2026-03-14T00:43:18.928493269Z" level=info msg="RemovePodSandbox \"6f7a917baf2b562381838baeff96eb8396dfa6304b774246de38b1ad00ace627\" returns successfully" Mar 14 00:43:18.930192 containerd[1476]: time="2026-03-14T00:43:18.929916715Z" level=info msg="StopPodSandbox for \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\"" Mar 14 00:43:19.178572 containerd[1476]: 2026-03-14 00:43:19.070 [WARNING][6021] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--d76st-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2b85a316-b0b6-4829-b096-e789a4c3c1b0", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d", Pod:"coredns-7d764666f9-d76st", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b1475f8735", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:43:19.178572 containerd[1476]: 2026-03-14 00:43:19.070 [INFO][6021] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Mar 14 00:43:19.178572 containerd[1476]: 2026-03-14 00:43:19.070 [INFO][6021] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" iface="eth0" netns="" Mar 14 00:43:19.178572 containerd[1476]: 2026-03-14 00:43:19.070 [INFO][6021] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Mar 14 00:43:19.178572 containerd[1476]: 2026-03-14 00:43:19.070 [INFO][6021] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Mar 14 00:43:19.178572 containerd[1476]: 2026-03-14 00:43:19.140 [INFO][6029] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" HandleID="k8s-pod-network.fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Workload="localhost-k8s-coredns--7d764666f9--d76st-eth0" Mar 14 00:43:19.178572 containerd[1476]: 2026-03-14 00:43:19.140 [INFO][6029] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:43:19.178572 containerd[1476]: 2026-03-14 00:43:19.140 [INFO][6029] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:43:19.178572 containerd[1476]: 2026-03-14 00:43:19.163 [WARNING][6029] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" HandleID="k8s-pod-network.fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Workload="localhost-k8s-coredns--7d764666f9--d76st-eth0" Mar 14 00:43:19.178572 containerd[1476]: 2026-03-14 00:43:19.163 [INFO][6029] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" HandleID="k8s-pod-network.fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Workload="localhost-k8s-coredns--7d764666f9--d76st-eth0" Mar 14 00:43:19.178572 containerd[1476]: 2026-03-14 00:43:19.168 [INFO][6029] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:43:19.178572 containerd[1476]: 2026-03-14 00:43:19.172 [INFO][6021] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Mar 14 00:43:19.178572 containerd[1476]: time="2026-03-14T00:43:19.178174207Z" level=info msg="TearDown network for sandbox \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\" successfully" Mar 14 00:43:19.178572 containerd[1476]: time="2026-03-14T00:43:19.178212277Z" level=info msg="StopPodSandbox for \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\" returns successfully" Mar 14 00:43:19.181263 containerd[1476]: time="2026-03-14T00:43:19.179377151Z" level=info msg="RemovePodSandbox for \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\"" Mar 14 00:43:19.181263 containerd[1476]: time="2026-03-14T00:43:19.179508344Z" level=info msg="Forcibly stopping sandbox \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\"" Mar 14 00:43:19.411588 containerd[1476]: 2026-03-14 00:43:19.330 [WARNING][6052] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--d76st-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2b85a316-b0b6-4829-b096-e789a4c3c1b0", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53bd2c850929048e0d236497f6b5f39d100048fca029f2364d7abdd23f99352d", Pod:"coredns-7d764666f9-d76st", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b1475f8735", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:43:19.411588 containerd[1476]: 2026-03-14 00:43:19.331 [INFO][6052] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Mar 14 00:43:19.411588 containerd[1476]: 2026-03-14 00:43:19.331 [INFO][6052] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" iface="eth0" netns="" Mar 14 00:43:19.411588 containerd[1476]: 2026-03-14 00:43:19.331 [INFO][6052] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Mar 14 00:43:19.411588 containerd[1476]: 2026-03-14 00:43:19.331 [INFO][6052] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Mar 14 00:43:19.411588 containerd[1476]: 2026-03-14 00:43:19.386 [INFO][6093] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" HandleID="k8s-pod-network.fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Workload="localhost-k8s-coredns--7d764666f9--d76st-eth0" Mar 14 00:43:19.411588 containerd[1476]: 2026-03-14 00:43:19.387 [INFO][6093] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:43:19.411588 containerd[1476]: 2026-03-14 00:43:19.387 [INFO][6093] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:43:19.411588 containerd[1476]: 2026-03-14 00:43:19.398 [WARNING][6093] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" HandleID="k8s-pod-network.fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Workload="localhost-k8s-coredns--7d764666f9--d76st-eth0" Mar 14 00:43:19.411588 containerd[1476]: 2026-03-14 00:43:19.398 [INFO][6093] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" HandleID="k8s-pod-network.fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Workload="localhost-k8s-coredns--7d764666f9--d76st-eth0" Mar 14 00:43:19.411588 containerd[1476]: 2026-03-14 00:43:19.401 [INFO][6093] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:43:19.411588 containerd[1476]: 2026-03-14 00:43:19.405 [INFO][6052] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf" Mar 14 00:43:19.411588 containerd[1476]: time="2026-03-14T00:43:19.411526719Z" level=info msg="TearDown network for sandbox \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\" successfully" Mar 14 00:43:19.418983 containerd[1476]: time="2026-03-14T00:43:19.418916314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:43:19.419192 containerd[1476]: time="2026-03-14T00:43:19.419043822Z" level=info msg="RemovePodSandbox \"fb71757549a153ad31bfbef1ec0c2c54a7e23e6db0af0a92ac017f39702db0bf\" returns successfully" Mar 14 00:43:19.420076 containerd[1476]: time="2026-03-14T00:43:19.419997774Z" level=info msg="StopPodSandbox for \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\"" Mar 14 00:43:19.581086 containerd[1476]: 2026-03-14 00:43:19.495 [WARNING][6111] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" WorkloadEndpoint="localhost-k8s-whisker--59546697b8--v628l-eth0" Mar 14 00:43:19.581086 containerd[1476]: 2026-03-14 00:43:19.496 [INFO][6111] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Mar 14 00:43:19.581086 containerd[1476]: 2026-03-14 00:43:19.496 [INFO][6111] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" iface="eth0" netns="" Mar 14 00:43:19.581086 containerd[1476]: 2026-03-14 00:43:19.496 [INFO][6111] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Mar 14 00:43:19.581086 containerd[1476]: 2026-03-14 00:43:19.496 [INFO][6111] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Mar 14 00:43:19.581086 containerd[1476]: 2026-03-14 00:43:19.552 [INFO][6119] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" HandleID="k8s-pod-network.54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Workload="localhost-k8s-whisker--59546697b8--v628l-eth0" Mar 14 00:43:19.581086 containerd[1476]: 2026-03-14 00:43:19.552 [INFO][6119] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:43:19.581086 containerd[1476]: 2026-03-14 00:43:19.552 [INFO][6119] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:43:19.581086 containerd[1476]: 2026-03-14 00:43:19.565 [WARNING][6119] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" HandleID="k8s-pod-network.54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Workload="localhost-k8s-whisker--59546697b8--v628l-eth0" Mar 14 00:43:19.581086 containerd[1476]: 2026-03-14 00:43:19.565 [INFO][6119] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" HandleID="k8s-pod-network.54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Workload="localhost-k8s-whisker--59546697b8--v628l-eth0" Mar 14 00:43:19.581086 containerd[1476]: 2026-03-14 00:43:19.568 [INFO][6119] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:43:19.581086 containerd[1476]: 2026-03-14 00:43:19.574 [INFO][6111] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Mar 14 00:43:19.581086 containerd[1476]: time="2026-03-14T00:43:19.581021485Z" level=info msg="TearDown network for sandbox \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\" successfully" Mar 14 00:43:19.581086 containerd[1476]: time="2026-03-14T00:43:19.581054066Z" level=info msg="StopPodSandbox for \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\" returns successfully" Mar 14 00:43:19.593893 containerd[1476]: time="2026-03-14T00:43:19.593681783Z" level=info msg="RemovePodSandbox for \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\"" Mar 14 00:43:19.593893 containerd[1476]: time="2026-03-14T00:43:19.593773813Z" level=info msg="Forcibly stopping sandbox \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\"" Mar 14 00:43:19.783743 containerd[1476]: 2026-03-14 00:43:19.679 [WARNING][6136] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" WorkloadEndpoint="localhost-k8s-whisker--59546697b8--v628l-eth0" Mar 14 00:43:19.783743 containerd[1476]: 2026-03-14 00:43:19.680 [INFO][6136] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Mar 14 00:43:19.783743 containerd[1476]: 2026-03-14 00:43:19.680 [INFO][6136] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" iface="eth0" netns="" Mar 14 00:43:19.783743 containerd[1476]: 2026-03-14 00:43:19.680 [INFO][6136] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Mar 14 00:43:19.783743 containerd[1476]: 2026-03-14 00:43:19.680 [INFO][6136] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Mar 14 00:43:19.783743 containerd[1476]: 2026-03-14 00:43:19.758 [INFO][6145] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" HandleID="k8s-pod-network.54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Workload="localhost-k8s-whisker--59546697b8--v628l-eth0" Mar 14 00:43:19.783743 containerd[1476]: 2026-03-14 00:43:19.758 [INFO][6145] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:43:19.783743 containerd[1476]: 2026-03-14 00:43:19.758 [INFO][6145] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:43:19.783743 containerd[1476]: 2026-03-14 00:43:19.770 [WARNING][6145] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" HandleID="k8s-pod-network.54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Workload="localhost-k8s-whisker--59546697b8--v628l-eth0" Mar 14 00:43:19.783743 containerd[1476]: 2026-03-14 00:43:19.770 [INFO][6145] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" HandleID="k8s-pod-network.54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Workload="localhost-k8s-whisker--59546697b8--v628l-eth0" Mar 14 00:43:19.783743 containerd[1476]: 2026-03-14 00:43:19.775 [INFO][6145] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:43:19.783743 containerd[1476]: 2026-03-14 00:43:19.779 [INFO][6136] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f" Mar 14 00:43:19.783743 containerd[1476]: time="2026-03-14T00:43:19.783098012Z" level=info msg="TearDown network for sandbox \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\" successfully" Mar 14 00:43:19.791108 containerd[1476]: time="2026-03-14T00:43:19.790933206Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:43:19.791108 containerd[1476]: time="2026-03-14T00:43:19.791061004Z" level=info msg="RemovePodSandbox \"54d9efac94085eca9294123bfd5f1eedae7cab4811f10eae87614a49f93b408f\" returns successfully" Mar 14 00:43:19.792545 containerd[1476]: time="2026-03-14T00:43:19.792328913Z" level=info msg="StopPodSandbox for \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\"" Mar 14 00:43:19.999592 containerd[1476]: 2026-03-14 00:43:19.884 [WARNING][6161] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--7zsnh-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"5c89d384-afe9-4799-8684-82936ec1efea", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24", Pod:"coredns-7d764666f9-7zsnh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif02273d99b3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:43:19.999592 containerd[1476]: 2026-03-14 00:43:19.885 [INFO][6161] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Mar 14 00:43:19.999592 containerd[1476]: 2026-03-14 00:43:19.885 [INFO][6161] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" iface="eth0" netns="" Mar 14 00:43:19.999592 containerd[1476]: 2026-03-14 00:43:19.885 [INFO][6161] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Mar 14 00:43:19.999592 containerd[1476]: 2026-03-14 00:43:19.885 [INFO][6161] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Mar 14 00:43:19.999592 containerd[1476]: 2026-03-14 00:43:19.975 [INFO][6170] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" HandleID="k8s-pod-network.b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Workload="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" Mar 14 00:43:19.999592 containerd[1476]: 2026-03-14 00:43:19.975 [INFO][6170] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:43:19.999592 containerd[1476]: 2026-03-14 00:43:19.976 [INFO][6170] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:43:19.999592 containerd[1476]: 2026-03-14 00:43:19.988 [WARNING][6170] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" HandleID="k8s-pod-network.b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Workload="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" Mar 14 00:43:19.999592 containerd[1476]: 2026-03-14 00:43:19.989 [INFO][6170] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" HandleID="k8s-pod-network.b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Workload="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" Mar 14 00:43:19.999592 containerd[1476]: 2026-03-14 00:43:19.992 [INFO][6170] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:43:19.999592 containerd[1476]: 2026-03-14 00:43:19.995 [INFO][6161] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Mar 14 00:43:20.001333 containerd[1476]: time="2026-03-14T00:43:19.999662784Z" level=info msg="TearDown network for sandbox \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\" successfully" Mar 14 00:43:20.001333 containerd[1476]: time="2026-03-14T00:43:19.999699091Z" level=info msg="StopPodSandbox for \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\" returns successfully" Mar 14 00:43:20.001333 containerd[1476]: time="2026-03-14T00:43:20.000776987Z" level=info msg="RemovePodSandbox for \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\"" Mar 14 00:43:20.001333 containerd[1476]: time="2026-03-14T00:43:20.000890918Z" level=info msg="Forcibly stopping sandbox \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\"" Mar 14 00:43:20.086142 systemd[1]: Started sshd@14-10.0.0.138:22-10.0.0.1:60936.service - OpenSSH per-connection server daemon (10.0.0.1:60936). 
Mar 14 00:43:20.139689 containerd[1476]: 2026-03-14 00:43:20.072 [WARNING][6187] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--7zsnh-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"5c89d384-afe9-4799-8684-82936ec1efea", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf48a10e3537c698db03c806822869105ff9f9d92ca31c23d5de906f41df5f24", Pod:"coredns-7d764666f9-7zsnh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif02273d99b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:43:20.139689 containerd[1476]: 2026-03-14 00:43:20.073 [INFO][6187] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Mar 14 00:43:20.139689 containerd[1476]: 2026-03-14 00:43:20.073 [INFO][6187] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" iface="eth0" netns="" Mar 14 00:43:20.139689 containerd[1476]: 2026-03-14 00:43:20.073 [INFO][6187] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Mar 14 00:43:20.139689 containerd[1476]: 2026-03-14 00:43:20.073 [INFO][6187] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Mar 14 00:43:20.139689 containerd[1476]: 2026-03-14 00:43:20.117 [INFO][6196] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" HandleID="k8s-pod-network.b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Workload="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" Mar 14 00:43:20.139689 containerd[1476]: 2026-03-14 00:43:20.117 [INFO][6196] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:43:20.139689 containerd[1476]: 2026-03-14 00:43:20.117 [INFO][6196] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:43:20.139689 containerd[1476]: 2026-03-14 00:43:20.128 [WARNING][6196] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" HandleID="k8s-pod-network.b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Workload="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" Mar 14 00:43:20.139689 containerd[1476]: 2026-03-14 00:43:20.128 [INFO][6196] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" HandleID="k8s-pod-network.b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Workload="localhost-k8s-coredns--7d764666f9--7zsnh-eth0" Mar 14 00:43:20.139689 containerd[1476]: 2026-03-14 00:43:20.132 [INFO][6196] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:43:20.139689 containerd[1476]: 2026-03-14 00:43:20.135 [INFO][6187] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21" Mar 14 00:43:20.139689 containerd[1476]: time="2026-03-14T00:43:20.139624650Z" level=info msg="TearDown network for sandbox \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\" successfully" Mar 14 00:43:20.154088 containerd[1476]: time="2026-03-14T00:43:20.153930761Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:43:20.154088 containerd[1476]: time="2026-03-14T00:43:20.154079178Z" level=info msg="RemovePodSandbox \"b53d1cef7d4a6ce76de78b5e7a5104e5254de5c82162ad5daeb08d3bc8b34e21\" returns successfully" Mar 14 00:43:20.160297 containerd[1476]: time="2026-03-14T00:43:20.160045376Z" level=info msg="StopPodSandbox for \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\"" Mar 14 00:43:20.192642 sshd[6201]: Accepted publickey for core from 10.0.0.1 port 60936 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:43:20.196267 sshd[6201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:43:20.207292 systemd-logind[1459]: New session 15 of user core. Mar 14 00:43:20.210247 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 14 00:43:20.366288 containerd[1476]: 2026-03-14 00:43:20.245 [WARNING][6215] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--5srn6-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"c9976b22-5c21-48c9-8ce0-e8ba67196cf5", ResourceVersion:"1334", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12", Pod:"goldmane-9f7667bb8-5srn6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d20d2524e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:43:20.366288 containerd[1476]: 2026-03-14 00:43:20.249 [INFO][6215] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Mar 14 00:43:20.366288 containerd[1476]: 2026-03-14 00:43:20.249 [INFO][6215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" iface="eth0" netns="" Mar 14 00:43:20.366288 containerd[1476]: 2026-03-14 00:43:20.249 [INFO][6215] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Mar 14 00:43:20.366288 containerd[1476]: 2026-03-14 00:43:20.249 [INFO][6215] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Mar 14 00:43:20.366288 containerd[1476]: 2026-03-14 00:43:20.334 [INFO][6225] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" HandleID="k8s-pod-network.60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Workload="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" Mar 14 00:43:20.366288 containerd[1476]: 2026-03-14 00:43:20.338 [INFO][6225] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:43:20.366288 containerd[1476]: 2026-03-14 00:43:20.338 [INFO][6225] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:43:20.366288 containerd[1476]: 2026-03-14 00:43:20.355 [WARNING][6225] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" HandleID="k8s-pod-network.60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Workload="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" Mar 14 00:43:20.366288 containerd[1476]: 2026-03-14 00:43:20.355 [INFO][6225] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" HandleID="k8s-pod-network.60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Workload="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" Mar 14 00:43:20.366288 containerd[1476]: 2026-03-14 00:43:20.358 [INFO][6225] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:43:20.366288 containerd[1476]: 2026-03-14 00:43:20.362 [INFO][6215] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Mar 14 00:43:20.367976 containerd[1476]: time="2026-03-14T00:43:20.367104416Z" level=info msg="TearDown network for sandbox \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\" successfully" Mar 14 00:43:20.367976 containerd[1476]: time="2026-03-14T00:43:20.367143328Z" level=info msg="StopPodSandbox for \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\" returns successfully" Mar 14 00:43:20.368782 containerd[1476]: time="2026-03-14T00:43:20.368270667Z" level=info msg="RemovePodSandbox for \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\"" Mar 14 00:43:20.368782 containerd[1476]: time="2026-03-14T00:43:20.368307306Z" level=info msg="Forcibly stopping sandbox \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\"" Mar 14 00:43:20.502233 sshd[6201]: pam_unix(sshd:session): session closed for user core Mar 14 00:43:20.514301 systemd[1]: sshd@14-10.0.0.138:22-10.0.0.1:60936.service: Deactivated successfully. Mar 14 00:43:20.518524 systemd[1]: session-15.scope: Deactivated successfully. Mar 14 00:43:20.521732 systemd-logind[1459]: Session 15 logged out. Waiting for processes to exit. Mar 14 00:43:20.531571 containerd[1476]: 2026-03-14 00:43:20.451 [WARNING][6251] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--5srn6-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"c9976b22-5c21-48c9-8ce0-e8ba67196cf5", ResourceVersion:"1334", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ee65fbd84028225028614e078cf2ba0c2ea6e643dfab3a53ba3b1dcaf194e12", Pod:"goldmane-9f7667bb8-5srn6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d20d2524e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:43:20.531571 containerd[1476]: 2026-03-14 00:43:20.452 [INFO][6251] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Mar 14 00:43:20.531571 containerd[1476]: 2026-03-14 00:43:20.452 [INFO][6251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" iface="eth0" netns="" Mar 14 00:43:20.531571 containerd[1476]: 2026-03-14 00:43:20.452 [INFO][6251] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Mar 14 00:43:20.531571 containerd[1476]: 2026-03-14 00:43:20.452 [INFO][6251] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Mar 14 00:43:20.531571 containerd[1476]: 2026-03-14 00:43:20.503 [INFO][6261] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" HandleID="k8s-pod-network.60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Workload="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" Mar 14 00:43:20.531571 containerd[1476]: 2026-03-14 00:43:20.503 [INFO][6261] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:43:20.531571 containerd[1476]: 2026-03-14 00:43:20.503 [INFO][6261] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:43:20.531571 containerd[1476]: 2026-03-14 00:43:20.517 [WARNING][6261] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" HandleID="k8s-pod-network.60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Workload="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" Mar 14 00:43:20.531571 containerd[1476]: 2026-03-14 00:43:20.517 [INFO][6261] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" HandleID="k8s-pod-network.60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Workload="localhost-k8s-goldmane--9f7667bb8--5srn6-eth0" Mar 14 00:43:20.531571 containerd[1476]: 2026-03-14 00:43:20.521 [INFO][6261] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:43:20.531571 containerd[1476]: 2026-03-14 00:43:20.526 [INFO][6251] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad" Mar 14 00:43:20.531571 containerd[1476]: time="2026-03-14T00:43:20.531456384Z" level=info msg="TearDown network for sandbox \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\" successfully" Mar 14 00:43:20.534304 systemd[1]: Started sshd@15-10.0.0.138:22-10.0.0.1:60946.service - OpenSSH per-connection server daemon (10.0.0.1:60946). Mar 14 00:43:20.536603 systemd-logind[1459]: Removed session 15. Mar 14 00:43:20.538975 containerd[1476]: time="2026-03-14T00:43:20.538762781Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:43:20.539176 containerd[1476]: time="2026-03-14T00:43:20.538996004Z" level=info msg="RemovePodSandbox \"60ad6f7f906533830e93674683973314158e3bc2d84558a3500bc55870a222ad\" returns successfully" Mar 14 00:43:20.539958 containerd[1476]: time="2026-03-14T00:43:20.539911046Z" level=info msg="StopPodSandbox for \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\"" Mar 14 00:43:20.606658 sshd[6272]: Accepted publickey for core from 10.0.0.1 port 60946 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:43:20.608321 sshd[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:43:20.617961 systemd-logind[1459]: New session 16 of user core. Mar 14 00:43:20.625099 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 14 00:43:20.713700 containerd[1476]: 2026-03-14 00:43:20.614 [WARNING][6283] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0", GenerateName:"calico-kube-controllers-c6bf9555f-", Namespace:"calico-system", SelfLink:"", UID:"3d73419e-01c4-49cb-a46c-827ae2b5174f", ResourceVersion:"1248", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c6bf9555f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167", Pod:"calico-kube-controllers-c6bf9555f-grh9k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid5d0ccf0978", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:43:20.713700 containerd[1476]: 2026-03-14 00:43:20.614 [INFO][6283] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Mar 14 00:43:20.713700 containerd[1476]: 2026-03-14 00:43:20.614 [INFO][6283] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" iface="eth0" netns="" Mar 14 00:43:20.713700 containerd[1476]: 2026-03-14 00:43:20.614 [INFO][6283] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Mar 14 00:43:20.713700 containerd[1476]: 2026-03-14 00:43:20.615 [INFO][6283] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Mar 14 00:43:20.713700 containerd[1476]: 2026-03-14 00:43:20.683 [INFO][6294] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" HandleID="k8s-pod-network.91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Workload="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0" Mar 14 00:43:20.713700 containerd[1476]: 2026-03-14 00:43:20.683 [INFO][6294] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:43:20.713700 containerd[1476]: 2026-03-14 00:43:20.683 [INFO][6294] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:43:20.713700 containerd[1476]: 2026-03-14 00:43:20.694 [WARNING][6294] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" HandleID="k8s-pod-network.91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Workload="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0" Mar 14 00:43:20.713700 containerd[1476]: 2026-03-14 00:43:20.694 [INFO][6294] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" HandleID="k8s-pod-network.91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Workload="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0" Mar 14 00:43:20.713700 containerd[1476]: 2026-03-14 00:43:20.700 [INFO][6294] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:43:20.713700 containerd[1476]: 2026-03-14 00:43:20.709 [INFO][6283] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Mar 14 00:43:20.713700 containerd[1476]: time="2026-03-14T00:43:20.713681507Z" level=info msg="TearDown network for sandbox \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\" successfully" Mar 14 00:43:20.714373 containerd[1476]: time="2026-03-14T00:43:20.713716982Z" level=info msg="StopPodSandbox for \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\" returns successfully" Mar 14 00:43:20.714373 containerd[1476]: time="2026-03-14T00:43:20.714359648Z" level=info msg="RemovePodSandbox for \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\"" Mar 14 00:43:20.714582 containerd[1476]: time="2026-03-14T00:43:20.714446400Z" level=info msg="Forcibly stopping sandbox \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\"" Mar 14 00:43:20.923241 containerd[1476]: 2026-03-14 00:43:20.808 [WARNING][6318] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0", GenerateName:"calico-kube-controllers-c6bf9555f-", Namespace:"calico-system", SelfLink:"", UID:"3d73419e-01c4-49cb-a46c-827ae2b5174f", ResourceVersion:"1248", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 41, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c6bf9555f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a7c72966473ce1da035d53cda7c2f3799e337f999a2cf20751611c6f1b53167", Pod:"calico-kube-controllers-c6bf9555f-grh9k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid5d0ccf0978", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:43:20.923241 containerd[1476]: 2026-03-14 00:43:20.809 [INFO][6318] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Mar 14 00:43:20.923241 containerd[1476]: 2026-03-14 00:43:20.809 [INFO][6318] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" iface="eth0" netns="" Mar 14 00:43:20.923241 containerd[1476]: 2026-03-14 00:43:20.809 [INFO][6318] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Mar 14 00:43:20.923241 containerd[1476]: 2026-03-14 00:43:20.809 [INFO][6318] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Mar 14 00:43:20.923241 containerd[1476]: 2026-03-14 00:43:20.893 [INFO][6326] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" HandleID="k8s-pod-network.91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Workload="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0" Mar 14 00:43:20.923241 containerd[1476]: 2026-03-14 00:43:20.894 [INFO][6326] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:43:20.923241 containerd[1476]: 2026-03-14 00:43:20.894 [INFO][6326] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:43:20.923241 containerd[1476]: 2026-03-14 00:43:20.908 [WARNING][6326] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" HandleID="k8s-pod-network.91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Workload="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0" Mar 14 00:43:20.923241 containerd[1476]: 2026-03-14 00:43:20.908 [INFO][6326] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" HandleID="k8s-pod-network.91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Workload="localhost-k8s-calico--kube--controllers--c6bf9555f--grh9k-eth0" Mar 14 00:43:20.923241 containerd[1476]: 2026-03-14 00:43:20.912 [INFO][6326] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:43:20.923241 containerd[1476]: 2026-03-14 00:43:20.917 [INFO][6318] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472" Mar 14 00:43:20.924482 containerd[1476]: time="2026-03-14T00:43:20.924322521Z" level=info msg="TearDown network for sandbox \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\" successfully" Mar 14 00:43:20.938602 containerd[1476]: time="2026-03-14T00:43:20.935625889Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:43:20.938602 containerd[1476]: time="2026-03-14T00:43:20.935730775Z" level=info msg="RemovePodSandbox \"91b126cc3d2eef282e007100c21aa9233a76698ae1cf08b64e114af79dbc2472\" returns successfully" Mar 14 00:43:20.975945 sshd[6272]: pam_unix(sshd:session): session closed for user core Mar 14 00:43:20.987657 systemd[1]: sshd@15-10.0.0.138:22-10.0.0.1:60946.service: Deactivated successfully. Mar 14 00:43:20.993073 systemd[1]: session-16.scope: Deactivated successfully. 
Mar 14 00:43:20.995448 systemd-logind[1459]: Session 16 logged out. Waiting for processes to exit. Mar 14 00:43:21.008343 systemd[1]: Started sshd@16-10.0.0.138:22-10.0.0.1:60952.service - OpenSSH per-connection server daemon (10.0.0.1:60952). Mar 14 00:43:21.013074 systemd-logind[1459]: Removed session 16. Mar 14 00:43:21.101269 sshd[6338]: Accepted publickey for core from 10.0.0.1 port 60952 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:43:21.103787 sshd[6338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:43:21.114304 systemd-logind[1459]: New session 17 of user core. Mar 14 00:43:21.119176 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 14 00:43:21.373370 sshd[6338]: pam_unix(sshd:session): session closed for user core Mar 14 00:43:21.383042 systemd[1]: sshd@16-10.0.0.138:22-10.0.0.1:60952.service: Deactivated successfully. Mar 14 00:43:21.386686 systemd[1]: session-17.scope: Deactivated successfully. Mar 14 00:43:21.393677 systemd-logind[1459]: Session 17 logged out. Waiting for processes to exit. Mar 14 00:43:21.395995 systemd-logind[1459]: Removed session 17. Mar 14 00:43:26.388730 systemd[1]: Started sshd@17-10.0.0.138:22-10.0.0.1:60960.service - OpenSSH per-connection server daemon (10.0.0.1:60960). Mar 14 00:43:26.432099 sshd[6419]: Accepted publickey for core from 10.0.0.1 port 60960 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:43:26.434704 sshd[6419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:43:26.441766 systemd-logind[1459]: New session 18 of user core. Mar 14 00:43:26.455204 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 14 00:43:26.604533 sshd[6419]: pam_unix(sshd:session): session closed for user core Mar 14 00:43:26.611512 systemd[1]: sshd@17-10.0.0.138:22-10.0.0.1:60960.service: Deactivated successfully. 
Mar 14 00:43:26.614145 systemd[1]: session-18.scope: Deactivated successfully. Mar 14 00:43:26.615356 systemd-logind[1459]: Session 18 logged out. Waiting for processes to exit. Mar 14 00:43:26.617421 systemd-logind[1459]: Removed session 18. Mar 14 00:43:30.988059 kubelet[2564]: E0314 00:43:30.987711 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:43:31.647179 systemd[1]: Started sshd@18-10.0.0.138:22-10.0.0.1:47378.service - OpenSSH per-connection server daemon (10.0.0.1:47378). Mar 14 00:43:31.725315 sshd[6435]: Accepted publickey for core from 10.0.0.1 port 47378 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:43:31.729577 sshd[6435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:43:31.745541 systemd-logind[1459]: New session 19 of user core. Mar 14 00:43:31.754514 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 14 00:43:32.229153 sshd[6435]: pam_unix(sshd:session): session closed for user core Mar 14 00:43:32.241878 systemd[1]: sshd@18-10.0.0.138:22-10.0.0.1:47378.service: Deactivated successfully. Mar 14 00:43:32.244337 systemd[1]: session-19.scope: Deactivated successfully. Mar 14 00:43:32.248599 systemd-logind[1459]: Session 19 logged out. Waiting for processes to exit. Mar 14 00:43:32.274538 systemd[1]: Started sshd@19-10.0.0.138:22-10.0.0.1:47386.service - OpenSSH per-connection server daemon (10.0.0.1:47386). Mar 14 00:43:32.278940 systemd-logind[1459]: Removed session 19. Mar 14 00:43:32.327917 sshd[6449]: Accepted publickey for core from 10.0.0.1 port 47386 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:43:32.329625 sshd[6449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:43:32.339275 systemd-logind[1459]: New session 20 of user core. 
Mar 14 00:43:32.357126 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 14 00:43:33.075936 sshd[6449]: pam_unix(sshd:session): session closed for user core Mar 14 00:43:33.093798 systemd[1]: sshd@19-10.0.0.138:22-10.0.0.1:47386.service: Deactivated successfully. Mar 14 00:43:33.096591 systemd[1]: session-20.scope: Deactivated successfully. Mar 14 00:43:33.099306 systemd-logind[1459]: Session 20 logged out. Waiting for processes to exit. Mar 14 00:43:33.107451 systemd[1]: Started sshd@20-10.0.0.138:22-10.0.0.1:47392.service - OpenSSH per-connection server daemon (10.0.0.1:47392). Mar 14 00:43:33.108915 systemd-logind[1459]: Removed session 20. Mar 14 00:43:33.192079 sshd[6461]: Accepted publickey for core from 10.0.0.1 port 47392 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s Mar 14 00:43:33.194286 sshd[6461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:43:33.201993 systemd-logind[1459]: New session 21 of user core. Mar 14 00:43:33.214073 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 14 00:43:34.159662 sshd[6461]: pam_unix(sshd:session): session closed for user core Mar 14 00:43:34.172684 systemd[1]: sshd@20-10.0.0.138:22-10.0.0.1:47392.service: Deactivated successfully. Mar 14 00:43:34.177089 systemd[1]: session-21.scope: Deactivated successfully. Mar 14 00:43:34.182438 systemd-logind[1459]: Session 21 logged out. Waiting for processes to exit. Mar 14 00:43:34.191961 systemd[1]: Started sshd@21-10.0.0.138:22-10.0.0.1:47400.service - OpenSSH per-connection server daemon (10.0.0.1:47400). Mar 14 00:43:34.195344 systemd-logind[1459]: Removed session 21. 
Mar 14 00:43:34.247638 sshd[6491]: Accepted publickey for core from 10.0.0.1 port 47400 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s
Mar 14 00:43:34.250083 sshd[6491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:43:34.257920 systemd-logind[1459]: New session 22 of user core.
Mar 14 00:43:34.269081 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 14 00:43:34.820324 sshd[6491]: pam_unix(sshd:session): session closed for user core
Mar 14 00:43:34.832583 systemd[1]: sshd@21-10.0.0.138:22-10.0.0.1:47400.service: Deactivated successfully.
Mar 14 00:43:34.835596 systemd[1]: session-22.scope: Deactivated successfully.
Mar 14 00:43:34.840885 systemd-logind[1459]: Session 22 logged out. Waiting for processes to exit.
Mar 14 00:43:34.848337 systemd[1]: Started sshd@22-10.0.0.138:22-10.0.0.1:47414.service - OpenSSH per-connection server daemon (10.0.0.1:47414).
Mar 14 00:43:34.850585 systemd-logind[1459]: Removed session 22.
Mar 14 00:43:34.927060 sshd[6503]: Accepted publickey for core from 10.0.0.1 port 47414 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s
Mar 14 00:43:34.929495 sshd[6503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:43:34.936595 systemd-logind[1459]: New session 23 of user core.
Mar 14 00:43:34.948427 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 14 00:43:35.123622 sshd[6503]: pam_unix(sshd:session): session closed for user core
Mar 14 00:43:35.129247 systemd[1]: sshd@22-10.0.0.138:22-10.0.0.1:47414.service: Deactivated successfully.
Mar 14 00:43:35.132234 systemd[1]: session-23.scope: Deactivated successfully.
Mar 14 00:43:35.135734 systemd-logind[1459]: Session 23 logged out. Waiting for processes to exit.
Mar 14 00:43:35.138040 systemd-logind[1459]: Removed session 23.
Mar 14 00:43:40.142450 systemd[1]: Started sshd@23-10.0.0.138:22-10.0.0.1:43456.service - OpenSSH per-connection server daemon (10.0.0.1:43456).
Mar 14 00:43:40.188716 sshd[6520]: Accepted publickey for core from 10.0.0.1 port 43456 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s
Mar 14 00:43:40.190877 sshd[6520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:43:40.197222 systemd-logind[1459]: New session 24 of user core.
Mar 14 00:43:40.211213 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 14 00:43:40.371257 sshd[6520]: pam_unix(sshd:session): session closed for user core
Mar 14 00:43:40.379333 systemd[1]: sshd@23-10.0.0.138:22-10.0.0.1:43456.service: Deactivated successfully.
Mar 14 00:43:40.382733 systemd[1]: session-24.scope: Deactivated successfully.
Mar 14 00:43:40.384973 systemd-logind[1459]: Session 24 logged out. Waiting for processes to exit.
Mar 14 00:43:40.387909 systemd-logind[1459]: Removed session 24.
Mar 14 00:43:42.994921 kubelet[2564]: E0314 00:43:42.994735 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:43:45.383022 systemd[1]: Started sshd@24-10.0.0.138:22-10.0.0.1:43460.service - OpenSSH per-connection server daemon (10.0.0.1:43460).
Mar 14 00:43:45.457555 sshd[6557]: Accepted publickey for core from 10.0.0.1 port 43460 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s
Mar 14 00:43:45.460668 sshd[6557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:43:45.467320 systemd-logind[1459]: New session 25 of user core.
Mar 14 00:43:45.477168 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 14 00:43:45.645303 sshd[6557]: pam_unix(sshd:session): session closed for user core
Mar 14 00:43:45.650416 systemd[1]: sshd@24-10.0.0.138:22-10.0.0.1:43460.service: Deactivated successfully.
Mar 14 00:43:45.653137 systemd[1]: session-25.scope: Deactivated successfully.
Mar 14 00:43:45.654574 systemd-logind[1459]: Session 25 logged out. Waiting for processes to exit.
Mar 14 00:43:45.656615 systemd-logind[1459]: Removed session 25.
Mar 14 00:43:45.985584 kubelet[2564]: E0314 00:43:45.985287 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:43:50.658750 systemd[1]: Started sshd@25-10.0.0.138:22-10.0.0.1:45542.service - OpenSSH per-connection server daemon (10.0.0.1:45542).
Mar 14 00:43:50.703698 sshd[6581]: Accepted publickey for core from 10.0.0.1 port 45542 ssh2: RSA SHA256:WktYvSm9KHvviWJXHDElp60Y4FBKGH2yvUT9Trcim0s
Mar 14 00:43:50.705573 sshd[6581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:43:50.711456 systemd-logind[1459]: New session 26 of user core.
Mar 14 00:43:50.723015 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 14 00:43:50.853144 sshd[6581]: pam_unix(sshd:session): session closed for user core
Mar 14 00:43:50.857387 systemd[1]: sshd@25-10.0.0.138:22-10.0.0.1:45542.service: Deactivated successfully.
Mar 14 00:43:50.859451 systemd[1]: session-26.scope: Deactivated successfully.
Mar 14 00:43:50.861648 systemd-logind[1459]: Session 26 logged out. Waiting for processes to exit.
Mar 14 00:43:50.863475 systemd-logind[1459]: Removed session 26.